A Small Institute

The Ansible scripts herein configure a small institute's hosts according to their roles in the institute's network of public and private servers. The network topology allows the institute to present an expendable public face (easily wiped clean) while maintaining a secure and private campus that can function with or without the Internet.

1. Overview

This small institute has a public server on the Internet, Front, that handles the institute's email, web site, and cloud. Front is small, cheap, and expendable, contains only public information, and functions mostly as a VPN server relaying to a campus network.

The campus network is one or more machines physically connected via Ethernet (or a similarly secure medium) for private, un-encrypted communication in a core locality. One of the machines on this Ethernet is Core, the small institute's main server. Core provides a number of essential localnet services (DHCP, DNS, NTP), and a private, campus web site. It is also the home of the institute cloud and is where all of the institute's data actually reside. When the campus ISP (Internet Service Provider) is connected, a separate host, Gate, routes campus traffic to the ISP (via NAT). Through Gate, Core connects to Front making the institute email, cloud, etc. available to members off campus.

                =                                                   
              _|||_                                                 
        =-The-Institute-=                                           
          =   =   =   =                                             
          =   =   =   =                                             
        =====-Front-=====                                           
                |                                                   
        -----------------                                           
      (                   )                                         
     (   The Internet(s)   )----(Hotel Wi-Fi)                       
      (                   )        |                                
        -----------------          +----Member's notebook off campus
                |                                                   
=============== | ==================================================
                |                                           Premises
          (Campus ISP)                                              
                |            +----Member's notebook on campus       
                |            |                                      
                | +----(Campus Wi-Fi)                               
                | |                                                 
============== Gate ================================================
                |                                            Private
                +----(Ethernet switch)                              
                        |                                           
                        +----Core                                   
                        +----Servers (NAS, DVR, etc.)               

Members of the institute use commodity notebooks and open source desktops. When off campus, members access institute resources via the VPN on Front (via hotel Wi-Fi). When on campus, members can use the much faster and always available (despite Internet connectivity issues) VPN on Gate (via campus Wi-Fi). Members' Android phones and other devices can use the same Wi-Fi networks, VPNs (via the WireGuard™ app) and services. On a desktop or by phone, at home or abroad, members can access their email and the institute's private web and cloud.

The institute email service reliably delivers messages in seconds, so it is the main mode of communication amongst the membership, which uses OpenPGP encryption to secure message content.

2. Caveats

This small institute prizes its privacy, so there is little or no accommodation for spyware (aka commercial software). The members of the institute are dedicated to refining good tools, making the best use of software that does not need nor want our hearts, our money, nor even our attention.

Unlike a commercial cloud service with redundant hardware and multiple ISPs, Gate is a real choke point. When Gate cannot reach the Internet, members abroad will not be able to reach Core, their email folders, nor the institute cloud. They can chat privately with other members abroad or consult the public web site on Front. Members on campus will have their email and cloud, but no Internet and thus no new email and no chat with members abroad. Because the institute's data is kept on campus, members there can keep working even when the Internet is unavailable.

Keeping your data secure on campus, not on the Internet, means when your campus goes up in smoke, so does your data, unless you made an off-site (or at least fire-safe!) backup copy.

Security and privacy are the focus of the network architecture and configuration, not anonymity. There is no support for Tor. The VPNs do not funnel all Internet traffic through anonymizing services. They do not try to defeat geo-fencing.

This is not a showcase of the latest technologies. It is not expected to change except slowly.

The services are intended for the SOHO (small office, home office, 4-H chapter, medical clinic, gun-running biker gang, etc.) with a small, fairly static membership. Front can be small and cheap (10 USD per month) because of this assumption.

3. The Services

The small institute's network is designed to provide a number of services. Understanding how institute hosts co-operate is essential to understanding the configuration of specific hosts. This chapter covers institute services from a network-wide perspective, and gets right down in its subsections to the Ansible code that enforces its policies. On first reading, those subsections should be skipped; they reference particulars first introduced in the following chapter.

3.1. The Name Service

The institute has a public domain, e.g. small.example.org, and a private domain, e.g. small.private. The public has access only to the former and, as currently configured, to only one address (A record): Front's public IP address. Members connected to the campus, via wire or VPN, use the campus name server which can resolve institute private domain names like core.small.private. If small.private is also used as a search domain, members can use short names like core.
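
For example, assuming the example addresses used later in this document (Core at 192.168.56.1 on the private Ethernet), a member on campus or on an institute VPN might see something like the following; the output is purely illustrative.

$ host core.small.private
core.small.private has address 192.168.56.1
$ host core
core.small.private has address 192.168.56.1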

3.2. The Email Service

Front provides the public SMTP (Simple Mail Transfer Protocol) service that accepts email from the Internet, delivering messages addressed to the institute's domain name, e.g. to postmaster@small.example.org. Its Postfix server accepts email for member accounts and any public aliases (e.g. postmaster). Messages are delivered to member ~/Maildir/ directories via Dovecot.

If the campus is connected to the Internet, the new messages are quickly picked up by Core and stored in member ~/Maildir/ directories there. With their messages securely stored on Core, members can decrypt and sort their email using common, IMAP-based tools. (Most mail apps can use IMAP, the Internet Message Access Protocol.)

Core transfers messages from Front using Fetchmail's --idle option, which instructs Fetchmail to maintain a connection to Front so that it can (with good campus connectivity) get notifications to pick up new email. Members of the institute typically employ email apps that work similarly, alerting them to new email on Core. Thus members enjoy email messages that arrive as fast as text messages (but with the option of real, end-to-end encryption).

If the campus loses connectivity to the Internet, new email accumulates in ~/Maildir/ directories on Front. If a member is abroad, with Internet access, their new emails can be accessed via Front's IMAPS (IMAP Secured [with SSL/TLS]) service, available at the institute domain name. When the campus regains Internet connectivity, Core will collect the new email.

Core is the campus mail hub, securely storing members' incoming emails, and relaying their outgoing emails. It is the "smarthost" for the campus. Campus machines send all outgoing email to Core, and Core's Postfix server accepts messages from any of the institute's networks.

Core delivers messages addressed to internal host names locally. For example webmaster@test.small.private is delivered to webmaster on Core. Core relays other messages to its smarthost, Front, which is declared by the institute's SPF (Sender Policy Framework) DNS record to be the only legitimate sender of institute emails. Thus the Internet sees the institute's outgoing email coming from a server at an address matching the domain's SPF record. The institute does not sign outgoing emails per DKIM (Domain Keys Identified Mail) yet.

Example Small Institute SPF Record
TXT    v=spf1 ip4:159.65.75.60 -all

There are a number of configuration settings that, for interoperability, should agree between the Postfix servers and the campus clients. Policy also requires that certain settings be identical on both Postfix servers or on both Dovecot servers. To ensure the same settings are applied on both, the shared settings are defined here and included via noweb reference in the server configurations. For example, the Postfix setting for the maximum message size is given in a code block labeled postfix-message-size below and then included in both Postfix configurations wherever <<postfix-message-size>> appears.

3.2.1. The Postfix Configurations

The institute aims to accommodate encrypted email containing short videos, messages that can quickly exceed the default limit of 9.77MiB, so the institute uses a limit roughly 10 times the default: 100MiB (104857600 bytes). Front should always have several gigabytes free to spool a modest number (several 10s) of maximally sized messages. Furthermore a maxi-message's time in the spool is nominally a few seconds, after which it moves on to Core (the big disks). This Postfix setting should be the same throughout the institute, so that all hosts can handle maxi-messages.

postfix-message-size
- { p: message_size_limit, v: 104857600 }
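
Once a Postfix server has been configured, the effective limit can be checked with postconf; the output below simply reflects the setting above (100MiB = 100 * 1024 * 1024 = 104857600 bytes).

$ postconf message_size_limit
message_size_limit = 104857600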

Queue warning and bounce times were shortened at the institute. Email should be delivered in seconds. If it cannot be delivered in an hour, the recipient has been cut off, and a warning is appropriate. If it cannot be delivered in 4 hours, the information in the message is probably stale and further attempts to deliver it have limited and diminishing value. The sender should decide whether to continue by re-sending the bounced message (or just grabbing the go-bag!).

postfix-queue-times
- { p: delay_warning_time, v: 1h }
- { p: maximal_queue_lifetime, v: 4h }
- { p: bounce_queue_lifetime, v: 4h }

The Debian default Postfix configuration enables SASL authenticated relaying and opportunistic TLS with a self-signed, "snake oil" certificate. The institute substitutes its own certificates and disables relaying (other than for the local networks).

postfix-relaying
- p: smtpd_relay_restrictions
  v: permit_mynetworks reject_unauth_destination

Dovecot is configured to store emails in each member's ~/Maildir/. The same instruction is given to Postfix for the belt-and-suspenders effect.

postfix-maildir
- { p: home_mailbox, v: Maildir/ }

The complete Postfix configurations for Front and Core use these common settings as well as several host-specific settings as discussed in the respective roles below.

3.2.2. The Dovecot Configurations

The Dovecot settings on both Front and Core disable POP and require TLS.

The official documentation for Dovecot once was a Wiki but now is https://doc.dovecot.org, yet the Wiki is still distributed in /usr/share/doc/dovecot-core/wiki/.

dovecot-tls
protocols = imap
ssl = required

Both servers should accept only IMAPS connections. The following configuration keeps them from even listening on the IMAP port (e.g. for STARTTLS commands).

dovecot-ports
service imap-login {
  inet_listener imap {
    port = 0
  }
}
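
The effect is easy to verify on a configured server: only the IMAPS port (993) should be listening, not the IMAP port (143). A quick check might look like the following sketch, using ss(8)'s standard filter syntax; only a 993 listener should appear in the output.

$ sudo ss -tln '( sport = :143 or sport = :993 )'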

Both Dovecot servers store member email in members' local ~/Maildir/ directories.

dovecot-maildir
mail_location = maildir:~/Maildir

The complete Dovecot configurations for Front and Core use these common settings with host-specific settings for ssl_cert and ssl_key.

3.3. The Web Services

Front provides the public HTTP service that serves institute web pages at e.g. https://small.example.org/. The small institute initially runs with a self-signed, "snake oil" server certificate, causing browsers to warn of possible fraud, but this certificate is easily replaced by one signed by a recognized authority, as discussed in The Front Role.

The Apache2 server finds its web pages in the /home/www/ directory tree. Pages can also come from member home directories. For example the HTML for https://small.example.org/~member would come from the /home/member/Public/HTML/index.html file.
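
For example, a member with SSH access to Front could publish a personal page with something like the following sketch. The chmod lines just ensure Apache can read the files and may be unnecessary depending on the default permissions.

member@front$ mkdir -p ~/Public/HTML
member@front$ echo '<p>Hello from ~member.</p>' >~/Public/HTML/index.html
member@front$ chmod o+x ~
member@front$ chmod -R o+rX ~/Public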

The server does not run CGI scripts. This keeps Front's CPU requirements low, and thus Front cheap. CGI scripts can be used on Core. Indeed Nextcloud on Core uses PHP and the whole LAMP (Linux, Apache, MySQL, PHP) stack.

Core provides a campus HTTP service with several virtual hosts. These web sites can only be accessed via the campus Ethernet or an institute VPN. In either situation Core's many private domain names become available, e.g. www.small.private. In many cases these domain names can be shortened e.g. to www. Thus the campus home page is accessible in a dozen keystrokes: http://www/ (plus Enter).

Core's web sites:

http://www/
is the small institute's campus web site. It serves files from the staff-writable /WWW/campus/ directory tree.
http://live/
is a local copy of the institute's public web site. It serves the files in the /WWW/live/ directory tree, which is mirrored to Front.
http://test/
is a test copy of the institute's public web site. It tests new web designs in the /WWW/test/ directory tree. Changes here are merged into the live tree, /WWW/live/, once they are complete and tested.
http://core/
is the Debian default site. The institute does not munge this site, to avoid conflicts with Debian-packaged web services (e.g. Nextcloud, AgentDVR, MythTV's MythWeb).

Core runs a cron job under a system account named monkey that mirrors /WWW/live/ to Front's /home/www/ every 15 minutes. Vandalism on Front should not be possible, but if it happens Monkey will automatically wipe it within 15 minutes.
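
The cron job itself is installed by Ansible later in this document; conceptually each run is just an rsync over SSH from Core's live tree to Monkey's account on Front, roughly like this sketch (the exact options are defined in the Core role).

monkey@core$ rsync -avz --delete /WWW/live/ monkey@small.example.org:/home/www/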

3.4. The Cloud Service

Core runs Nextcloud to provide a private institute cloud at http://core.small.private/nextcloud/. It is managed manually per The Nextcloud Server Administration Guide. The code and data, including especially database dumps, are stored in /Nextcloud/ which is included in Core's backup procedure as described in Backups. The default Apache2 configuration expects to find the web scripts in /var/www/nextcloud/, so the institute symbolically links this to /Nextcloud/nextcloud/.

Note that authenticating to a non-HTTPS URL like http://core.small.private/ is often called out as insecure, but the domain name is private and the service is on a directly connected private network.

3.5. Accounts

A small institute has just a handful of members. For simplicity (and thus security) static configuration files are preferred over complex account management systems, LDAP, Active Directory, and the like. The Ansible scripts configure the same set of user accounts on Core and Front. The Institute Commands (e.g. ./inst new dick) capture the processes of enrolling, modifying and retiring members of the institute. They update the administrator's membership roll, and run Ansible to create (and disable) accounts on Core, Front, Nextcloud, etc.

The small institute does not use disk quotas nor access control lists. It relies on Unix group membership and permissions. It is Debian based and thus uses "user groups" by default. Sharing is typically accomplished via the campus cloud and the resulting desktop files can all be private (readable and writable only by the owner) by default.

3.5.1. The Administration Accounts

The institute avoids the use of the root account (uid 0) because it is exempt from the normal Unix permissions checking. The sudo command is used to consciously (conscientiously!) run specific scripts and programs as root. When installation of a Debian OS leaves the host with no user accounts, just the root account, the next step is to create a system administrator's account named sysadm and to give it permission to use the sudo command (e.g. as described in The Front Machine). When installation prompts for the name of an initial, privileged user account the same name is given (e.g. as described in The Core Machine). Installation may not prompt and still create an initial user account with a distribution specific name (e.g. pi). Any name can be used as long as it is provided as the value of ansible_user in hosts. Its password is specified by a vault-encrypted variable in the Secret/become.yml file. (The hosts and Secret/become.yml files are described in The Ansible Configuration.)

3.5.2. The Monkey Accounts

The institute's Core uses a special account named monkey to run background jobs with limited privileges. One of those jobs is to keep the public web site mirror up-to-date, so a corresponding monkey account is created on Front as well.

3.6. Keys

The institute keeps its "master secrets" in an encrypted volume on an off-line hard drive, e.g. a LUKS (Linux Unified Key Setup) format partition on a USB pen/stick. The Secret/ sub-directory is actually a symbolic link to this partition's automatic mount point, e.g. /media/sysadm/ADE7-F866/. Unless this volume is mounted (unlocked) at Secret/, none of the ./inst commands will work.

Chief among the institute's master secrets is the SSH key authorized to access privileged accounts on all of the institute servers. It is stored in Secret/ssh_admin/id_rsa. The complete list of the institute's SSH keys:

Secret/ssh_admin/
The SSH key pair for A Small Institute Administrator.
Secret/ssh_monkey/
The key pair used by Monkey to update the website on Front (and other unprivileged tasks).
Secret/ssh_front/
The host key pair used by Front to authenticate itself. The automatically generated key pair is not used. (Thus Core's configuration does not depend on Front's.)

The institute uses a couple X.509 certificates to authenticate servers. They are created by the EasyRSA Certificate Authority stored in Secret/CA/.

Secret/CA/pki/ca.crt
The institute CA certificate, used to sign the other certificates.
Secret/CA/pki/issued/small.example.org.crt
The public Postfix, Dovecot and Apache servers on Front.
Secret/CA/pki/issued/core.small.private.crt
The campus Postfix, Dovecot and Apache (thus Nextcloud) servers on Core.
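
These certificates are produced by the institute's CA command (described later in The CA Command). Done by hand with Debian's easy-rsa package, the steps would look roughly like the following sketch, assuming the EasyRSA scripts have been set up in Secret/CA/ (e.g. with make-cadir).

notebook$ cd Secret/CA/
notebook$ ./easyrsa init-pki
notebook$ ./easyrsa build-ca nopass
notebook$ ./easyrsa build-server-full small.example.org nopass
notebook$ ./easyrsa build-server-full core.small.private nopass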

The ./inst client command updates the institute membership roll, which lists members and their clients' public keys, and is stored in private/members.yml.

Finally, the institute uses an OpenPGP key to secure sensitive emails (containing passwords or private keys) to Core.

Secret/root.gnupg/
The "home directory" used to create the public/secret key pair.
Secret/root-pub.pem
The ASCII armored OpenPGP public key for e.g. root@core.small.private.
Secret/root-sec.pem
The ASCII armored OpenPGP secret key.

The institute administrator updates a couple encrypted copies of this drive after enrolling new members, changing a password, (de)authorizing a VPN client, etc.

rsync -a Secret/ Secret2/
rsync -a Secret/ Secret3/

This is out of consideration for the fragility of USB drives and the importance of a certain SSH private key. Without it, the administrator would have to log in with a password (hopefully stored in the administrator's password keep) in order to install a new SSH key.

3.7. Backups

The small institute backs up its data, but not so much so that nothing can be deleted. It actually mirrors user directories (/home/), the web sites (/WWW/), Nextcloud (/Nextcloud/), and any capitalized root directory entry, to a large off-line disk. Where incremental backups are desired, a version control system like Git is used.

Off-site backups are not a priority due to cost and trust issues, and the low return on the investment given the minuscule risk of a catastrophe big enough to obliterate all local copies. And the institute's public contributions are typically replicated in public code repositories like GitHub and GNU Savannah.

The following example /usr/local/sbin/backup script pauses Nextcloud, dumps its database, rsyncs /home/, /WWW/ and /Nextcloud/ to a /backup/ volume (mounting and unmounting /backup/ if necessary), then continues Nextcloud. The script assumes the backup volume is labeled Backup and formatted per LUKS version 2.

Given the -n flag, the script does a "pre-sync" which does not pause Nextcloud nor dump its DB. A pre-sync gets the big file (video) copies done while Nextcloud continues to run. A follow-up sudo backup, without -n, produces the complete copy (with all the files mentioned in the Nextcloud database dump).

private/backup
#!/bin/bash -e
#
# DO NOT EDIT.
#
# Maintained (will be replaced) by Ansible.
#
# sudo backup [-n]

if [ `id -u` != "0" ]
then
    echo "This script must be run as root."
    exit 1
fi

if [ "$1" = "-n" ]
then
    presync=yes
    shift
fi

if [ "$#" != "0" ]
then
    echo "usage: $0 [-n]"
    exit 2
fi

function cleanup () {
    sleep 2
    finish
}

trap cleanup SIGHUP SIGINT SIGQUIT SIGPIPE SIGTERM

function start () {

    if ! mountpoint -q /backup/
    then
        echo "Mounting /backup/."
        cryptsetup luksOpen /dev/disk/by-partlabel/Backup backup
        mount /dev/mapper/backup /backup
    else
        echo "Found /backup/ already mounted."
    fi

    if [ ! -d /backup/home ]
    then
        echo "The backup device should be mounted at /backup/"
        echo "yet there is no /backup/home/ directory."
        exit 2
    fi

    if [ ! $presync ]
    then
        echo "Putting Nextcloud into maintenance mode."
        ( cd /Nextcloud/nextcloud/
          sudo -u www-data php occ maintenance:mode --on &>/dev/null )

        echo "Dumping Nextcloud database."
        ( cd /Nextcloud/
          umask 07
          BAK=`date +"%Y%m%d%H%M"`-dbbackup.bak.gz
          CNF=/Nextcloud/dbbackup.cnf
          mysqldump --defaults-file=$CNF nextcloud | gzip > $BAK
          chmod 440 $BAK
          ls -t1 *-dbbackup.bak.gz | tail -n +4 \
          | while read; do rm "$REPLY"; done
        )
    fi

}

function finish () {

    if [ ! $presync ]
    then
        echo "Putting Nextcloud back into service."
        ( cd /Nextcloud/nextcloud/
          sudo -u www-data php occ maintenance:mode --off &>/dev/null )
    fi

    if mountpoint -q /backup/
    then
        echo "Unmounting /backup/."
        umount /backup
        cryptsetup luksClose backup
        echo "Done."
        echo "The backup device can be safely disconnected."
    fi
}

start

for D in /home /[A-Z]*; do
    echo "Updating /backup$D/."
    ionice --class Idle --ignore \
        rsync -av --delete --exclude=.NoBackups $D/ /backup$D/
done

finish
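
A typical run before taking the backup drive off-site might look like the following sketch. The pre-sync pass is optional but keeps Nextcloud's maintenance window short.

sysadm@core$ sudo backup -n    # long pre-sync, Nextcloud stays up
sysadm@core$ sudo backup       # short final sync, Nextcloud paused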

4. The Particulars

This chapter introduces Ansible variables intended to simplify changes, like customization for another institute's particulars. The variables are separated into public information (e.g. an institute's name) or private information (e.g. a network interface address), and stored in separate files: public/vars.yml and private/vars.yml.

The example settings in this document configure VirtualBox VMs as described in the Testing chapter. For more information about how a small institute turns the example Ansible code into a working Ansible configuration, see chapter The Ansible Configuration.

4.1. Generic Particulars

The small institute's domain name is used quite frequently in the Ansible code. The example used here is small.example.org. The following line sets domain_name to that value. (Ansible will then replace {{ domain_name }} in the code with small.example.org.)

public/vars.yml
---
domain_name: small.example.org

The institute's private domain is treated as sensitive information, and so is "tangled" into the example file private/vars.yml rather than public/vars.yml. The example file is used for testing, and serves as the template for an actual, private, private/vars.yml file that customizes this Ansible code for an actual, private, small institute.

The institute's private domain name should end with one of the top-level domains set aside for this purpose: .intranet, .internal, .private, .corp, .home or .lan. It is hoped that doing so will increase the chances that some abomination like DNS-over-HTTPS will pass us by.

private/vars.yml
---
domain_priv: small.private

4.2. Subnets

The small institute uses a private Ethernet, two VPNs, and a "wild", untrusted Ethernet for the campus Wi-Fi access point(s) and wired IoT appliances. Each must have a unique private network address. Hosts using the VPNs are also using foreign private networks, e.g. a notebook on a hotel Wi-Fi. To better the chances that all of these networks get unique addresses, the small institute uses addresses in the IANA's (Internet Assigned Numbers Authority's) private network address ranges except the 192.168 address range already in widespread use. This still leaves 69,632 8 bit networks (each addressing up to 254 hosts) from which to choose. The following table lists their CIDRs (subnet numbers in Classless Inter-Domain Routing notation) in abbreviated form (eliding 69,624 rows).

Table 1: IANA Private 8bit Subnetwork CIDRs

  Subnet CIDR        Host Addresses
  -----------------  -------------------------------
  10.0.0.0/24        10.0.0.1 – 10.0.0.254
  10.0.1.0/24        10.0.1.1 – 10.0.1.254
  10.0.2.0/24        10.0.2.1 – 10.0.2.254
  ...                ...
  10.255.255.0/24    10.255.255.1 – 10.255.255.254
  172.16.0.0/24      172.16.0.1 – 172.16.0.254
  172.16.1.0/24      172.16.1.1 – 172.16.1.254
  172.16.2.0/24      172.16.2.1 – 172.16.2.254
  ...                ...
  172.31.255.0/24    172.31.255.1 – 172.31.255.254

The following Emacs Lisp randomly chooses one of these 8 bit subnets. The small institute used it to pick its four private subnets. An example result follows the code.

(let ((bytes
         (let ((i (random (+ 256 16))))
           (if (< i 256)
               (list 10        i         (1+ (random 254)))
             (list  172 (+ 16 (- i 256)) (1+ (random 254)))))))
  (format "%d.%d.%d.0/24" (car bytes) (cadr bytes) (caddr bytes)))

=> 10.62.17.0/24

The four private networks are named and given example CIDRs in the code block below. The small institute treats these addresses as sensitive information so again the code block below "tangles" into private/vars.yml rather than public/vars.yml. Two of the addresses are in 192.168 subnets because they are part of a test configuration using mostly-default VirtualBoxes (described here).

private/vars.yml

private_net_cidr:           192.168.56.0/24
wild_net_cidr:              192.168.57.0/24
public_wg_net_cidr:         10.177.87.0/24
campus_wg_net_cidr:         10.84.139.0/24

The network addresses are needed in several additional formats, e.g. network address and subnet mask (10.84.139.0 255.255.255.0). The following boilerplate uses Ansible's ipaddr filter to set several corresponding variables, each with an appropriate suffix, e.g. _net_and_mask rather than _net_cidr.

network-vars
private_net:
           "{{ private_net_cidr | ansible.utils.ipaddr('network') }}"
private_net_mask:
           "{{ private_net_cidr | ansible.utils.ipaddr('netmask') }}"
private_net_and_mask:      "{{ private_net }} {{ private_net_mask }}"
wild_net:     "{{ wild_net_cidr | ansible.utils.ipaddr('network') }}"
wild_net_mask:
              "{{ wild_net_cidr | ansible.utils.ipaddr('netmask') }}"
wild_net_and_mask:               "{{ wild_net }} {{ wild_net_mask }}"
wild_net_broadcast:
            "{{ wild_net_cidr | ansible.utils.ipaddr('broadcast') }}"
public_wg_net:
         "{{ public_wg_net_cidr | ansible.utils.ipaddr('network') }}"
public_wg_net_mask:
         "{{ public_wg_net_cidr | ansible.utils.ipaddr('netmask') }}"
public_wg_net_and_mask:
                       "{{ public_wg_net }} {{ public_wg_net_mask }}"
campus_wg_net:
         "{{ campus_wg_net_cidr | ansible.utils.ipaddr('network') }}"
campus_wg_net_mask:
         "{{ campus_wg_net_cidr | ansible.utils.ipaddr('netmask') }}"
campus_wg_net_and_mask:
                       "{{ campus_wg_net }} {{ campus_wg_net_mask }}"

This is obvious, site-independent, non-private boilerplate and so goes in a defaults/main.yml file in each role. The variables can then be overridden by adding them to the site-specific private/vars.yml. The block is referenced with <<network-vars>> and tangled into each role's defaults/main.yml file.

The institute prefers to configure its services with IP addresses rather than domain names. One of the most important of these, for secure and reliable operation, is Front's public IP address, known to the world through the institute's Internet domain name.

public/vars.yml
front_addr: 192.168.15.4

The example address is a private network address because the example configuration is intended to run in a test jig made up of VirtualBox virtual machines and networks.

Finally, five host addresses are needed frequently in the Ansible code. Each is made available in both CIDR and IPv4 address formats. Again this is site-independent, non-private boilerplate, referenced with <<address-vars>> in the defaults/main.yml files.

address-vars

core_addr_cidr:  "{{ private_net_cidr | ansible.utils.ipaddr('1') }}"
gate_addr_cidr:  "{{ private_net_cidr | ansible.utils.ipaddr('2') }}"
gate_wild_addr_cidr:
                    "{{ wild_net_cidr | ansible.utils.ipaddr('1') }}"
front_wg_addr_cidr:
               "{{ public_wg_net_cidr | ansible.utils.ipaddr('1') }}"
core_wg_addr_cidr:
               "{{ public_wg_net_cidr | ansible.utils.ipaddr('2') }}"

core_addr:   "{{ core_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_addr:   "{{ gate_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_wild_addr:
        "{{ gate_wild_addr_cidr | ansible.utils.ipaddr('address') }}"
front_wg_addr:
         "{{ front_wg_addr_cidr | ansible.utils.ipaddr('address') }}"
core_wg_addr:
          "{{ core_wg_addr_cidr | ansible.utils.ipaddr('address') }}"

5. The Hardware

The small institute's network was built by its system administrator using Ansible on a trusted notebook. The Ansible configuration and scripts were generated by "tangling" the Ansible code included here. (The Ansible Configuration describes how to do this.) The following sections describe how Front, Gate and Core were prepared for Ansible.

5.1. The Front Machine

Front is the small institute's public facing server, a virtual machine on the Internets. It needs only as much disk as required by the institute's public web site. Often the cheapest offering (4GB RAM, 1 core, 20GB disk) is sufficient. The provider should make it easy and fast to (re)initialize the machine to a factory fresh Debian Server, and install additional Debian software packages. Indeed it should be possible to quickly re-provision a new Front machine from a frontier Internet café using just the administrator's notebook.

5.1.1. A Digital Ocean Droplet

The following example prepared a new front on a Digital Ocean droplet. The institute administrator opened an account at Digital Ocean, registered an ssh key, and used a Digital Ocean control panel to create a new machine (again, one of the cheapest, smallest available) with Ubuntu Server 20.04LTS installed. Once created, the machine and its IP address (159.65.75.60) appeared on the panel. Using that address, the administrator logged into the new machine with ssh.

On the administrator's notebook (in a terminal):

notebook$ ssh root@159.65.75.60
root@ubuntu# 

The freshly created Digital Ocean droplet came with just one account, root, but the small institute avoids remote access to the "super user" account (per the policy in The Administration Accounts), so the administrator created a sysadm account with the ability to request escalated privileges via the sudo command.

root@ubuntu# adduser sysadm
...
New password: givitysticangout
Retype new password: givitysticangout
...
        Full Name []: System Administrator
...
Is the information correct? [Y/n] 
root@ubuntu# adduser sysadm sudo
root@ubuntu# logout
notebook$

The password was generated by gpw, saved in the administrator's password keep, and later added to Secret/become.yml as shown below. (Producing a working Ansible configuration with a Secret/become.yml file is described in The Ansible Configuration.)

notebook$ gpw 1 16
givitysticangout
notebook$ echo -n "become_front: " >>Secret/become.yml
notebook$ ansible-vault encrypt_string givitysticangout \
notebook_     >>Secret/become.yml

After creating the sysadm account on the droplet, the administrator concatenated a personal public ssh key and the key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to the droplet, and installed it as the authorized_keys for sysadm.

notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
notebook_     > admin_keys
notebook$ scp admin_keys sysadm@159.65.75.60:
The authenticity of host '159.65.75.60' can't be established.
....
Are you sure you want to continue connecting (...)? yes
...
sysadm@159.65.75.60's password: givitysticangout
notebook$ ssh sysadm@159.65.75.60
sysadm@159.65.75.60's password: givitysticangout
sysadm@ubuntu$ ( umask 077; mkdir .ssh; \
sysadm@ubuntu_   cp admin_keys .ssh/authorized_keys; \
sysadm@ubuntu_   rm admin_keys )
sysadm@ubuntu$ logout
notebook$ rm admin_keys
notebook$

The Ansible configuration expects certain host keys on the new front. The administrator should install them now, and deal with the machine's change of SSH identity. The following commands copied the host keys in Secret/ssh_front/ to the droplet and restarted the SSH server.

notebook$ ( cd Secret/ssh_front/etc/ssh/;
notebook_   scp ssh_host_* sysadm@159.65.75.60: )
notebook$ ssh sysadm@159.65.75.60
sysadm@ubuntu$ chmod 600 ssh_host_*
sysadm@ubuntu$ chmod 644 ssh_host_*.pub
sysadm@ubuntu$ sudo cp -b ssh_host_* /etc/ssh/
sysadm@ubuntu$ sudo systemctl restart ssh
sysadm@ubuntu$ logout
notebook$ ssh-keygen -f ~/.ssh/known_hosts -R 159.65.75.60

The last command removed the old host key from the administrator's known_hosts file. The next few commands served to test password-less login as well as the privilege escalation command sudo.

The Droplet needed a couple additional software packages immediately. The wireguard package was needed to generate the Droplet's private key. The systemd-resolved package was installed so that the subsequent reboot gets systemd-resolved configured properly (else resolvectl hangs, causing wg-quick@wg0 to hang…). The rest are included just to speed up (re)testing of "prepared" test machines, e.g. prepared as described in The Test Front Machine.

notebook$ ssh sysadm@159.65.75.60
sysadm@ubuntu$ sudo apt install wireguard systemd-resolved \
    unattended-upgrades postfix dovecot-imapd rsync apache2 kamailio

With WireGuard™ installed, the following commands generated a new private key, and displayed its public key.

sysadm@ubuntu$ umask 077
sysadm@ubuntu$ wg genkey \
sysadm@ubuntu_ | sudo tee /etc/wireguard/private-key \
sysadm@ubuntu_ | wg pubkey
S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=

The public key is copied and pasted into private/vars.yml as the value of front_wg_pubkey (as in the example here).

After collecting Front's public key, the administrator disabled root logins on the droplet. The last command below tested that root logins were indeed denied.

sysadm@ubuntu$ sudo rm -r /root/.ssh
sysadm@ubuntu$ logout
notebook$ ssh root@159.65.75.60
root@159.65.75.60: Permission denied (publickey).
notebook$ 

At this point the droplet was ready for configuration by Ansible. Later, after the droplet was provisioned with all of Front's services and tested, the institute's domain name was updated, making 159.65.75.60 its new address.

5.2. The Core Machine

Core is the small institute's private file, email, cloud and whatnot server. It should have some serious horsepower (RAM, cores, GHz) and storage (hundreds of gigabytes). An old desktop system might be sufficient, and if it later proves not to be, moving Core to new hardware is "easy" and good practice. It is also straightforward to move the heaviest workloads (storage, cloud, internal web sites) to additional machines.

Core need not have a desktop, and will probably be more reliable if it is not also playing games. It will run continuously 24/7 and will benefit from a UPS (uninterruptible power supply). Its file system and services are critical.

The following example prepared a new core on a PC with Debian 11 freshly installed. During installation, the machine was named core, no desktop or server software was installed, no root password was set, and a privileged account named sysadm was created (per the policy in The Administration Accounts).

New password: oingstramextedil
Retype new password: oingstramextedil
...
        Full Name []: System Administrator
...
Is the information correct? [Y/n] 

The password was generated by gpw, saved in the administrator's password keep, and later added to Secret/become.yml as shown below. (Producing a working Ansible configuration with a Secret/become.yml file is described in The Ansible Configuration.)

notebook$ gpw 1 16
oingstramextedil
notebook$ echo -n "become_core: " >>Secret/become.yml
notebook$ ansible-vault encrypt_string oingstramextedil \
notebook_     >>Secret/become.yml

With Debian freshly installed, Core needed several additional software packages. The administrator temporarily plugged Core into a cable modem and installed them as shown below.

$ sudo apt install wireguard systemd-resolved unattended-upgrades \
_                  chrony isc-dhcp-server bind9 apache2 postfix \
_                  dovecot-imapd fetchmail rsync gnupg

Manual installation of Postfix prompted for configuration type and mail name. The answers given are listed here.

  • General type of mail configuration: Internet Site
  • System mail name: core.small.private

The host then needed to be rebooted to get its name service working again after systemd-resolved was installed. (Any help with this will be welcome!) After rebooting and re-logging in, yet more software packages were installed.

The Nextcloud configuration required Apache2, MariaDB and a number of PHP modules. Installing them while Core was on a cable modem sped up final configuration "in position" (on a frontier).

$ sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
_                  php-{json,mysql,mbstring,intl,imagick,xml,zip} \
_                  imagemagick libapache2-mod-php

Similarly, the NAGIOS configuration required a handful of packages that were pre-loaded via cable modem (to test a frontier deployment).

$ sudo apt install nagios4 monitoring-plugins-basic lm-sensors \
_                  nagios-nrpe-plugin

Next, the administrator concatenated a personal public ssh key and the key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to Core, and installed it as the authorized_keys for sysadm.

notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
notebook_     > admin_keys
notebook$ scp admin_keys sysadm@core.lan:
The authenticity of host 'core.lan' can't be established.
....
Are you sure you want to continue connecting (...)? yes
...
sysadm@core.lan's password: oingstramextedil
notebook$ ssh sysadm@core.lan
sysadm@core.lan's password: oingstramextedil
sysadm@core$ ( umask 077; mkdir .ssh; \
sysadm@core_   cp admin_keys .ssh/authorized_keys )
sysadm@core$ rm admin_keys
sysadm@core$ logout
notebook$ rm admin_keys
notebook$

Note that the name core.lan should be known to the cable modem's DNS service. An IP address might be used instead, discovered with an ip -4 a command on Core.

Now Core no longer needed the Internets so it was disconnected from the cable modem and connected to the campus Ethernet switch. Its primary Ethernet interface was temporarily (manually) configured with a new, private IP address and a default route.

In the example command lines below, the address 10.227.248.1 was generated by the random subnet address picking procedure described in Subnets, and is named core_addr in the Ansible code. The second address, 10.227.248.2, is the corresponding address for Gate's Ethernet interface, and is named gate_addr in the Ansible code.

sysadm@core$ sudo ip address add 10.227.248.1/24 dev enp82s0
sysadm@core$ sudo ip route add default via 10.227.248.2 dev enp82s0

At this point Core was ready for provisioning with Ansible.

5.3. The Gate Machine

Gate is the small institute's route to the Internet, and the campus Wi-Fi's route to the private Ethernet. It has three network interfaces.

  1. lan is its main Ethernet interface, connected to the campus's private Ethernet switch.
  2. wild is its second Ethernet interface, connected to the untrusted network of campus IoT appliances and Wi-Fi access point(s).
  3. isp is its third network interface, connected to the campus ISP. This could be an Ethernet device connected to a cable modem, a USB port tethered to a phone, a wireless adapter connected to a campground Wi-Fi access point, etc.
=============== | ==================================================
                |                                           Premises
          (Campus ISP)                                              
                |            +----Member's notebook on campus       
                |            |                                      
                | +----(Campus Wi-Fi)                               
                | |                                                 
============== Gate ================================================
                |                                            Private
                +----Ethernet switch                                

5.3.1. Alternate Gate Topology

While Gate and Core really need to be separate machines for security reasons, the campus Wi-Fi access point and the ISP's Wi-Fi router can be the same device. This avoids the need for a second Wi-Fi access point and leads to the following topology.

=============== | ==================================================
                |                                           Premises
           (House ISP)                                              
          (House Wi-Fi)-----------Member's notebook on campus       
          (House Ethernet)                                          
                |                                                   
============== Gate ================================================
                |                                            Private
                +----Ethernet switch                                

In this case Gate has two interfaces and there is no wild subnet other than the Internets themselves.

Support for this "alternate" topology is planned but not yet implemented. Like the original topology, it should require no changes to a standard cable modem's default configuration (assuming its Ethernet and Wi-Fi clients are allowed to communicate).

5.3.2. Original Gate Topology

The Ansible code in this document is somewhat dependent on the physical network shown in the Overview wherein Gate has three network interfaces.

The following example prepared a new gate on a PC with Debian 11 freshly installed. During installation, the machine was named gate, no desktop or server software was installed, no root password was set, and a privileged account named sysadm was created (per the policy in The Administration Accounts).

New password: icismassssadestm
Retype new password: icismassssadestm
...
        Full Name []: System Administrator
...
Is the information correct? [Y/n] 

The password was generated by gpw, saved in the administrator's password keep, and later added to Secret/become.yml as shown below. (Producing a working Ansible configuration with a Secret/become.yml file is described in The Ansible Configuration.)

notebook$ gpw 1 16
icismassssadestm
notebook$ echo -n "become_gate: " >>Secret/become.yml
notebook$ ansible-vault encrypt_string icismassssadestm \
notebook_     >>Secret/become.yml

With Debian freshly installed, Gate needed a couple additional software packages. The administrator temporarily plugged Gate into a cable modem and installed them as shown below.

$ sudo apt install systemd-resolved unattended-upgrades \
_                  ufw postfix wireguard lm-sensors \
_                  nagios-nrpe-server

The host then needed to be rebooted to get its name service working again after systemd-resolved was installed. (Any help with this will be welcome!) After rebooting and re-logging in, the administrator was ready to proceed.

Next, the administrator concatenated a personal public ssh key and the key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to Gate, and installed it as the authorized_keys for sysadm.

notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
notebook_     > admin_keys
notebook$ scp admin_keys sysadm@gate.lan:
The authenticity of host 'gate.lan' can't be established.
....
Are you sure you want to continue connecting (...)? yes
...
sysadm@gate.lan's password: icismassssadestm
notebook$ ssh sysadm@gate.lan
sysadm@gate.lan's password: icismassssadestm
sysadm@gate$ ( umask 077; mkdir .ssh; \
sysadm@gate_   cp admin_keys .ssh/authorized_keys )
sysadm@gate$ rm admin_keys
sysadm@gate$ logout
notebook$ rm admin_keys
notebook$

Note that the name gate.lan should be known to the cable modem's DNS service. An IP address might be used instead, discovered with an ip a command on Gate.

Now Gate no longer needed the Internets so it was disconnected from the cable modem and connected to the campus Ethernet switch. Its primary Ethernet interface was temporarily (manually) configured with a new, private IP address.

In the example command lines below, the address 10.227.248.2 was generated by the random subnet address picking procedure described in Subnets, and is named gate_addr in the Ansible code.

$ sudo ip address add 10.227.248.2/24 dev eth0

Gate was also connected to the USB Ethernet dongles cabled to the campus Wi-Fi access point and the campus ISP. The values of three variables (gate_lan_mac, gate_wild_mac, and gate_isp_mac in private/vars.yml) must match the actual hardware MAC addresses of the corresponding interfaces. (For more information, see the tasks in section 9.3.)
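
The MAC addresses can be read off the interfaces with ip, as in the following sketch; the interface names and addresses shown here are hypothetical.

sysadm@gate$ ip -br link
lo               UNKNOWN  00:00:00:00:00:00  <LOOPBACK,UP,LOWER_UP>
eth0             UP       08:00:27:f3:16:79  <BROADCAST,MULTICAST,UP,LOWER_UP>
enx0050b6000001  UP       00:50:b6:00:00:01  <BROADCAST,MULTICAST,UP,LOWER_UP>
enx0050b6000002  UP       00:50:b6:00:00:02  <BROADCAST,MULTICAST,UP,LOWER_UP>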

At this point Gate was ready for provisioning with Ansible.

6. The All Role

The all role contains tasks that are executed on all of the institute's servers. At the moment there are just a few.

6.1. Include Particulars

The all role's task contains a reference to a common institute particular, the institute's domain_name, a variable found in the public/vars.yml file. Thus the first task of the all role is to include the variables defined in this file (described in The Particulars). The code block below is the first to tangle into roles/all/tasks/main.yml.

roles_t/all/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml

6.2. Enable Systemd Resolved

The systemd-networkd and systemd-resolved service units are not enabled by default in Debian, but are the default in Ubuntu. The institute attempts to make use of their link-local name resolution, so they are enabled on all institute hosts.

The /usr/share/doc/systemd/README.Debian.gz file recommends both services be enabled and /etc/resolv.conf be replaced with a symbolic link to /run/systemd/resolve/resolv.conf. The institute follows these recommendations (and not the suggestion to enable "persistent logging", yet). In Debian 12 there is a systemd-resolved package that symbolically links /etc/resolv.conf (and provides /lib/systemd/systemd-resolved, formerly part of the systemd package).

roles_t/all/tasks/main.yml

- name: Install systemd-resolved.
  become: yes
  apt: pkg=systemd-resolved
  when:
  - ansible_distribution == 'Debian'
  - 11 < ansible_distribution_major_version|int

- name: Start systemd-networkd.
  become: yes
  systemd:
    service: systemd-networkd
    state: started
  tags: actualizer

- name: Enable systemd-networkd.
  become: yes
  systemd:
    service: systemd-networkd
    enabled: yes

- name: Start systemd-resolved.
  become: yes
  systemd:
    service: systemd-resolved
    state: started
  tags: actualizer

- name: Enable systemd-resolved.
  become: yes
  systemd:
    service: systemd-resolved
    enabled: yes

- name: Link /etc/resolv.conf.
  become: yes
  file:
    path: /etc/resolv.conf
    src: /run/systemd/resolve/resolv.conf
    state: link
    force: yes
  when:
  - ansible_distribution == 'Debian'
  - 12 > ansible_distribution_major_version|int

6.3. Trust Institute Certificate Authority

All servers should recognize the institute's Certificate Authority as trustworthy, so its certificate is added to the set of trusted CAs on each host. More information about how the small institute manages its X.509 certificates is available in Keys.

roles_t/all/tasks/main.yml

- name: Trust the institute CA.
  become: yes
  copy:
    src: ../Secret/CA/pki/ca.crt
    dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt
    mode: u=r,g=r,o=r
    owner: root
    group: root
  notify: Update CAs.
roles_t/all/handlers/main.yml
---
- name: Update CAs.
  become: yes
  command: update-ca-certificates

7. The Front Role

The front role installs and configures the services expected on the institute's publicly accessible "front door": email, web, VPN. The virtual machine is prepared with an Ubuntu Server install and remote access to a privileged, administrator's account. (For details, see The Front Machine.)

Front initially presents the same self-signed, "snake oil" server certificate for its HTTP, SMTP and IMAP services, created by the institute's certificate authority but "snake oil" all the same (assuming the small institute is not a well recognized CA). The HTTP, SMTP and IMAP servers are configured to use the certificate (and private key) in /etc/server.crt (and /etc/server.key), so replacing the "snake oil" is as easy as replacing these two files, perhaps with symbolic links to, for example, /etc/letsencrypt/live/small.example.org/fullchain.pem.
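
For example, after obtaining a certificate with Certbot, the two files might be replaced with symbolic links and the services restarted, as in the following sketch. It assumes Certbot's standard /etc/letsencrypt/live/ layout (fullchain.pem and privkey.pem).

sysadm@front$ sudo ln -sf \
sysadm@front_     /etc/letsencrypt/live/small.example.org/fullchain.pem \
sysadm@front_     /etc/server.crt
sysadm@front$ sudo ln -sf \
sysadm@front_     /etc/letsencrypt/live/small.example.org/privkey.pem \
sysadm@front_     /etc/server.key
sysadm@front$ sudo systemctl restart apache2 postfix dovecot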

7.1. Role Defaults

The front role sets a number of variables to default values in its defaults/main.yml file.

roles_t/front/defaults/main.yml
---
<<network-vars>>
<<address-vars>>
<<membership-rolls>>

The membership-rolls reference defines membership_rolls which is used to select an empty membership roll if one has not been written yet. (See section 12.7.)

7.2. Include Particulars

The first task, as in The All Role, is to include the institute particulars. The front role refers to private variables and the membership roll, so these are included as well.

roles_t/front/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml

- name: Include private variables.
  include_vars: ../private/vars.yml

- name: Include members.
  include_vars: "{{ lookup('first_found', membership_rolls) }}"
  tags: accounts

7.3. Configure Hostname

This task ensures that Front's /etc/hostname and /etc/mailname are correct. The correct /etc/mailname is essential to proper email delivery.

roles_t/front/tasks/main.yml

- name: Configure hostname.
  become: yes
  copy:
    content: "{{ domain_name }}\n"
    dest: "{{ item }}"
  loop:
  - /etc/hostname
  - /etc/mailname

- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
  when: domain_name != ansible_fqdn
  tags: actualizer

7.4. Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned by groups root and adm. Adding the administrator's account to these groups speeds up debugging.

roles_t/front/tasks/main.yml

- name: Add {{ ansible_user }} to system groups.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: root,adm

7.5. Configure SSH

The SSH service on Front needs to be known to Monkey. The following tasks ensure this by replacing the automatically generated keys with those stored in Secret/ssh_front/etc/ssh/ and restarting the server.

roles_t/front/tasks/main.yml

- name: Install SSH host keys.
  become: yes
  copy:
    src: ../Secret/ssh_front/etc/ssh/{{ item.name }}
    dest: /etc/ssh/{{ item.name }}
    mode: "{{ item.mode }}"
  loop:
  - { name: ssh_host_ecdsa_key,       mode: "u=rw,g=,o=" }
  - { name: ssh_host_ecdsa_key.pub,   mode: "u=rw,g=r,o=r" }
  - { name: ssh_host_ed25519_key,     mode: "u=rw,g=,o=" }
  - { name: ssh_host_ed25519_key.pub, mode: "u=rw,g=r,o=r" }
  - { name: ssh_host_rsa_key,         mode: "u=rw,g=,o=" }
  - { name: ssh_host_rsa_key.pub,     mode: "u=rw,g=r,o=r" }
  notify: Reload SSH server.
roles_t/front/handlers/main.yml
---
- name: Reload SSH server.
  become: yes
  systemd:
    service: ssh
    state: reloaded
  tags: actualizer

7.6. Configure Monkey

The small institute runs cron jobs and web scripts that generate reports and perform checks. The un-privileged jobs are run by a system account named monkey. One of Monkey's more important jobs on Core is to run rsync to update the public web site on Front. Monkey on Core will log in as monkey on Front to synchronize the files (as described in Configure Apache2). To do that without needing a password, the monkey account on Front should authorize Monkey's SSH key on Core.

roles_t/front/tasks/main.yml

- name: Create monkey.
  become: yes
  user:
    name: monkey
    password: "!"

- name: Authorize monkey@core.
  become: yes
  vars:
    pubkeyfile: ../Secret/ssh_monkey/id_rsa.pub
  authorized_key:
    user: monkey
    key: "{{ lookup('file', pubkeyfile) }}"
    manage_dir: yes

- name: Add {{ ansible_user }} to monkey group.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: monkey

7.7. Install Rsync

Monkey uses Rsync to keep the institute's public web site up-to-date.

roles_t/front/tasks/main.yml

- name: Install rsync.
  become: yes
  apt: pkg=rsync

7.8. Install Unattended Upgrades

The institute prefers to install security updates as soon as possible.

roles_t/front/tasks/main.yml

- name: Install basic software.
  become: yes
  apt: pkg=unattended-upgrades

7.9. Configure User Accounts

User accounts are created immediately so that Postfix and Dovecot can start delivering email immediately, without returning "no such recipient" replies. The Account Management chapter describes the members and usernames variables used below.

roles_t/front/tasks/main.yml

- name: Create user accounts.
  become: yes
  user:
    name: "{{ item }}"
    password: "{{ members[item].password_front }}"
    update_password: always
    home: /home/{{ item }}
  loop: "{{ usernames }}"
  when: members[item].status == 'current'
  tags: accounts

- name: Disable former users.
  become: yes
  user:
    name: "{{ item }}"
    password: "!"
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts

- name: Revoke former user authorized_keys.
  become: yes
  file:
    path: /home/{{ item }}/.ssh/authorized_keys
    state: absent
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts

7.10. Install Server Certificate

The servers on Front use the same certificate (and key) to authenticate themselves to institute clients. They share the /etc/server.crt and /etc/server.key files, the latter only readable by root.

roles_t/front/tasks/main.yml

- name: Install server certificate/key.
  become: yes
  copy:
    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
    dest: /etc/server.{{ item.typ }}
    mode: "{{ item.mode }}"
    force: no
  loop:
  - { path: "issued/{{ domain_name }}", typ: crt,
      mode: "u=r,g=r,o=r" }
  - { path: "private/{{ domain_name }}", typ: key,
      mode: "u=r,g=,o=" }
  notify:
  - Restart Postfix.
  - Restart Dovecot.

7.11. Configure Postfix on Front

Front uses Postfix to provide the institute's public SMTP service, and uses the institute's domain name for its host name. The default Debian configuration (for an "Internet Site") is nearly sufficient. Manual installation may prompt for configuration type and mail name. The appropriate answers are listed here but will be checked (corrected) by Ansible tasks below.

  • General type of mail configuration: Internet Site
  • System mail name: small.example.org
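
When Postfix must be installed without prompts (e.g. on a fresh machine before Ansible runs), these answers can be preseeded. The following sketch uses debconf-set-selections with the standard Postfix debconf keys.

sysadm@front$ echo "postfix postfix/main_mailer_type select Internet Site" \
sysadm@front_     | sudo debconf-set-selections
sysadm@front$ echo "postfix postfix/mailname string small.example.org" \
sysadm@front_     | sudo debconf-set-selections
sysadm@front$ sudo apt install postfix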

As discussed in The Email Service above, Front's Postfix configuration includes site-wide support for larger message sizes, shorter queue times, the relaying configuration, and the common path to incoming emails. These and a few Front-specific Postfix configuration settings make up the complete configuration (below).

Front relays messages from the institute's public WireGuard™ subnet, via which Core relays messages from the campus.

postfix-front-networks
- p: mynetworks
  v: >-
     {{ public_wg_net_cidr }}
     127.0.0.0/8
     [::ffff:127.0.0.0]/104
     [::1]/128

Front uses one recipient restriction to make things difficult for spammers, with permit_mynetworks at the start so as not to make things difficult for internal hosts, which do not have (public) domain names.

postfix-front-restrictions
- p: smtpd_recipient_restrictions
  v: >-
     permit_mynetworks
     reject_unauth_pipelining
     reject_unauth_destination
     reject_unknown_sender_domain

Front uses Postfix header checks to strip Received headers from outgoing messages. These headers contain campus host and network names and addresses in the clear (un-encrypted). Stripping them improves network privacy and security. Front also strips User-Agent headers just to make it harder to target the program(s) members use to open their email. These headers should be stripped only from outgoing messages; incoming messages are delivered locally, without smtp_header_checks.

postfix-header-checks
- p: smtp_header_checks
  v: regexp:/etc/postfix/header_checks.cf
postfix-header-checks-content
/^Received:/    IGNORE
/^User-Agent:/  IGNORE
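
The effect of the pattern file can be spot-checked on Front with postmap -q, which prints the action matched for a given header line (a quick test, assuming the file is installed as below):

# Both lookups should print IGNORE.
postmap -q "Received: from core.small.private" \
        regexp:/etc/postfix/header_checks.cf
postmap -q "User-Agent: Evolution 3.46" \
        regexp:/etc/postfix/header_checks.cf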

The complete Postfix configuration for Front follows. In addition to the options already discussed, it must override the loopback-only Debian default for inet_interfaces.

postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt }
- { p: smtpd_tls_key_file, v: /etc/server.key }
<<postfix-front-networks>>
<<postfix-front-restrictions>>
<<postfix-relaying>>
<<postfix-message-size>>
<<postfix-queue-times>>
<<postfix-maildir>>
<<postfix-header-checks>>

The following Ansible tasks install Postfix, modify /etc/postfix/main.cf according to the settings given above, and start and enable the service.

roles_t/front/tasks/main.yml

- name: Install Postfix.
  become: yes
  apt: pkg=postfix

- name: Configure Postfix.
  become: yes
  lineinfile:
    path: /etc/postfix/main.cf
    regexp: "^ *{{ item.p }} *="
    line: "{{ item.p }} = {{ item.v }}"
  loop:
  <<postfix-front>>
  notify: Restart Postfix.

- name: Install Postfix header_checks.
  become: yes
  copy:
    content: |
      <<postfix-header-checks-content>>
    dest: /etc/postfix/header_checks.cf
  notify: Postmap header checks.

- name: Start Postfix.
  become: yes
  systemd:
    service: postfix
    state: started
  tags: actualizer

- name: Enable Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
roles_t/front/handlers/main.yml

- name: Restart Postfix.
  become: yes
  systemd:
    service: postfix
    state: restarted
  tags: actualizer

- name: Postmap header checks.
  become: yes
  command:
    chdir: /etc/postfix/
    cmd: postmap header_checks.cf
  notify: Restart Postfix.

7.12. Configure Public Email Aliases

The institute's Front needs to deliver email addressed to a number of common aliases as well as those advertised on the web site. System daemons like cron(8) may also send email to system accounts like monkey. The following aliases make these customary mailboxes available. The aliases are installed in /etc/aliases in a block with a special marker so that additional blocks can be installed by other Ansible roles. Note that the postmaster alias forwards to root in the default Debian configuration, and the block below includes the crucial root alias, forwarding it to the administrator's account ({{ ansible_user }}).

roles_t/front/tasks/main.yml
- name: Install institute email aliases.
  become: yes
  blockinfile:
    block: |
        abuse:          root
        webmaster:      root
        admin:          root
        monkey:         monkey@{{ front_wg_addr }}
        root:           {{ ansible_user }}
    path: /etc/aliases
    marker: "# {mark} INSTITUTE MANAGED BLOCK"
  notify: New aliases.
roles_t/front/handlers/main.yml

- name: New aliases.
  become: yes
  command: newaliases
  tags: actualizer

7.13. Configure Dovecot IMAPd

Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to pick up messages. Front's Dovecot configuration is largely the Debian default with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core accesses Front via VPN, but helps to ensure privacy even when members must, in extremis, access recent email directly from their accounts on Front. For more information about Front's role in the institute's email services, see The Email Service.

The institute follows the recommendation in the package README.Debian (in /usr/share/dovecot-core/). Note that the default "snake oil" certificate can be replaced with one signed by a recognized authority (e.g. Let's Encrypt) so that email apps will not ask about trusting the self-signed certificate.

The following Ansible tasks install Dovecot's IMAP daemon and its /etc/dovecot/local.conf configuration file, then start the service and enable it to start at every reboot.

roles_t/front/tasks/main.yml

- name: Install Dovecot IMAPd.
  become: yes
  apt: pkg=dovecot-imapd

- name: Configure Dovecot IMAPd.
  become: yes
  copy:
    content: |
      <<dovecot-tls>>
      ssl_cert = </etc/server.crt
      ssl_key = </etc/server.key
      <<dovecot-ports>>
      <<dovecot-maildir>>
    dest: /etc/dovecot/local.conf
  notify: Restart Dovecot.

- name: Start Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: started
  tags: actualizer

- name: Enable Dovecot.
  become: yes
  systemd:
    service: dovecot
    enabled: yes
roles_t/front/handlers/main.yml

- name: Restart Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: restarted
  tags: actualizer
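
A quick way to confirm that Dovecot is answering over TLS is to open an IMAPS connection from Core, which reaches Front at its public VPN address (10.177.87.1 in the examples; IMAPS on the standard port 993 is assumed here):

# Expect Dovecot's "* OK ... ready." greeting.
openssl s_client -connect 10.177.87.1:993 -quiet </dev/null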

7.14. Configure Apache2

This is the small institute's public web site. It is simple, static, and thus (hopefully) difficult to subvert. There are no server-side scripts to run. The standard Debian install runs the server under the www-data account, which needs no special permissions: the server serves only world-readable files.

The server's document root, /home/www/, is separate from the Debian default /var/www/html/ and (presumably) on the largest disk partition. The directory tree, from the document root to the leaf HTML files, should be owned by monkey, and only writable by its owner. It should not be writable by the Apache2 server (running as www-data).

The institute uses several SSL directives to trim protocol and cipher suite compatibility down, eliminating old and insecure methods and providing for forward secrecy. Along with an up-to-date Let's Encrypt certificate, these settings win the institute's web site an A rating from Qualys SSL Labs (https://www.ssllabs.com/).

The apache-ciphers block below is included last in the Apache2 configuration, so that its SSLCipherSuite directive can override (narrow) any list of ciphers set earlier (e.g. by Let's Encrypt). The protocols and cipher suites specified here were taken from https://www.ssllabs.com/projects/best-practices in 2022.

apache-ciphers
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder on
SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
                    'ECDHE-ECDSA-AES256-GCM-SHA384',
                    'ECDHE-ECDSA-AES128-SHA',
                    'ECDHE-ECDSA-AES256-SHA',
                    'ECDHE-ECDSA-AES128-SHA256',
                    'ECDHE-ECDSA-AES256-SHA384',
                    'ECDHE-RSA-AES128-GCM-SHA256',
                    'ECDHE-RSA-AES256-GCM-SHA384',
                    'ECDHE-RSA-AES128-SHA',
                    'ECDHE-RSA-AES256-SHA',
                    'ECDHE-RSA-AES128-SHA256',
                    'ECDHE-RSA-AES256-SHA384',
                    'DHE-RSA-AES128-GCM-SHA256',
                    'DHE-RSA-AES256-GCM-SHA384',
                    'DHE-RSA-AES128-SHA',
                    'DHE-RSA-AES256-SHA',
                    'DHE-RSA-AES128-SHA256',
                    'DHE-RSA-AES256-SHA256',
                    '!aNULL',
                    '!eNULL',
                    '!LOW',
                    '!3DES',
                    '!MD5',
                    '!EXP',
                    '!PSK',
                    '!SRP',
                    '!DSS',
                    '!RC4' ] |join(":") }}

The institute supports public member (static) web pages. A member can put an index.html file in their ~/Public/HTML/ directory on Front and it will be served as https://small.example.org/~member/ (if the member's account name is member and the file is world readable).

On Front, a member's web pages are available only when they appear in /home/www-users/ (via a symbolic link), giving the administration more control over what appears on the public web site. The tasks below create or remove the symbolic links.

The following are the necessary Apache2 directives: a UserDir directive naming /home/www-users/ and a matching Directory block that includes the standard Require and AllowOverride directives used on all of the institute's web sites.

apache-userdir-front
UserDir /home/www-users
<Directory /home/www-users/>
        Require all granted
        AllowOverride None
</Directory>

The institute requires the use of HTTPS on Front, so its default HTTP virtual host permanently redirects requests to their corresponding HTTPS URLs.

apache-redirect-front
<VirtualHost *:80>
        Redirect permanent / https://{{ domain_name }}/
</VirtualHost>

The complete Apache2 configuration for Front is given below. It is installed in /etc/apache2/sites-available/{{ domain_name }}.conf (as expected by Let's Encrypt's Certbot). It includes the fragments described above and adds a VirtualHost block for the HTTPS service (also as expected by Certbot). The VirtualHost optionally includes an additional configuration file to allow other Ansible roles to specialize this configuration without disturbing the institute file.

The DocumentRoot directive is accompanied by a Directory block that authorizes access to the tree, and ensures .htaccess files within the tree are disabled for speed and security. This and most of Front's Apache2 directives (below) are intended for the top level, not the inside of a VirtualHost block. They should apply globally.

apache-front
ServerName {{ domain_name }}
ServerAdmin webmaster@{{ domain_name }}

DocumentRoot /home/www
<Directory /home/www/>
        Require all granted
        AllowOverride None
</Directory>

<<apache-userdir-front>>

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined

<<apache-redirect-front>>

<VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile /etc/server.crt
        SSLCertificateKeyFile /etc/server.key
        IncludeOptional \
            /etc/apache2/sites-available/{{ domain_name }}-vhost.conf
</VirtualHost>

<<apache-ciphers>>

Ansible installs the configuration above in e.g. /etc/apache2/sites-available/small.example.org.conf and runs a2ensite -q small.example.org to enable it.

roles_t/front/tasks/main.yml

- name: Install Apache2.
  become: yes
  apt: pkg=apache2

- name: Enable Apache2 modules.
  become: yes
  apache2_module:
    name: "{{ item }}"
  loop: [ ssl, userdir ]
  notify: Restart Apache2.

- name: Create DocumentRoot.
  become: yes
  file:
    path: /home/www
    state: directory
    owner: monkey
    group: monkey

- name: Configure web site.
  become: yes
  copy:
    content: |
      <<apache-front>>
    dest: /etc/apache2/sites-available/{{ domain_name }}.conf
  notify: Restart Apache2.

- name: Enable web site.
  become: yes
  command:
    cmd: a2ensite -q {{ domain_name }}
    creates: /etc/apache2/sites-enabled/{{ domain_name }}.conf
  notify: Restart Apache2.

- name: Start Apache2.
  become: yes
  systemd:
    service: apache2
    state: started
  tags: actualizer

- name: Enable Apache2.
  become: yes
  systemd:
    service: apache2
    enabled: yes
roles_t/front/handlers/main.yml

- name: Restart Apache2.
  become: yes
  systemd:
    service: apache2
    state: restarted
  tags: actualizer

Furthermore, the default web site and its HTTPS version are disabled so that they do not interfere with their replacement.

roles_t/front/tasks/main.yml

- name: Disable default vhosts.
  become: yes
  file:
    path: /etc/apache2/sites-enabled/{{ item }}
    state: absent
  loop: [ 000-default.conf, default-ssl.conf ]
  notify: Restart Apache2.

The redundant default other-vhosts-access-log configuration option is also disabled: there are no other virtual hosts, and it would store the same records as access.log.

roles_t/front/tasks/main.yml

- name: Disable other-vhosts-access-log option.
  become: yes
  file:
    path: /etc/apache2/conf-enabled/other-vhosts-access-log.conf
    state: absent
  notify: Restart Apache2.

Finally, the UserDir is created and populated with symbolic links to the users' ~/Public/HTML/ directories.

roles_t/front/tasks/main.yml

- name: Create UserDir.
  become: yes
  file:
    path: /home/www-users/
    state: directory

- name: Create UserDir links.
  become: yes
  file:
    path: /home/www-users/{{ item }}
    src: /home/{{ item }}/Public/HTML
    state: link
    force: yes
    follow: false
  loop: "{{ usernames }}"
  when: members[item].status == 'current'
  tags: accounts

- name: Disable former UserDir links.
  become: yes
  file:
    path: /home/www-users/{{ item }}
    state: absent
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts
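
Once the site is enabled, the HTTPS redirect and a member page can be spot-checked from any host with curl. (The member account name below is the hypothetical example used earlier; a 200 response assumes a world-readable ~/Public/HTML/index.html and an enabled link.)

# Expect 301 Moved Permanently and a Location: https://... header.
curl -sI http://small.example.org/ | grep -iE '^(HTTP|location)'
# Expect 200 OK for a current member's page.
curl -sI https://small.example.org/~member/ | head -n 1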

7.15. Configure Public WireGuard™ Subnet

Front uses WireGuard™ to provide a public (Internet accessible) VPN service. Core has an interface on this VPN and is expected to forward packets between it and the institute's other private networks.

The following tasks install WireGuard™, configure it with private/front-wg0.conf (or private/front-wg0-empty.conf if it does not exist), and enable the service.

roles_t/front/tasks/main.yml

- name: Enable IP forwarding.
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

- name: Install WireGuard™.
  become: yes
  apt: pkg=wireguard

- name: Configure WireGuard™.
  become: yes
  vars:
    srcs:
      - ../private/front-wg0.conf
      - ../private/front-wg0-empty.conf
  copy:
    src: "{{ lookup('first_found', srcs) }}"
    dest: /etc/wireguard/wg0.conf
    mode: u=r,g=,o=
    owner: root
    group: root
  notify: Restart WireGuard™.
  tags: accounts

- name: Start WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: started
  tags: actualizer

- name: Enable WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    enabled: yes
roles_t/front/handlers/main.yml

- name: Restart WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: restarted
  tags: actualizer

The "empty" WireGuard™ configuration file (below) is used until the ./inst client command adds the first client, and generates an actual private/front-wg0.conf.

private/front-wg0-empty.conf
[Interface]
Address = 10.177.87.1/24
ListenPort = 39608
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i 192.168.56.1
PostUp = resolvectl domain %i small.private

7.15.1. Example private/front-wg0.conf

The example private/front-wg0.conf below recognizes Core by its public key and routes the institute's private networks to it. It also recognizes Dick's notebook and his (replacement) phone, assigning them host numbers 4 and 6 on the VPN.

This is just an example. The actual file is edited by the ./inst client command and so is not tangled from the following block.

Example private/front-wg0.conf
[Interface]
Address = 10.177.87.1/24
ListenPort = 39608
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i 192.168.56.1
PostUp = resolvectl domain %i small.private

# Core
[Peer]
PublicKey = lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
AllowedIPs = 10.177.87.2
AllowedIPs = 192.168.56.0/24
AllowedIPs = 192.168.57.0/24
AllowedIPs = 10.84.139.0/24

# dick
[Peer]
PublicKey = 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
AllowedIPs = 10.177.87.4

# dicks-razr
[Peer]
PublicKey = zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
AllowedIPs = 10.177.87.6

The configuration used on Dick's notebook when it is abroad looks like this:

WireGuard™ tunnel on Dick's notebook, used abroad
[Interface]
Address = 10.177.87.3
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i 192.168.56.1
PostUp = resolvectl domain %i small.private

[Peer]
EndPoint = 192.168.15.4:39608
PublicKey = S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
AllowedIPs = 10.177.87.1
AllowedIPs = 192.168.56.0/24
AllowedIPs = 192.168.57.0/24
AllowedIPs = 10.177.87.0/24
AllowedIPs = 10.84.139.0/24
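
Assuming the configuration above is saved as /etc/wireguard/wg0.conf on the notebook (how it gets there is up to the member), the tunnel can be brought up and checked like so:

sudo wg-quick up wg0
# A recent timestamp here means the handshake with Front succeeded.
sudo wg show wg0 latest-handshakes
# Front's address on the public VPN should now answer.
ping -c 1 10.177.87.1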

7.16. Configure Kamailio

Front uses Kamailio to provide a SIP service on the public VPN so that members abroad can chat privately. This is a connection-less UDP service that can be used with or without encryption. The VPN's encryption can be relied upon or an extra layer can be used when necessary. (Apps cannot tell if a network is secure and often assume the luser is an idiot, so they insist on doing some encryption.)

Kamailio listens on all network interfaces by default, but the institute expects its SIP traffic to be aggregated and encrypted via the public VPN. To enforce this expectation, Kamailio is instructed to listen only on Front's public VPN. The private name sip.small.private resolves to this address for the convenience of members configuring SIP clients. The server configuration specifies the actual IP, known here as front_wg_addr.

kamailio
listen=udp:{{ front_wg_addr }}:5060

The Ansible tasks that install and configure Kamailio follow, but before Kamailio is configured (and thus started), the service is tweaked by a Systemd configuration drop-in, which must be installed (and Systemd reloaded) before the service starts.

The first step is to install Kamailio.

roles_t/front/tasks/main.yml

- name: Install Kamailio.
  become: yes
  apt: pkg=kamailio

The configuration drop-in concerns the network device on which Kamailio will be listening, the wg0 device created by WireGuard™. The added configuration settings inform Systemd that Kamailio should not be started before the wg0 device has appeared.

roles_t/front/tasks/main.yml

- name: Create Kamailio/Systemd configuration drop.
  become: yes
  file:
    path: /etc/systemd/system/kamailio.service.d
    state: directory

- name: Create Kamailio dependence on WireGuard™ interface.
  become: yes
  copy:
    content: |
      [Unit]
      After=wg-quick@wg0.service
      Requires=sys-devices-virtual-net-wg0.device
    dest: /etc/systemd/system/kamailio.service.d/depend.conf
  notify: Reload Systemd.
roles_t/front/handlers/main.yml

- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes
  tags: actualizer

Finally, Kamailio can be configured and started.

roles_t/front/tasks/main.yml

- name: Configure Kamailio.
  become: yes
  copy:
    content: |
      <<kamailio>>
    dest: /etc/kamailio/kamailio-local.cfg
  notify: Restart Kamailio.

- name: Start Kamailio.
  become: yes
  systemd:
    service: kamailio
    state: started
  tags: actualizer

- name: Enable Kamailio.
  become: yes
  systemd:
    service: kamailio
    enabled: yes
roles_t/front/handlers/main.yml

- name: Restart Kamailio.
  become: yes
  systemd:
    service: kamailio
    state: restarted
  tags: actualizer

8. The Core Role

The core role configures many essential campus network services as well as the institute's private cloud, so the core machine has horsepower (CPUs and RAM) and large disks and is prepared with a Debian install and remote access to a privileged administrator's account. (For details, see The Core Machine.)

8.1. Role Defaults

As in The Front Role, the core role sets a number of variables to default values in its defaults/main.yml file.

roles_t/core/defaults/main.yml
---
<<network-vars>>
<<address-vars>>
<<membership-rolls>>

8.2. Include Particulars

The first task, as in The Front Role, is to include the institute particulars and membership roll.

roles_t/core/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml
  tags: accounts

- name: Include private variables.
  include_vars: ../private/vars.yml
  tags: accounts

- name: Include members.
  include_vars: "{{ lookup('first_found', membership_rolls) }}"
  tags: accounts

8.3. Configure Hostname

This task ensures that Core's /etc/hostname and /etc/mailname are correct. Core accepts email addressed to the institute's public or private domain names, e.g. to dick@small.example.org as well as dick@small.private. The correct /etc/mailname is essential to proper email delivery.

roles_t/core/tasks/main.yml

- name: Configure hostname.
  become: yes
  copy:
    content: "{{ item.name }}\n"
    dest: "{{ item.file }}"
  loop:
  - { name: "core.{{ domain_priv }}", file: /etc/mailname }
  - { name: "{{ inventory_hostname }}", file: /etc/hostname }

- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
  when: inventory_hostname != ansible_hostname
  tags: actualizer

8.4. Configure Systemd Resolved

Core runs the campus name server, so Resolved is configured to use it (or dns.google), to include the institute's domain in its search list, and to disable its cache and stub listener.

roles_t/core/tasks/main.yml

- name: Configure resolved.
  become: yes
  lineinfile:
    path: /etc/systemd/resolved.conf
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  loop:
  - { regexp: '^ *DNS *=', line: "DNS=127.0.0.1" }
  - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" }
  - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" }
  - { regexp: '^ *Cache *=', line: "Cache=no" }
  - { regexp: '^ *DNSStubListener *=', line: "DNSStubListener=no" }
  notify:
  - Reload Systemd.
  - Restart Systemd resolved.
roles_t/core/handlers/main.yml
---
- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes
  tags: actualizer

- name: Restart Systemd resolved.
  become: yes
  systemd:
    service: systemd-resolved
    state: restarted
  tags: actualizer

8.5. Configure Core NetworkD

Core's network interface is statically configured using the systemd-networkd configuration files 10-lan.link and 10-lan.network installed in /etc/systemd/network/. Those files statically assign Core's IP address (as well as the campus name server and search domain), and its default route through Gate. A second route, through Core itself to Front, is advertised to other hosts, and is routed through a WireGuard™ interface connected to Front's public WireGuard™ VPN.

Note that the [Match] sections of the .network files should specify only a MACAddress. Getting systemd-udevd to rename interfaces has thus far been futile (short of a reboot), so specifying a Name means the interface does not match, leaving it un-configured (until the next reboot).

The configuration needs the MAC address of the primary (only) NIC, an example of which is given here. (A clever way to extract that address from ansible_facts would be appreciated. The ansible_default_ipv4 fact was an empty hash at first boot on a simulated campus Ethernet.)
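
Lacking that, the address is easy enough to read off the prospective Core machine itself; the little loop below (a sketch; ip link show works just as well) lists each interface with its MAC address:

# Print "interface<TAB>MAC" for every network interface.
for i in /sys/class/net/*; do
    printf '%s\t%s\n' "$(basename "$i")" "$(cat "$i/address")"
done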

private/vars.yml
core_lan_mac:               08:00:27:b3:e5:5f
roles_t/core/tasks/main.yml

- name: Install 10-lan.link.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ core_lan_mac }}

      [Link]
      Name=lan
    dest: /etc/systemd/network/10-lan.link

- name: Install 10-lan.network.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ core_lan_mac }}

      [Network]
      Address={{ core_addr_cidr }}
      Gateway={{ gate_addr }}
      DNS={{ core_addr }}
      Domains={{ domain_priv }}
    dest: /etc/systemd/network/10-lan.network
  notify: Reload networkd.
roles_t/core/handlers/main.yml

- name: Reload networkd.
  become: yes
  command: networkctl reload
  tags: actualizer

8.6. Configure DHCP For the Private Ethernet

Core speaks DHCP (Dynamic Host Configuration Protocol) using the Internet Software Consortium's DHCP server. The server assigns unique network addresses to hosts plugged into the private Ethernet as well as advertising local net services, especially the local Domain Name Service.

The example configuration file, private/core-dhcpd.conf, uses RFC3442's classless static route option to encode a second (non-default) static route. The default route is through the campus ISP at Gate. The second route directs traffic for the public VPN to Front through Core: in the rfc3442-routes option below, 24, 10,177,87, 192,168,56,1 encodes the route to 10.177.87.0/24 via 192.168.56.1 (Core), and 0, 192,168,56,2 encodes the default route via 192.168.56.2 (Gate). This is just an example file, with MAC addresses chosen to match VirtualBox test machines. In actual use, private/core-dhcpd.conf is replaced with the campus's actual configuration.

private/core-dhcpd.conf
option domain-name "small.private";
option domain-name-servers 192.168.56.1;

default-lease-time 3600;
max-lease-time 7200;

ddns-update-style none;

authoritative;

log-facility daemon;

option rfc3442-routes code 121 = array of integer 8;

subnet 192.168.56.0 netmask 255.255.255.0 {
  option subnet-mask 255.255.255.0;
  option broadcast-address 192.168.56.255;
  option routers 192.168.56.2;
  option ntp-servers 192.168.56.1;
  option rfc3442-routes 24, 10,177,87, 192,168,56,1,
                        0,             192,168,56,2;
}

host dick {
  hardware ethernet 08:00:27:dc:54:b5; fixed-address 192.168.56.4; }
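
Before (or after) deploying a replacement file, its syntax can be checked with the DHCP server's test mode, e.g. on Core once the package is installed:

# Exits non-zero and complains if the configuration cannot be parsed.
sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf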

The following tasks install ISC's DHCP server and configure it with the real private/core-dhcpd.conf (not the example above).

roles_t/core/tasks/main.yml

- name: Install DHCP server.
  become: yes
  apt: pkg=isc-dhcp-server

- name: Configure DHCP interface.
  become: yes
  lineinfile:
    path: /etc/default/isc-dhcp-server
    regexp: "^INTERFACESv4="
    line: "INTERFACESv4=\"lan\""
  notify: Restart DHCP server.

- name: Configure DHCP subnet.
  become: yes
  copy:
    src: ../private/core-dhcpd.conf
    dest: /etc/dhcp/dhcpd.conf
  notify: Restart DHCP server.

- name: Start DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    state: started
  tags: actualizer

- name: Enable DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    enabled: yes
roles_t/core/handlers/main.yml

- name: Restart DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    state: restarted
  tags: actualizer

8.7. Configure BIND9

Core uses BIND9 to provide name service for the institute as described in The Name Service. The configuration supports reverse name lookups, resolving many private network addresses to private domain names.

The following tasks install and configure BIND9 on Core.

roles_t/core/tasks/main.yml

- name: Install BIND9.
  become: yes
  apt: pkg=bind9

- name: Configure BIND9 with named.conf.options.
  become: yes
  copy:
    content: |
      <<bind-options>>
    dest: /etc/bind/named.conf.options
  notify: Reload BIND9.

- name: Configure BIND9 with named.conf.local.
  become: yes
  copy:
    content: |
      <<bind-local>>
    dest: /etc/bind/named.conf.local
  notify: Reload BIND9.

- name: Install BIND9 zonefiles.
  become: yes
  copy:
    src: ../private/db.{{ item }}
    dest: /etc/bind/db.{{ item }}
  loop: [ domain, private, public_vpn, campus_vpn ]
  notify: Reload BIND9.

- name: Start BIND9.
  become: yes
  systemd:
    service: bind9
    state: started
  tags: actualizer

- name: Enable BIND9.
  become: yes
  systemd:
    service: bind9
    enabled: yes
roles_t/core/handlers/main.yml

- name: Reload BIND9.
  become: yes
  systemd:
    service: bind9
    state: reloaded
  tags: actualizer

Examples of the necessary zone files, for the "Install BIND9 zonefiles." task above, are given below. If the campus ISP provided one or more IP addresses for stable name servers, those should probably be used as forwarders rather than Google.

bind-options
acl "trusted" {
        {{ private_net_cidr }};
        {{ wild_net_cidr }};
        {{ public_wg_net_cidr }};
        {{ campus_wg_net_cidr }};
        localhost;
};

options {
        directory "/var/cache/bind";

        forwarders {
                8.8.4.4;
                8.8.8.8;
        };

        allow-query { any; };
        allow-recursion { trusted; };
        allow-query-cache { trusted; };

        dnssec-validation yes;

        listen-on {
                {{ core_addr }};
                localhost;
        };
};
bind-local
include "/etc/bind/zones.rfc1918";

zone "{{ domain_priv }}." {
        type master;
        file "/etc/bind/db.domain";
};

zone "{{ private_net_cidr | ansible.utils.ipaddr('revdns')
         | regex_replace('^0\.','') }}" {
        type master;
        file "/etc/bind/db.private";
};

zone "{{ public_wg_net_cidr | ansible.utils.ipaddr('revdns')
         | regex_replace('^0\.','') }}" {
        type master;
        file "/etc/bind/db.public_vpn";
};

zone "{{ campus_wg_net_cidr | ansible.utils.ipaddr('revdns')
         | regex_replace('^0\.','') }}" {
        type master;
        file "/etc/bind/db.campus_vpn";
};
private/db.domain
;
; BIND data file for a small institute's PRIVATE domain names.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
mail    IN      CNAME   core.small.private.
smtp    IN      CNAME   core.small.private.
ns      IN      CNAME   core.small.private.
www     IN      CNAME   core.small.private.
test    IN      CNAME   core.small.private.
live    IN      CNAME   core.small.private.
ntp     IN      CNAME   core.small.private.
sip     IN      A       10.177.87.1
;
core    IN      A       192.168.56.1
gate    IN      A       192.168.56.2
private/db.private
;
; BIND reverse data file for a small institute's private Ethernet.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     core.small.private.
2       IN      PTR     gate.small.private.
private/db.public_vpn
;
; BIND reverse data file for a small institute's public VPN.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     front-p.small.private.
2       IN      PTR     core-p.small.private.
private/db.campus_vpn
;
; BIND reverse data file for a small institute's campus VPN.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     gate-c.small.private.
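
After BIND9 reloads, the configuration and zone files can be checked with the standard BIND tools, and a couple of dig queries against Core confirm forward and reverse resolution. The names and addresses below are the example values used throughout.

sudo named-checkconf /etc/bind/named.conf
named-checkzone small.private /etc/bind/db.domain
# Expect 192.168.56.1 and front-p.small.private respectively.
dig @192.168.56.1 core.small.private +short
dig @192.168.56.1 -x 10.177.87.1 +short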

8.8. Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned by groups root and adm. Adding the administrator's account to these groups speeds up debugging.

roles_t/core/tasks/main.yml

- name: Add {{ ansible_user }} to system groups.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: root,adm

8.9. Configure Monkey

The small institute runs cron jobs and web scripts that generate reports and perform checks. The un-privileged jobs are run by a system account named monkey. One of Monkey's more important jobs on Core is to run rsync to update the public web site on Front (as described in Configure Apache2).

roles_t/core/tasks/main.yml

- name: Create monkey.
  become: yes
  user:
    name: monkey
    password: "!"
    append: yes
    groups: staff

- name: Add {{ ansible_user }} to staff groups.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: monkey,staff

- name: Create /home/monkey/.ssh/.
  become: yes
  file:
    path: /home/monkey/.ssh
    state: directory
    mode: u=rwx,g=,o=
    owner: monkey
    group: monkey

- name: Configure monkey@core.
  become: yes
  copy:
    src: ../Secret/ssh_monkey/{{ item.name }}
    dest: /home/monkey/.ssh/{{ item.name }}
    mode: "{{ item.mode }}"
    owner: monkey
    group: monkey
  loop:
  - { name: config,      mode: "u=rw,g=r,o=" }
  - { name: id_rsa.pub,  mode: "u=rw,g=r,o=r" }
  - { name: id_rsa,      mode: "u=rw,g=,o=" }

- name: Configure Monkey SSH known hosts.
  become: yes
  vars:
    pubkeypath: ../Secret/ssh_front/etc/ssh
    pubkeyfile: "{{ pubkeypath }}/ssh_host_ecdsa_key.pub"
    pubkey: "{{ lookup('file', pubkeyfile) }}"
  lineinfile:
    regexp: "^{{ domain_name }},{{ front_addr }} ecdsa-sha2-nistp256 "
    line: "{{ domain_name }},{{ front_addr }} {{ pubkey }}"
    path: /home/monkey/.ssh/known_hosts
    create: yes
    owner: monkey
    group: monkey
    mode: "u=rw,g=,o="

8.10. Install Unattended Upgrades

The institute prefers to install security updates as soon as possible.

roles_t/core/tasks/main.yml

- name: Install basic software.
  become: yes
  apt: pkg=unattended-upgrades

8.11. Configure User Accounts

User accounts are created immediately so that restoring from backup can begin as soon as possible. The Account Management chapter describes the members and usernames variables.

roles_t/core/tasks/main.yml

- name: Create user accounts.
  become: yes
  user:
    name: "{{ item }}"
    password: "{{ members[item].password_core }}"
    update_password: always
    home: /home/{{ item }}
  loop: "{{ usernames }}"
  when: members[item].status == 'current'
  tags: accounts

- name: Disable former users.
  become: yes
  user:
    name: "{{ item }}"
    password: "!"
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts

- name: Revoke former user authorized_keys.
  become: yes
  file:
    path: /home/{{ item }}/.ssh/authorized_keys
    state: absent
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts

8.12. Install Server Certificate

The servers on Core use the same certificate (and key) to authenticate themselves to institute clients. They share the /etc/server.crt and /etc/server.key files, the latter only readable by root.

roles_t/core/tasks/main.yml

- name: Install server certificate/key.
  become: yes
  copy:
    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
    dest: /etc/server.{{ item.typ }}
    mode: "{{ item.mode }}"
  loop:
  - { path: "issued/core.{{ domain_priv }}", typ: crt,
      mode: "u=r,g=r,o=r" }
  - { path: "private/core.{{ domain_priv }}", typ: key,
      mode: "u=r,g=,o=" }
  notify:
  - Restart Postfix.
  - Restart Dovecot.

8.13. Install Chrony

Core uses Chrony to provide a time synchronization service to the campus. The daemon's default configuration is fine.

roles_t/core/tasks/main.yml

- name: Install Chrony.
  become: yes
  apt: pkg=chrony

- name: Configure NTP service.
  become: yes
  copy:
    content: |
      allow {{ private_net_cidr }}
      allow {{ public_wg_net_cidr }}
      allow {{ campus_wg_net_cidr }}
    dest: /etc/chrony/conf.d/institute.conf
  notify: Restart Chrony.
roles_t/core/handlers/main.yml

- name: Restart Chrony.
  become: yes
  systemd:
    service: chrony
    state: restarted

8.14. Configure Postfix on Core

Core uses Postfix to provide SMTP service to the campus. The default Debian configuration (for an "Internet Site") is nearly sufficient. Manual installation may prompt for configuration type and mail name. The appropriate answers are listed here but will be checked (corrected) by Ansible tasks below.

  • General type of mail configuration: Internet Site
  • System mail name: core.small.private

As discussed in The Email Service above, Core delivers email addressed to any internal domain name locally, and uses its smarthost Front to relay the rest. Core is reachable only on institute networks, so there is little benefit in enabling TLS, but it does need to handle larger messages and respect the institute's expectation of shortened queue times.

Core relays messages from any institute network.

postfix-core-networks
- p: mynetworks
  v: >-
     {{ private_net_cidr }}
     {{ public_wg_net_cidr }}
     {{ campus_wg_net_cidr }}
     127.0.0.0/8
     [::ffff:127.0.0.0]/104
     [::1]/128

Core uses Front to relay messages to the Internet.

postfix-core-relayhost
- { p: relayhost, v: "[{{ front_wg_addr }}]" }

Core uses a Postfix transport file, /etc/postfix/transport, to specify local delivery for email addressed to any internal domain name. Note the leading dot at the beginning of each line in the file.

postfix-transport
.{{ domain_name }}      local:$myhostname
.{{ domain_priv }}      local:$myhostname

The complete list of Core's Postfix settings for /etc/postfix/main.cf follow.

postfix-core
<<postfix-relaying>>
- { p: smtpd_tls_security_level, v: none }
- { p: smtp_tls_security_level, v: none }
<<postfix-message-size>>
<<postfix-queue-times>>
<<postfix-maildir>>
<<postfix-core-networks>>
<<postfix-core-relayhost>>
- { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" }

The following Ansible tasks install Postfix, modify /etc/postfix/main.cf, create /etc/postfix/transport, and start and enable the service. Whenever /etc/postfix/transport is changed, the postmap transport command must also be run.

roles_t/core/tasks/main.yml

- name: Install Postfix.
  become: yes
  apt: pkg=postfix

- name: Configure Postfix.
  become: yes
  lineinfile:
    path: /etc/postfix/main.cf
    regexp: "^ *{{ item.p }} *="
    line: "{{ item.p }} = {{ item.v }}"
  loop:
  <<postfix-core>>
  - { p: transport_maps, v: "hash:/etc/postfix/transport" }
  notify: Restart Postfix.

- name: Configure Postfix transport.
  become: yes
  copy:
    content: |
      <<postfix-transport>>
    dest: /etc/postfix/transport
  notify: Postmap transport.

- name: Start Postfix.
  become: yes
  systemd:
    service: postfix
    state: started
  tags: actualizer

- name: Enable Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
roles_t/core/handlers/main.yml

- name: Restart Postfix.
  become: yes
  systemd:
    service: postfix
    state: restarted
  tags: actualizer

- name: Postmap transport.
  become: yes
  command:
    chdir: /etc/postfix/
    cmd: postmap transport
  notify: Restart Postfix.
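
Once the Postmap transport handler has run, the transport map can be spot-checked on Core with postmap -q; both internal domains should map to local delivery:

# Each lookup should print: local:$myhostname
sudo postmap -q .small.private hash:/etc/postfix/transport
sudo postmap -q .small.example.org hash:/etc/postfix/transport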

8.15. Configure Private Email Aliases

The institute's Core needs to deliver email addressed to institute aliases including those advertised on the campus web site, in X.509 certificates, etc. System daemons like cron(8) may also send email to e.g. monkey. The following aliases are installed in /etc/aliases with a special marker so that additional blocks can be installed by more specialized roles.

roles_t/core/tasks/main.yml

- name: Install institute email aliases.
  become: yes
  blockinfile:
    block: |
        admin:          root
        www-data:       root
        monkey:         root
        root:           {{ ansible_user }}
    path: /etc/aliases
    marker: "# {mark} INSTITUTE MANAGED BLOCK"
  notify: New aliases.
roles_t/core/handlers/main.yml

- name: New aliases.
  become: yes
  command: newaliases
  tags: actualizer

8.16. Configure Dovecot IMAPd

Core uses Dovecot's IMAPd to store and serve member emails. As on Front, Core's Dovecot configuration is largely the Debian default with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core is only accessed from private (encrypted) networks, but helps to ensure privacy even when members accidentally attempt connections from outside the private networks. For more information about Core's role in the institute's email services, see The Email Service.

The institute follows the recommendation in the package README.Debian (in /usr/share/dovecot-core/) but replaces the default "snake oil" certificate with another, signed by the institute. (For more information about the institute's X.509 certificates, see Keys.)

The following Ansible tasks install Dovecot's IMAP daemon and its /etc/dovecot/local.conf configuration file, then start the service and enable it to start at every reboot.

roles_t/core/tasks/main.yml

- name: Install Dovecot IMAPd.
  become: yes
  apt: pkg=dovecot-imapd

- name: Configure Dovecot IMAPd.
  become: yes
  copy:
    content: |
      <<dovecot-tls>>
      ssl_cert = </etc/server.crt
      ssl_key = </etc/server.key
      <<dovecot-maildir>>
    dest: /etc/dovecot/local.conf
  notify: Restart Dovecot.

- name: Start Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: started
  tags: actualizer

- name: Enable Dovecot.
  become: yes
  systemd:
    service: dovecot
    enabled: yes
roles_t/core/handlers/main.yml

- name: Restart Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: restarted
  tags: actualizer

8.17. Configure Fetchmail

Core runs a fetchmail job for each member of the institute. Individual fetchmail jobs can run with the --idle option and thus can download new messages instantly. The jobs run as Systemd services and so are monitored and started at boot.

In the ~/.fetchmailrc template below, the item variable is a username, and members[item] is the membership record associated with the username. The template is only used when the record has a password_fetchmail key providing the member's plain-text password.

fetchmail-config
# Permissions on this file may be no greater than 0600.

set no bouncemail
set no spambounce
set no syslog
#set logfile /home/{{ item }}/.fetchmail.log

poll {{ front_wg_addr }} protocol imap timeout 15
    username {{ item }}
    password "{{ members[item].password_fetchmail }}" fetchall
    ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }}

The Systemd service description.

fetchmail-service
[Unit]
Description=Fetchmail --idle task for {{ item }}.
AssertPathExists=/home/{{ item }}/.fetchmailrc
After=wg-quick@wg0.service
Wants=sys-devices-virtual-net-wg0.device

[Service]
User={{ item }}
ExecStart=/usr/bin/fetchmail --idle
Restart=always
RestartSec=1m
NoNewPrivileges=true

[Install]
WantedBy=default.target

The following tasks install fetchmail, a ~/.fetchmailrc and Systemd .service file for each current member, start the services, and enable them to start on boot. To accommodate any member of the institute who may wish to run their own fetchmail job on their notebook, only members with a password_fetchmail key will be provided the Core service.

roles_t/core/tasks/main.yml

- name: Install fetchmail.
  become: yes
  apt: pkg=fetchmail

- name: Configure user fetchmails.
  become: yes
  copy:
    content: |
      <<fetchmail-config>>
    dest: /home/{{ item }}/.fetchmailrc
    owner: "{{ item }}"
    group: "{{ item }}"
    mode: u=rw,g=,o=
  loop: "{{ usernames }}"
  when:
  - members[item].status == 'current'
  - members[item].password_fetchmail is defined
  tags: accounts

- name: Create user fetchmail services.
  become: yes
  copy:
    content: |
      <<fetchmail-service>>
    dest: /etc/systemd/system/fetchmail-{{ item }}.service
  loop: "{{ usernames }}"
  when:
  - members[item].status == 'current'
  - members[item].password_fetchmail is defined
  tags: accounts

- name: Enable/Start user fetchmail services.
  become: yes
  systemd:
    service: fetchmail-{{ item }}.service
    enabled: yes
    state: started
  loop: "{{ usernames }}"
  when:
  - members[item].status == 'current'
  - members[item].password_fetchmail is defined
  tags: accounts, actualizer

Finally, any former member's Fetchmail service on Core should be stopped, disabled from restarting at boot, and perhaps even deleted.

roles_t/core/tasks/main.yml

- name: Stop former user fetchmail services.
  become: yes
  systemd:
    service: fetchmail-{{ item }}
    state: stopped
    enabled: no
  loop: "{{ usernames }}"
  when:
  - members[item].status != 'current'
  - members[item].password_fetchmail is defined
  tags: accounts

If the .service file is deleted, then Ansible cannot use the systemd module to stop the service, nor check that it remains stopped. Were that not an issue, the following task might be appropriate.


- name: Delete former user fetchmail services.
  become: yes
  file:
    path: /etc/systemd/system/fetchmail-{{ item }}.service
    state: absent
  loop: "{{ usernames }}"
  when:
  - members[item].status != 'current'
  - members[item].password_fetchmail is defined
  tags: accounts
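
In any case, a current member's Fetchmail service can be checked on Core like so (dick being the example member used throughout):

systemctl is-active fetchmail-dick.service
journalctl -u fetchmail-dick.service -n 5 --no-pager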

8.18. Configure Apache2

This is the small institute's campus web server. It hosts several web sites as described in The Web Services.

URL           Doc.Root      Description
http://live/  /WWW/live/    The live, public site.
http://test/  /WWW/test/    The next public site.
http://www/   /WWW/campus/  Campus home page.
http://core/  /var/www/     whatnot, e.g. Nextcloud

The live (and test) web site content is (eventually) copied to Front, so the live and test sites are configured as identically to Front's as possible. The directories and files are owned by monkey but are world readable, thus readable by www-data, the account running Apache2.

The campus web site is much more permissive. Its directories are owned by root but writable by the staff group. It runs CGI scripts found in any of its directories, any executable with a .cgi file name. It runs them as www-data so CGI scripts that need access to private data must Set-UID to the appropriate account.

The UserDir directives for all of Core's web sites are the same, skipping the indirection through a /home/www-users/ directory and simply naming a sub-directory of the member's home directory on Core. The <Directory> block is the same as the one used on Front.

apache-userdir-core
UserDir Public/HTML
<Directory /home/*/Public/HTML/>
        Require all granted
        AllowOverride None
</Directory>

The virtual host for the live web site is given below. It should look like Front's top-level web configuration without the permanent redirect, the encryption ciphers, or the certificates.

apache-live
<VirtualHost *:80>
        ServerName live
        ServerAlias live.{{ domain_priv }}
        ServerAdmin webmaster@core.{{ domain_priv }}

        DocumentRoot /WWW/live
        <Directory /WWW/live/>
                Require all granted
                AllowOverride None
        </Directory>

        <<apache-userdir-core>>

        ErrorLog ${APACHE_LOG_DIR}/live-error.log
        CustomLog ${APACHE_LOG_DIR}/live-access.log combined

        IncludeOptional /etc/apache2/sites-available/live-vhost.conf
</VirtualHost>

The virtual host for the test web site is given below. It should look familiar.

apache-test
<VirtualHost *:80>
        ServerName test
        ServerAlias test.{{ domain_priv }}
        ServerAdmin webmaster@core.{{ domain_priv }}

        DocumentRoot /WWW/test
        <Directory /WWW/test/>
                Require all granted
                AllowOverride None
        </Directory>

        <<apache-userdir-core>>

        ErrorLog ${APACHE_LOG_DIR}/test-error.log
        CustomLog ${APACHE_LOG_DIR}/test-access.log combined

        IncludeOptional /etc/apache2/sites-available/test-vhost.conf
</VirtualHost>

The virtual host for the campus web site is given below. It too should look familiar, but with a notably loose Directory directive. It assumes /WWW/campus/ is secure, writable only by properly trained staffers, monitored by a revision control system, etc.

apache-campus
<VirtualHost *:80>
        ServerName www
        ServerAlias www.{{ domain_priv }}
        ServerAdmin webmaster@core.{{ domain_priv }}

        DocumentRoot /WWW/campus
        <Directory /WWW/campus/>
                Options Indexes FollowSymLinks MultiViews ExecCGI
                AddHandler cgi-script .cgi
                Require all granted
                AllowOverride None
        </Directory>

        <<apache-userdir-core>>

        ErrorLog ${APACHE_LOG_DIR}/campus-error.log
        CustomLog ${APACHE_LOG_DIR}/campus-access.log combined

        IncludeOptional /etc/apache2/sites-available/www-vhost.conf
</VirtualHost>

The tasks below install Apache2 and edit its default configuration.

roles_t/core/tasks/main.yml

- name: Install Apache2.
  become: yes
  apt: pkg=apache2

- name: Enable Apache2 modules.
  become: yes
  apache2_module:
    name: "{{ item }}"
  loop: [ userdir, cgid, ssl ]
  notify: Restart Apache2.

- name: Configure Apache2 SSL certificate.
  become: yes
  lineinfile:
    path: /etc/apache2/sites-available/default-ssl.conf
    regexp: "^([\t ]*){{ item.p }}"
    line: "\\1{{ item.p }}\t{{ item.v }}"
    backrefs: yes
  loop:
    - { p: SSLCertificateFile, v: "/etc/server.crt" }
    - { p: SSLCertificateKeyFile, v: "/etc/server.key" }
  notify: Restart Apache2.

With Apache installed there is an /etc/apache2/sites-available/ directory into which the above site configurations can be installed. The a2ensite command enables them.

roles_t/core/tasks/main.yml

- name: Install live web site.
  become: yes
  copy:
    content: |
      <<apache-live>>
    dest: /etc/apache2/sites-available/live.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.

- name: Install test web site.
  become: yes
  copy:
    content: |
      <<apache-test>>
    dest: /etc/apache2/sites-available/test.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.

- name: Install campus web site.
  become: yes
  copy:
    content: |
      <<apache-campus>>
    dest: /etc/apache2/sites-available/www.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.

- name: Enable web sites.
  become: yes
  command:
    cmd: a2ensite -q {{ item }}
    creates: /etc/apache2/sites-enabled/{{ item }}.conf
  loop: [ live, test, www, default-ssl ]
  notify: Restart Apache2.

- name: Start Apache2.
  become: yes
  systemd:
    service: apache2
    state: started
  tags: actualizer

- name: Enable Apache2.
  become: yes
  systemd:
    service: apache2
    enabled: yes
roles_t/core/handlers/main.yml

- name: Restart Apache2.
  become: yes
  systemd:
    service: apache2
    state: restarted
  tags: actualizer
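
With the sites enabled and the campus DNS aliases in place, each virtual host can be spot-checked from any campus host. Expect 200 once content is installed in the document roots (403 or 404 before then).

for site in live test www; do
    curl -s -o /dev/null -w "$site %{http_code}\n" http://$site.small.private/
done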

8.19. Configure Website Updates

Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a cron job. The example script mirrors /WWW/live/ on Core to /home/www/ on Front.

private/webupdate
#!/bin/bash -e
#
# DO NOT EDIT.
#
# This file was tangled from a small institute's README.org.

cd /WWW/live/

rsync -avz --delete --chmod=g-w         \
        --filter='exclude *~'           \
        --filter='exclude .git*'        \
        ./ 192.168.15.4:/home/www/

The following tasks install the webupdate script from private/ (an example of which is given above) and create Monkey's cron job.

roles_t/core/tasks/main.yml

- name: "Install Monkey's webupdate script."
  become: yes
  copy:
    src: ../private/webupdate
    dest: /usr/local/sbin/webupdate
    mode: u=rx,g=rx,o=
    owner: monkey
    group: staff

- name: "Create Monkey's webupdate job."
  become: yes
  cron:
    minute: "*/15"
    job: "[ -d /WWW/live ] && /usr/local/sbin/webupdate"
    name: webupdate
    user: monkey

8.20. Configure Core WireGuard™ Interface

Core connects to Front's WireGuard™ service to provide members abroad with a route to the campus networks. As described in Configure Public WireGuard™ Subnet for Front, Core is expected to forward packets from/to the private networks.

The following tasks install WireGuard™, configure it and enable the service.

roles_t/core/tasks/main.yml

- name: Enable IP forwarding.
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

- name: Install WireGuard™.
  become: yes
  apt: pkg=wireguard

- name: Configure WireGuard™.
  become: yes
  copy:
    content: |
      [Interface]
      Address = {{ core_wg_addr }}
      PostUp = wg set %i private-key /etc/wireguard/private-key

      # Front
      [Peer]
      EndPoint = {{ front_addr }}:{{ public_wg_port }}
      PublicKey = {{ front_wg_pubkey }}
      AllowedIPs = {{ front_wg_addr }}
      AllowedIPs = {{ public_wg_net_cidr }}
    dest: /etc/wireguard/wg0.conf
    mode: u=r,g=,o=
    owner: root
    group: root
  notify: Restart WireGuard™.

- name: Start WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: started
  tags: actualizer

- name: Enable WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    enabled: yes
roles_t/core/handlers/main.yml

- name: Restart WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: restarted
  tags: actualizer

8.21. Configure NAGIOS

Core runs a nagios4 server to monitor "services" on institute hosts. The following tasks install the necessary packages and configure the server via edits to /etc/nagios4/nagios.cfg. The monitors are installed in /etc/nagios4/conf.d/institute.cfg which is tangled from code blocks described in the following subsections.

The institute NAGIOS configuration includes a customized version of the check_sensors plugin named inst_sensors. Both versions rely on the sensors command (from the lm-sensors package). The custom version (below) is installed in /usr/local/sbin/inst_sensors on both Core and Campus (and thus Gate) machines.

roles_t/core/tasks/main.yml

- name: Install NAGIOS4.
  become: yes
  apt:
    pkg: [ nagios4, monitoring-plugins-basic, nagios-nrpe-plugin,
           lm-sensors ]

- name: Install inst_sensors NAGIOS plugin.
  become: yes
  copy:
    src: inst_sensors
    dest: /usr/local/sbin/inst_sensors
    mode: u=rwx,g=rx,o=rx

- name: Configure NAGIOS4.
  become: yes
  lineinfile:
    path: /etc/nagios4/nagios.cfg
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    backrefs: yes
  loop:
  - regexp: "^( *cfg_file *=.*/localhost.cfg)"
    line: "#\\1"
  - regexp: "^( *admin_email *= *)"
    line: "\\1{{ ansible_user }}@localhost"
  notify: Reload NAGIOS4.

- name: Configure NAGIOS4 contacts.
  become: yes
  lineinfile:
    path: /etc/nagios4/objects/contacts.cfg
    regexp: "^( *email +)"
    line: "\\1sysadm@localhost"
    backrefs: yes
  notify: Reload NAGIOS4.

- name: Configure NAGIOS4 monitors.
  become: yes
  template:
    src: nagios.cfg
    dest: /etc/nagios4/conf.d/institute.cfg
  notify: Reload NAGIOS4.

- name: Start NAGIOS4.
  become: yes
  systemd:
    service: nagios4
    state: started
  tags: actualizer

- name: Enable NAGIOS4.
  become: yes
  systemd:
    service: nagios4
    enabled: yes
roles_t/core/handlers/main.yml

- name: Reload NAGIOS4.
  become: yes
  systemd:
    service: nagios4
    state: reloaded
  tags: actualizer

8.21.1. Configure NAGIOS Monitors for Core

The first block in nagios.cfg specifies monitors for services on Core. The monitors are simple, local plugins, and the block is very similar to the default objects/localhost.cfg file. The commands used here may specify plugin arguments.

roles_t/core/templates/nagios.cfg
define host {
    use                     linux-server
    host_name               core
    address                 127.0.0.1
}

define service {
    use                     local-service
    host_name               core
    service_description     Root Partition
    check_command           check_local_disk!20%!10%!/
}

define service {
    use                     local-service
    host_name               core
    service_description     Current Users
    check_command           check_local_users!20!50
}

define service {
    use                     local-service
    host_name               core
    service_description     Zombie Processes
    check_command           check_local_procs!5!10!Z
}

define service {
    use                     local-service
    host_name               core
    service_description     Total Processes
    check_command           check_local_procs!150!200!RSZDT
}

define service {
    use                     local-service
    host_name               core
    service_description     Current Load
    check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
}

define service {
    use                     local-service
    host_name               core
    service_description     Swap Usage
    check_command           check_local_swap!20%!10%
}

define service {
    use                     local-service
    host_name               core
    service_description     SSH
    check_command           check_ssh
}

define service {
    use                     local-service
    host_name               core
    service_description     HTTP
    check_command           check_http
}
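
Once ./inst config has installed the institute.cfg template above, the whole NAGIOS4 configuration can be sanity-checked by hand on Core, e.g. before reloading the service.

sudo nagios4 -v /etc/nagios4/nagios.cfg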

8.21.2. Custom NAGIOS Monitor inst_sensors

The check_sensors plugin is included in the package monitoring-plugins-basic, but it does not report any readings. The small institute substitutes a slightly modified version, inst_sensors, that reports core CPU temperatures.

roles_t/core/files/inst_sensors
#!/bin/sh

PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
export PATH
PROGNAME=`basename $0`
REVISION="2.3.1"

. /usr/lib/nagios/plugins/utils.sh

print_usage() {
        echo "Usage: $PROGNAME" [--ignore-fault]
}

print_help() {
        print_revision $PROGNAME $REVISION
        echo ""
        print_usage
        echo ""
        echo -n "This plugin checks hardware status"
        echo " using the lm_sensors package."
        echo ""
        support
        exit $STATE_OK
}

brief_data() {
    echo "$1" | sed -n -E -e '
  /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H }
  $ { x; s/\n//g; p }'
}

case "$1" in
        --help)
                print_help
                exit $STATE_OK
                ;;
        -h)
                print_help
                exit $STATE_OK
                ;;
        --version)
                print_revision $PROGNAME $REVISION
                exit $STATE_OK
                ;;
        -V)
                print_revision $PROGNAME $REVISION
                exit $STATE_OK
                ;;
        *)
                sensordata=`sensors 2>&1`
                status=$?
                if test ${status} -eq 127; then
                        text="SENSORS UNKNOWN - command not found"
                        text="$text (did you install lmsensors?)"
                        exit=$STATE_UNKNOWN
                elif test ${status} -ne 0; then
                        text="WARNING - sensors returned state $status"
                        exit=$STATE_WARNING
                elif echo ${sensordata} | egrep ALARM > /dev/null; then
                        text="SENSOR CRITICAL -`brief_data "${sensordata}"`"
                        exit=$STATE_CRITICAL
                elif echo ${sensordata} | egrep FAULT > /dev/null \
                    && test "$1" != "-i" -a "$1" != "--ignore-fault"; then
                        text="SENSOR UNKNOWN - Sensor reported fault"
                        exit=$STATE_UNKNOWN
                else
                        text="SENSORS OK -`brief_data "${sensordata}"`"
                        exit=$STATE_OK
                fi

                echo "$text"
                if test "$1" = "-v" -o "$1" = "--verbose"; then
                        echo ${sensordata}
                fi
                exit $exit
                ;;
esac
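
The plugin can also be run by hand on Core to check its output and exit status. With working lm-sensors readings it prints a line like SENSORS OK - 42.0 43.0 and exits with the NAGIOS OK status (0); the temperatures will of course vary.

/usr/local/sbin/inst_sensors; echo "exit status: $?"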

The following block defines the command and monitors it (locally) on Core.

roles_t/core/templates/nagios.cfg

define command {
    command_name            inst_sensors
    command_line            /usr/local/sbin/inst_sensors
}

define service {
    use                     local-service
    host_name               core
    service_description     Temperature Sensors
    check_command           inst_sensors
}

8.21.3. Configure NAGIOS Monitors for Remote Hosts

The following sections contain code blocks specifying monitors for services on other campus hosts. The NAGIOS server on Core will contact the NAGIOS Remote Plugin Executor (NRPE) servers on the other campus hosts and request the results of several commands. For security reasons, the NRPE servers do not accept command arguments.

The institute defines several NRPE commands, using a inst_ prefix to distinguish their names. The commands take no arguments but execute a plugin with pre-defined arguments appropriate for the institute. The commands are defined in code blocks interleaved with the blocks that monitor them. The command blocks are appended to nrpe.cfg and the monitoring blocks to nagios.cfg. The nrpe.cfg file is installed on each campus host by the campus role's Configure NRPE tasks.

8.21.4. Configure NAGIOS Monitors for Gate

Define the monitored host, gate. Monitor its response to network pings.

roles_t/core/templates/nagios.cfg

define host {
    use                     linux-server
    host_name               gate
    address                 {{ gate_addr }}
}

For all campus NRPE servers: an inst_root command to check the free space on the root partition.

roles_t/campus/files/nrpe.cfg
command[inst_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /

Monitor inst_root on Gate.

roles_t/core/templates/nagios.cfg

define service {
    use                     generic-service
    host_name               gate
    service_description     Root Partition
    check_command           check_nrpe!inst_root
}

Monitor check_load on Gate.

roles_t/core/templates/nagios.cfg

define service {
    use                     generic-service
    host_name               gate
    service_description     Current Load
    check_command           check_nrpe!check_load
}

Monitor check_zombie_procs and check_total_procs on Gate.

roles_t/core/templates/nagios.cfg

define service {
    use                     generic-service
    host_name               gate
    service_description     Zombie Processes
    check_command           check_nrpe!check_zombie_procs
}

define service {
    use                     generic-service
    host_name               gate
    service_description     Total Processes
    check_command           check_nrpe!check_total_procs
}

For all campus NRPE servers: an inst_swap command to check the swap usage.

roles_t/campus/files/nrpe.cfg
command[inst_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10%

Monitor inst_swap on Gate.

roles_t/core/templates/nagios.cfg

define service {
    use                     generic-service
    host_name               gate
    service_description     Swap Usage
    check_command           check_nrpe!inst_swap
}

For all campus NRPE servers: an inst_sensors command to report core CPU temperatures.

roles_t/campus/files/nrpe.cfg
command[inst_sensors]=/usr/local/sbin/inst_sensors

Monitor inst_sensors on Gate.

roles_t/core/templates/nagios.cfg

define service {
    use                     generic-service
    host_name               gate
    service_description     Temperature Sensors
    check_command           check_nrpe!inst_sensors
}
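
With nrpe.cfg installed on Gate (by the campus role) and the nagios-nrpe-plugin package installed on Core (above), the remote checks can also be exercised by hand from Core. The bare host name gate assumes the private domain is in Core's search list.

/usr/lib/nagios/plugins/check_nrpe -H gate -c inst_root
/usr/lib/nagios/plugins/check_nrpe -H gate -c inst_sensors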

8.22. Configure Backups

The following task installs the backup script from private/. An example script is provided here.

roles_t/core/tasks/main.yml

- name: Install backup script.
  become: yes
  copy:
    src: ../private/backup
    dest: /usr/local/sbin/backup
    mode: u=rx,g=r,o=

8.23. Configure Nextcloud

Core runs Nextcloud to provide a private institute cloud, as described in The Cloud Service. Installing, restoring (from backup), and upgrading Nextcloud are manual processes documented in The Nextcloud Admin Manual, Maintenance. However, Ansible can help prepare Core before an install or restore, and can perform basic security checks afterwards.

8.23.1. Prepare Core For Nextcloud

The Ansible code contained herein prepares Core to run Nextcloud by installing required software packages, configuring the web server, and installing a cron job.

roles_t/core/tasks/main.yml

- name: Install packages required by Nextcloud.
  become: yes
  apt:
    pkg: [ apache2, mariadb-server, php, php-apcu, php-bcmath,
           php-curl, php-gd, php-gmp, php-json, php-mysql,
           php-mbstring, php-intl, php-imagick, php-xml, php-zip,
           imagemagick, libapache2-mod-php ]

Next, a number of Apache2 modules are enabled.

roles_t/core/tasks/main.yml

- name: Enable Apache2 modules for Nextcloud.
  become: yes
  apache2_module:
    name: "{{ item }}"
  loop: [ rewrite, headers, env, dir, mime ]

The Apache2 configuration is then extended with the following /etc/apache2/sites-available/nextcloud.conf file, which is installed and enabled with a2ensite. The same configuration lines are given in the "Installation on Linux" section of the Nextcloud Server Administration Guide (sub-section Apache Web server configuration).

roles_t/core/files/nextcloud.conf
Alias /nextcloud "/var/www/nextcloud/"

<Directory /var/www/nextcloud/>
    Require all granted
    AllowOverride All
    Options FollowSymlinks MultiViews

    <IfModule mod_dav.c>
        Dav off
    </IfModule>
</Directory>
roles_t/core/tasks/main.yml

- name: Install Nextcloud web configuration.
  become: yes
  copy:
    src: nextcloud.conf
    dest: /etc/apache2/sites-available/nextcloud.conf
  notify: Restart Apache2.

- name: Enable Nextcloud web configuration.
  become: yes
  command:
    cmd: a2ensite nextcloud
    creates: /etc/apache2/sites-enabled/nextcloud.conf
  notify: Restart Apache2.

The institute supports "Service discovery" as recommended at the end of the "Apache Web server configuration" subsection. The prescribed rewrite rules are included in a Directory block for the default virtual host's document root.

roles_t/core/files/nextcloud.conf

<Directory /var/www/html/>
    <IfModule mod_rewrite.c>
        RewriteEngine on
        # LogLevel alert rewrite:trace3
        RewriteRule ^\.well-known/carddav \
            /nextcloud/remote.php/dav [R=301,L]
        RewriteRule ^\.well-known/caldav \
            /nextcloud/remote.php/dav [R=301,L]
        RewriteRule ^\.well-known/webfinger \
            /nextcloud/index.php/.well-known/webfinger [R=301,L]
        RewriteRule ^\.well-known/nodeinfo \
            /nextcloud/index.php/.well-known/nodeinfo [R=301,L]
    </IfModule>
</Directory>
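
After Apache2 restarts, the redirects can be spot-checked from Core, assuming curl is available. Each request should receive a 301 response with a Location header pointing at the corresponding /nextcloud/ path above.

curl -sI http://core.small.private/.well-known/carddav \
| grep -iE '^(HTTP/|location)'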

The institute also includes additional Apache2 configuration recommended by Nextcloud 20's Settings > Administration > Overview web page. The following portion of nextcloud.conf sets a Strict-Transport-Security header with a max-age of 6 months.

roles_t/core/files/nextcloud.conf

<IfModule mod_headers.c>
    Header always set \
        Strict-Transport-Security "max-age=15552000; includeSubDomains"
</IfModule>

Nextcloud's directories and files are typically readable only by the web server's user www-data and the www-data group. The administrator is added to this group to ease (speed) the debugging of cloud FUBARs.

roles_t/core/tasks/main.yml

- name: Add {{ ansible_user }} to web server group.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: www-data

Nextcloud is configured with a cron job to run periodic background jobs.

roles_t/core/tasks/main.yml

- name: Create Nextcloud cron job.
  become: yes
  cron:
    minute: 11,26,41,56
    job: >-
      [ -r /var/www/nextcloud/cron.php ]
      && /usr/bin/php -f /var/www/nextcloud/cron.php
    name: Nextcloud
    user: www-data
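
The resulting job lands in www-data's crontab, which can be listed to confirm it.

sudo crontab -u www-data -l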

Nextcloud's MariaDB database (and user) are created by the following tasks. The user's password is taken from the nextcloud_dbpass variable, kept in private/vars.yml, and generated e.g. with the apg -n 1 -x 12 -m 12 command.

private/vars.yml
nextcloud_dbpass:           ippAgmaygyobwyt5

When the mysql_db Ansible module supports check_implicit_admin, the following task can create Nextcloud's DB.


- name: Create Nextcloud DB.
  become: yes
  mysql_db:
    check_implicit_admin: yes
    name: nextcloud
    collation: utf8mb4_general_ci
    encoding: utf8mb4

Unfortunately it does not currently, and the institute prefers the more secure Unix socket authentication method to creating a password-protected database account for Ansible's use. Thus the nextcloud database and the nextclouduser user are created manually.


- name: Create Nextcloud DB user.
  become: yes
  mysql_user:
    check_implicit_admin: yes
    name: nextclouduser
    password: "{{ nextcloud_dbpass }}"
    update_password: always
    priv: 'nextcloud.*:all'

The task above would work (mysql_user does support check_implicit_admin), but it is of little use while the nextcloud database cannot be created first. Thus both the database and the user are created manually, with the following SQL, before occ maintenance:install can run.

create database nextcloud
    character set utf8mb4
    collate utf8mb4_general_ci;
grant all on nextcloud.*
    to 'nextclouduser'@'localhost'
    identified by 'ippAgmaygyobwyt5';
flush privileges;

Finally, a symbolic link positions /Nextcloud/nextcloud/ at /var/www/nextcloud/ as expected by the Apache2 configuration above. Nextcloud itself should always believe that /var/www/nextcloud/ is its document root.

roles_t/core/tasks/main.yml

- name: Link /var/www/nextcloud.
  become: yes
  file:
    path: /var/www/nextcloud
    src: /Nextcloud/nextcloud
    state: link
    force: yes
    follow: no

8.23.2. Configure PHP

The following tasks set a number of PHP parameters for better performance, as recommended by Nextcloud.

roles_t/core/tasks/main.yml

- name: Set PHP memory_limit for Nextcloud.
  become: yes
  lineinfile:
    path: /etc/php/8.2/apache2/php.ini
    regexp: "memory_limit *="
    line: "memory_limit = 768M"

- name: Include PHP parameters for Nextcloud.
  become: yes
  copy:
    content: |
      ; priority=20
      apc.enable_cli=1
      opcache.enable=1
      opcache.enable_cli=1
      opcache.interned_strings_buffer=12
      opcache.max_accelerated_files=10000
      opcache.memory_consumption=128
      opcache.save_comments=1
      opcache.revalidate_freq=1
    dest: /etc/php/8.2/mods-available/nextcloud.ini
  notify: Restart Apache2.

- name: Enable Nextcloud PHP modules.
  become: yes
  command:
    cmd: phpenmod {{ item }}
    creates: /etc/php/8.2/apache2/conf.d/20-{{ item }}.ini
  loop: [ nextcloud, apcu ]
  notify: Restart Apache2.
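
These settings can be quickly reviewed on Core; the paths assume PHP 8.2 as in the tasks above.

grep memory_limit /etc/php/8.2/apache2/php.ini
ls /etc/php/8.2/apache2/conf.d/ | grep -E 'nextcloud|apcu'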

8.23.3. Create /Nextcloud/

The Ansible tasks up to this point have completed Core's LAMP stack and made Core ready to run Nextcloud, but they have not installed Nextcloud. Nextcloud must be manually installed or restored from a backup copy. Until then, attempts to access the institute cloud will just produce errors.

Installing or restoring Nextcloud starts by creating the /Nextcloud/ directory. It may be a separate disk or just a new directory on an existing partition. The commands involved will vary greatly depending on circumstances, but the following examples might be helpful.

The following command line creates /Nextcloud/ in the root partition. This is appropriate for one-partition machines like the test machines.

sudo mkdir /Nextcloud
sudo chmod 775 /Nextcloud

The following command lines create /Nextcloud/ on an existing, large, separate (from the root) partition. A popular choice for such a second partition is one mounted at /home/.

sudo mkdir /home/nextcloud
sudo chmod 775 /home/nextcloud
sudo ln -s /home/nextcloud /Nextcloud

These commands create /Nextcloud/ on an entire second hard drive, /dev/sdb, used without partitioning.

sudo mkfs -t ext4 /dev/sdb
sudo mkdir /Nextcloud
echo "/dev/sdb  /Nextcloud  ext4  errors=remount-ro  0  2" \
| sudo tee -a /etc/fstab >/dev/null
sudo mount /Nextcloud

8.23.4. Restore Nextcloud

Restoring Nextcloud in the newly created /Nextcloud/ presumably starts with plugging in the portable backup drive and unlocking it so that it is automounted at /media/sysadm/Backup per its drive label: Backup. Assuming this, the following command restores /Nextcloud/ from the backup (and can be repeated as many times as necessary to get a successful, complete copy).

rsync -a /media/sysadm/Backup/Nextcloud/ /Nextcloud/

Mirroring a backup onto a new server may cause UID/GID mismatches. All of the files in /Nextcloud/nextcloud/ must be owned by user www-data and group www-data. If not, the following command will make it so.

sudo chown -R www-data:www-data /Nextcloud/nextcloud/

The database is restored with the following commands, which assume the last dump was made on February 20th, 2022 and thus was saved in /Nextcloud/20220220.bak. The database must be created first, just as when installing Nextcloud.

cd /Nextcloud/
sudo mysql
create database nextcloud
    character set utf8mb4
    collate utf8mb4_general_ci;
grant all on nextcloud.*
    to 'nextclouduser'@'localhost'
    identified by 'ippAgmaygyobwyt5';
flush privileges;
exit;
sudo mysql --defaults-file=dbbackup.cnf nextcloud < 20220220.bak
cd nextcloud/
sudo -u www-data php occ maintenance:data-fingerprint

Finally the administrator surfs to http://core/nextcloud/, authenticates, and addresses any warnings on the Administration > Overview web page.

8.23.5. Install Nextcloud

Installing Nextcloud in the newly created /Nextcloud/ starts with downloading and verifying a recent release tarball. The following example command lines unpack Nextcloud 31 into nextcloud/ in /Nextcloud/ and set the ownership and permissions of the new directories and files.

cd /Nextcloud/
tar xjf ~/Downloads/nextcloud-31.0.2.tar.bz2
sudo chown -R www-data:www-data nextcloud
sudo find nextcloud -type d -exec chmod 750 {} \;
sudo find nextcloud -type f -exec chmod 640 {} \;

According to the installation instructions in the Admin Manual for version 31 (section "Installation and server configuration", subsection "Installing from command line", here), after unpacking and setting file permissions, the following occ command takes care of everything. The command currently expects Nextcloud's database and user to exist, so the following SQL commands create them first (entered at the SQL prompt of the sudo mysql command). The shell commands that follow then run occ.

create database nextcloud
    character set utf8mb4
    collate utf8mb4_general_ci;
grant all on nextcloud.*
    to 'nextclouduser'@'localhost'
    identified by 'ippAgmaygyobwyt5';
flush privileges;
cd /var/www/nextcloud/
sudo -u www-data php occ maintenance:install \
--database='mysql' --database-name='nextcloud' \
--database-user='nextclouduser' --database-pass='ippAgmaygyobwyt5' \
--admin-user='sysadm' --admin-pass='fubar'

The nextcloud/config/config.php file is created by the above command, but it gets the trusted_domains and overwrite.cli.url settings wrong, using localhost where core.small.private is wanted. The only name by which the institute cloud should be accessed is that one, so adjusting the config.php file is straightforward. The settings should be corrected by hand for immediate testing; the "Afterwards" tasks (below) will check (or update) these settings when Core is next checked (or updated), e.g. with ./inst config -n core.

Before calling Nextcloud "configured", the administrator runs ./inst config core, surfs to https://core.small.private/nextcloud/, logs in as sysadm, and follows any reasonable instructions on the Administration > Overview page.

8.23.6. Afterwards

Whether Nextcloud was restored or installed, there are a few things Ansible can do to bolster reliability and security (aka privacy). These Nextcloud "Afterwards" tasks would fail if they executed before Nextcloud was installed, so the first "afterwards" task probes for /Nextcloud/nextcloud and registers the file status with the nextcloud variable. The nextcloud.stat.exists condition on the afterwards tasks causes them to skip rather than fail.

roles_t/core/tasks/main.yml

- name: Test for /Nextcloud/nextcloud/.
  stat:
    path: /Nextcloud/nextcloud
  register: nextcloud
- debug:
    msg: "/Nextcloud/ does not yet exist"
  when: not nextcloud.stat.exists

The institute installed Nextcloud with the occ maintenance:install command, which produced a simple nextcloud/config/config.php with incorrect trusted_domains and overwrite.cli.url settings. These are corrected by hand during installation, but the institute may instead have restored Nextcloud, including the config.php file. (This file is edited by the web scripts and so is saved/restored in the backup copy.) The restored settings may differ from those Ansible used to create the database user.

The following task checks (or updates) the trusted_domains and dbpassword settings, to ensure they are consistent with the Ansible variables domain_priv and nextcloud_dbpass. The overwrite.cli.url setting is fixed by the tasks that implement Pretty URLs (below).

roles_t/core/tasks/main.yml

- name: Configure Nextcloud trusted domains.
  become: yes
  replace:
    path: /var/www/nextcloud/config/config.php
    regexp: "^( *)'trusted_domains' *=>[^)]*[)],$"
    replace: |-
      \1'trusted_domains' => 
      \1array (
      \1  0 => 'core.{{ domain_priv }}',
      \1),
  when: nextcloud.stat.exists

- name: Configure Nextcloud dbpasswd.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "^ *'dbpassword' *=> *'.*', *$"
    line: "  'dbpassword' => '{{ nextcloud_dbpass }}',"
    insertbefore: "^[)];"
    firstmatch: yes
  when: nextcloud.stat.exists

The institute uses the php-apcu package to provide Nextcloud with a local memory cache. The following memcache.local Nextcloud setting enables it.

roles_t/core/tasks/main.yml

- name: Configure Nextcloud memcache.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "^ *'memcache.local' *=> *'.*', *$"
    line: "  'memcache.local' => '\\\\OC\\\\Memcache\\\\APCu',"
    insertbefore: "^[)];"
    firstmatch: yes
  when: nextcloud.stat.exists

The institute implements Pretty URLs as described in the Pretty URLs subsection of the "Installation on Linux" section of the "Installation and server configuration" chapter in the Nextcloud 22 Server Administration Guide. Two settings are updated: overwrite.cli.url and htaccess.RewriteBase.

roles_t/core/tasks/main.yml

- name: Configure Nextcloud for Pretty URLs.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    insertbefore: "^[)];"
    firstmatch: yes
  vars:
    url: http://core.{{ domain_priv }}/nextcloud
  loop:
  - regexp: "^ *'overwrite.cli.url' *=>"
    line: "  'overwrite.cli.url' => '{{ url }}',"
  - regexp: "^ *'htaccess.RewriteBase' *=>"
    line: "  'htaccess.RewriteBase' => '/nextcloud',"
  when: nextcloud.stat.exists

The institute sets Nextcloud's default_phone_region mainly to avoid a complaint on the Settings > Administration > Overview web page.

private/vars.yml
nextcloud_region:           US

It sets Nextcloud's "maintenance window" to start at 02:00 MST (09:00 UTC). The interval is 4 hours, so it ends at 06:00 MST. The documentation for the setting was found here.

It also configures Nextcloud to send email via /usr/sbin/sendmail with a From: address of webmaster@core.small.private. The documentation for the settings was found here, though only a few parameters are set below, not the nine suggested in the "Sendmail" sub-sub-subsection of sub-subsection "Setting mail server parameters in config.php", which seemed to be a simple, unedited copy of the parameters for SMTP rather than anything specific to Sendmail or Qmail.

roles_t/core/tasks/main.yml

- name: Configure Nextcloud settings.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    insertbefore: "^[)];"
    firstmatch: yes
  loop:
  - regexp: "^ *'default_phone_region' *=> *'.*', *$"
    line: "  'default_phone_region' => '{{ nextcloud_region }}',"

  - regexp: "^ *'maintenance_window_start' *=> "
    line: "  'maintenance_window_start' => 9,"

  - regexp: "^ *'mail_smtpmode' *=>"
    line: "  'mail_smtpmode' => 'sendmail',"
  - regexp: "^ *'mail_sendmailmode' *=>"
    line: "  'mail_sendmailmode' => 'pipe',"
  - regexp: "^ *'mail_from_address' *=>"
    line: "  'mail_from_address' => 'webmaster',"
  - regexp: "^ *'mail_domain' *=>"
    line: "  'mail_domain' => 'core.small.private',"
  when: nextcloud.stat.exists
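
Once Nextcloud is installed, individual settings can be spot-checked with occ, e.g.:

cd /var/www/nextcloud/
sudo -u www-data php occ config:system:get trusted_domains
sudo -u www-data php occ config:system:get maintenance_window_start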

The next two tasks create /Nextcloud/dbbackup.cnf if it does not exist, and check the password setting in it when it does. The file should never be world readable (and probably shouldn't be group readable). It is needed by the institute's backup command, so ./inst config, and in particular these next two tasks, need to run before the next backup.

roles_t/core/tasks/main.yml

- name: Create /Nextcloud/dbbackup.cnf.
  no_log: yes
  become: yes
  copy:
    content: |
      [mysqldump]
      no-tablespaces
      single-transaction
      host=localhost
      user=nextclouduser
      password={{ nextcloud_dbpass }}
    dest: /Nextcloud/dbbackup.cnf
    mode: g=,o=
    force: no
  when: nextcloud.stat.exists

- name: Update /Nextcloud/dbbackup.cnf password.
  become: yes
  lineinfile:
    path: /Nextcloud/dbbackup.cnf
    regexp: password=
    line: password={{ nextcloud_dbpass }}
  when: nextcloud.stat.exists
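
The following sketch merely illustrates how dbbackup.cnf is consumed; the institute's actual backup script (kept in private/) may differ. It produces a dump named like the one restored in the Restore Nextcloud section above.

cd /Nextcloud/
sudo sh -c 'mysqldump --defaults-file=dbbackup.cnf nextcloud > $(date +%Y%m%d).bak'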

9. The Gate Role

The gate role configures the services expected at the campus gate: access to the private Ethernet from the untrusted Ethernet (e.g. a campus Wi-Fi AP) via VPN, and access to the Internet via NAT. The gate machine uses three network interfaces (see The Gate Machine) configured with persistent names used in its firewall rules.

lan
    The campus Ethernet.
wild
    The campus IoT (Wi-Fi APs).
isp
    The campus ISP.

Requiring a VPN to access the campus network from the untrusted Ethernet (a campus Wi-Fi AP) bolsters the native Wi-Fi encryption and frustrates non-RYF (Respects Your Freedom) wireless equipment.

Gate is also a campus machine, so the more generic campus role is applied first, by which Gate gets a campus machine's DNS and Postfix configurations, etc.

9.1. Role Defaults

As in The Core Role, the gate role sets a number of variables to default values in its defaults/main.yml file.

roles_t/gate/defaults/main.yml
---
<<network-vars>>
<<address-vars>>

9.2. Include Particulars

The following should be familiar boilerplate by now.

roles_t/gate/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml

- name: Include private variables.
  include_vars: ../private/vars.yml

9.3. Configure Gate NetworkD

Gate's network interfaces are configured using SystemD NetworkD configuration files that specify their MAC addresses. (One or more might be plug-and-play USB dongles.) These addresses are provided by the private/vars.yml file as in the example code here.

private/vars.yml
gate_lan_mac:               08:00:27:f3:16:79
gate_wild_mac:              08:00:27:4a:de:d2
gate_isp_mac:               08:00:27:3d:42:e5

The tasks in the following sections install the necessary configuration files.

9.3.1. Gate's lan Interface

The campus Ethernet interface is named lan and configured by 10-lan.link and 10-lan.network files in /etc/systemd/network/.

roles_t/gate/tasks/main.yml

- name: Install 10-lan.link.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ gate_lan_mac }}

      [Link]
      Name=lan
    dest: /etc/systemd/network/10-lan.link
  notify: Reload networkd.

- name: Install 10-lan.network.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ gate_lan_mac }}

      [Network]
      Address={{ gate_addr_cidr }}
      DNS={{ core_addr }}
      Domains={{ domain_priv }}

      [Route]
      Destination={{ public_wg_net_cidr }}
      Gateway={{ core_addr }}
    dest: /etc/systemd/network/10-lan.network
  notify: Reload networkd.
roles_t/gate/handlers/main.yml
---
- name: Reload networkd.
  become: yes
  command: networkctl reload
  tags: actualizer

9.3.2. Gate's wild Interface

The institute keeps the wild ones off the campus Ethernet. Its wild subnet is connected to Gate via a separate physical interface. To accommodate the wild ones without re-configuring them, the institute attempts to look like an up-link, e.g. a cable modem. A wild one is expected to chirp for DHCP service and use the private subnet address in its lease. Thus Gate's wild interface configuration enables the built-in DHCP server and lists the authorized lessees.

The wild ones are not expected to number in the dozens, so they are simply a list of hashes in private/vars.yml, as in the example code here. Note that host number 1 is Gate. Wild ones are assigned unique host numbers greater than 1.

private/vars.yml
wild_ones:
- { MAC: "08:00:27:dc:54:b5", num: 2, name: wifi-ap }
- { MAC: "94:83:c4:19:7d:58", num: 3, name: appliance }

As with the lan interface, this interface is named wild and configured by 10-wild.link and 10-wild.network files in /etc/systemd/network/. The latter is generated from the hashes in wild_ones and the wild.network template file.

roles_t/gate/tasks/main.yml

- name: Install 10-wild.link.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ gate_wild_mac }}

      [Link]
      Name=wild
    dest: /etc/systemd/network/10-wild.link
  notify: Reload networkd.

- name: Install 10-wild.network.
  become: yes
  template:
    src: wild.network
    dest: /etc/systemd/network/10-wild.network
  notify: Reload networkd.
roles_t/gate/templates/wild.network
[Match]
MACAddress={{ gate_wild_mac }}

[Network]
Address={{ gate_wild_addr_cidr }}
DHCPServer=yes

[DHCPServer]
EmitDNS=yes
EmitNTP=yes
NTP={{ core_addr }}
EmitSMTP=yes
SMTP={{ core_addr }}
{% for wild in wild_ones %}

# {{ wild.name }}
[DHCPServerStaticLease]
MACAddress={{ wild.MAC }}
Address={{ wild_net_cidr |ansible.utils.ipaddr(wild.num) }}
{% endfor %}

9.3.3. Gate's isp Interface

The interface to the campus ISP is named isp and configured by 10-isp.link and 10-isp.network files in /etc/systemd/network/. The latter is not automatically generated, as it varies quite a bit depending on the connection to the ISP: Ethernet interface, USB tether, Wi-Fi connection, etc.

roles_t/gate/tasks/main.yml

- name: Install 10-isp.link.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ gate_isp_mac }}

      [Link]
      Name=isp
    dest: /etc/systemd/network/10-isp.link
  notify: Reload networkd.

- name: Install 10-isp.network.
  become: yes
  copy:
    src: ../private/gate-isp.network
    dest: /etc/systemd/network/10-isp.network
    force: no
  notify: Reload networkd.

Note that the 10-isp.network file is only created if it does not already exist (via force: no above), so that it can be easily modified to debug a new campus ISP connection without interference from Ansible.

The following example gate-isp.network file recognizes an Ethernet interface by its MAC address.

private/gate-isp.network
[Match]
MACAddress=08:00:27:3d:42:e5

[Network]
DHCP=ipv4

[DHCP]
RouteMetric=100
UseMTU=true
UseDNS=false

9.4. Configure Gate ResolveD

Gate provides name service on the wild Ethernet by having its "stub listener" listen there. That stub should not read /etc/hosts lest gate resolve to 127.0.1.1, which would be nonsense to the wild ones.

roles_t/gate/tasks/main.yml

- name: Configure resolved.
  become: yes
  lineinfile:
    path: /etc/systemd/resolved.conf
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  loop:
  - regexp: '^ *DNSStubListenerExtra *='
    line: "DNSStubListenerExtra={{ gate_wild_addr }}"
  - regexp: '^ *ReadEtcHosts *='
    line: "ReadEtcHosts=no"
  notify:
  - Reload Systemd.
  - Restart Systemd resolved.
roles_t/gate/handlers/main.yml

- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes
  tags: actualizer

- name: Restart Systemd resolved.
  become: yes
  systemd:
    service: systemd-resolved
    state: restarted
  tags: actualizer
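
After systemd-resolved restarts, the stub listeners can be confirmed on Gate; both 127.0.0.53 and Gate's wild address should show up listening on port 53.

sudo ss -lun | grep ':53 '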

9.5. UFW Rules

Gate uses the Uncomplicated FireWall (UFW) to install its packet filters at boot-time. The institute uses this firewall only to configure Network Address Translation (NAT) and forwarding; members expect to be able to exercise experimental services on random ports. The default policies in /etc/default/ufw are thus set to ACCEPT for input and output, and DROP for forwarded packets. Forwarding is enabled in the kernel by Ansible's sysctl module (when configuring WireGuard™, below), so it does not need to be set in /etc/ufw/sysctl.conf.

NAT is enabled per the ufw-framework(8) manual page, by introducing nat table rules in a block at the end of /etc/ufw/before.rules. They translate packets going to the ISP. These can come from the private Ethernet or the untrusted Ethernet (campus IoT, including Wi-Fi APs). Hosts on the other institute networks (the two VPNs) should not be routing their Internet traffic through their VPN.

ufw-nat
-A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE
-A POSTROUTING -s {{    wild_net_cidr }} -o isp -j MASQUERADE

Forwarding rules are also needed. The nat table is a post-routing rule set, so the default forward policy (DROP) would discard packets before NAT could translate them. The following rules allow packets to be forwarded from the campus Ethernet or its wild subnet to an ISP on the isp interface. A generic rule in UFW accepts any related or established forwarded packet (according to the kernel's connection tracking), so only a connection's initial packets need these rules.

ufw-forward-nat
-A ufw-before-forward -i lan  -o isp -j ACCEPT
-A ufw-before-forward -i wild -o isp -j ACCEPT

Forwarding rules are also needed to route packets from the campus VPN (the wg0 WireGuard™ tunnel device) to the institute's LAN and back. The public VPN on Front will also be included since its packets arrive at Gate's lan interface, coming from Core. Thus forwarding between public and campus VPNs is also allowed.

ufw-forward-private
-A ufw-before-forward -i lan  -o wg0 -j ACCEPT
-A ufw-before-forward -i wg0  -o lan -j ACCEPT
-A ufw-before-forward -i wg0  -o wg0 -j ACCEPT

The third rule above may seem curious; it is. It short-circuits filters in subsequent chains (e.g. ufw-reject-forward) that, by default, log and reject packets, even those going from a subnet back to the same subnet (if it is a WireGuard™ subnet?).

Note that there are no forwarding rules that allow packets to pass directly from the wild device to the lan device; wild traffic reaches the campus only via the wg0 device, i.e. through the campus VPN.

9.6. Configure UFW

The following tasks install the Uncomplicated Firewall (UFW), set its policy in /etc/default/ufw, and install the institute's rules in /etc/ufw/before.rules.

roles_t/gate/tasks/main.yml

- name: Install UFW.
  become: yes
  apt: pkg=ufw

- name: Configure UFW policy.
  become: yes
  lineinfile:
    path: /etc/default/ufw
    line: "{{ item.line }}"
    regexp: "{{ item.regexp }}"
  loop:
  - line: "DEFAULT_INPUT_POLICY=\"ACCEPT\""
    regexp: "^DEFAULT_INPUT_POLICY="
  - line: "DEFAULT_OUTPUT_POLICY=\"ACCEPT\""
    regexp: "^DEFAULT_OUTPUT_POLICY="
  - line: "DEFAULT_FORWARD_POLICY=\"DROP\""
    regexp: "^DEFAULT_FORWARD_POLICY="

- name: Configure UFW rules.
  become: yes
  blockinfile:
    block: |
      *nat
      :POSTROUTING ACCEPT [0:0]
      <<ufw-nat>>
      COMMIT
      *filter
      <<ufw-forward-nat>>
      <<ufw-forward-private>>
      COMMIT
    dest: /etc/ufw/before.rules
    insertafter: EOF
    prepend_newline: yes

- name: Enable UFW.
  become: yes
  ufw: state=enabled
  tags: actualizer
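
Once UFW is enabled, the installed NAT and forwarding rules can be inspected on Gate with iptables, e.g.:

sudo iptables -t nat -S POSTROUTING
sudo iptables -S ufw-before-forward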

9.7. Configure Campus WireGuard™ Subnet

Gate uses WireGuard™ to provide a campus VPN service. Gate's routes and firewall rules allow packets to be forwarded to/from the institute's private networks: the private Ethernet and the public VPN. (It should not forward packets to/from the wild Ethernet.) The only additional route Gate needs is to the public VPN via Core. The rest (private Ethernet and campus VPN) are directly connected.

The following tasks install WireGuard™, configure it with private/gate-wg0.conf (or private/gate-wg0-empty.conf if it does not exist), and enable the service.

roles_t/gate/tasks/main.yml

- name: Enable IP forwarding.
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

- name: Install WireGuard™.
  become: yes
  apt: pkg=wireguard

- name: Configure WireGuard™.
  become: yes
  vars:
    srcs:
      - ../private/gate-wg0.conf
      - ../private/gate-wg0-empty.conf
  copy:
    src: "{{ lookup('first_found', srcs) }}"
    dest: /etc/wireguard/wg0.conf
    mode: u=r,g=,o=
    owner: root
    group: root
  notify: Restart WireGuard™.
  tags: accounts

- name: Start WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: started
  tags: actualizer

- name: Enable WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    enabled: yes
roles_t/gate/handlers/main.yml

- name: Restart WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: restarted
  tags: actualizer

The "empty" WireGuard™ configuration file (below) is used until the ./inst client command adds the first client, and generates an actual private/gate-wg0.conf.

private/gate-wg0-empty.conf
[Interface]
Address = 10.84.139.1/24
ListenPort = 51820
PostUp = wg set %i private-key /etc/wireguard/private-key

9.7.1. Example private/gate-wg0.conf

The example private/gate-wg0.conf below recognizes a wired IoT appliance, Dick's notebook and his replacement phone, assigning them the host numbers 3, 4 and 6 respectively.

This is just an example. The actual file is edited by the ./inst client command and so should not be tangled from the following block.

Example private/gate-wg0.conf
[Interface]
Address = 10.84.139.1/24
ListenPort = 51820
PostUp = wg set %i private-key /etc/wireguard/private-key

# thing
[Peer]
PublicKey = LdsCsgfjKCfd5+VKS+Q/dQhWO8NRNygByDO2VxbXlSQ=
AllowedIPs = 10.84.139.3

# dick
[Peer]
PublicKey = 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
AllowedIPs = 10.84.139.4

# dicks-razr
[Peer]
PublicKey = zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
AllowedIPs = 10.84.139.6

The configuration used on thing, the IoT appliance, looks like this:

WireGuard™ tunnel on an IoT appliance
[Interface]
Address = 10.84.139.3
PrivateKey = <hidden>
PublicKey = LdsCsgfjKCfd5+VKS+Q/dQhWO8NRNygByDO2VxbXlSQ=
DNS = 192.168.56.1
Domain = small.private

# Gate
[Peer]
EndPoint = 192.168.57.1:51820
PublicKey = y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
AllowedIPs = 10.84.139.1
AllowedIPs = 192.168.56.0/24
AllowedIPs = 10.177.87.0/24
AllowedIPs = 10.84.139.0/24

And the configuration used on Dick's notebook when it is on campus looks like this:

WireGuard™ tunnel on Dick's notebook, used on campus
[Interface]
Address = 10.84.139.4
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns wg0 192.168.56.1
PostUp = resolvectl domain wg0 small.private

# Gate
[Peer]
EndPoint = 192.168.57.1:51820
PublicKey = y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
AllowedIPs = 10.84.139.1
AllowedIPs = 192.168.56.0/24
AllowedIPs = 10.177.87.0/24
AllowedIPs = 10.84.139.0/24
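
Assuming Dick saves the configuration above as /etc/wireguard/wg0.conf and his private key in /etc/wireguard/private-key, the tunnel is brought up (and down) with wg-quick, checked with wg, and tested with a ping to Core.

sudo wg-quick up wg0
sudo wg show wg0
ping -c 1 core.small.private
sudo wg-quick down wg0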

10. The Campus Role

The campus role configures generic campus server machines: network NAS, DVRs, wireless sensors, etc. These are simple Debian machines administered remotely via Ansible. They should use the campus name server, sync with the campus time server, trust the institute certificate authority, and deliver email addressed to root to the system administrator's account on Core.

Wireless campus devices register their public keys using the ./inst client command which updates the WireGuard™ configuration on Gate.

10.1. Role Defaults

As in The Gate Role, the campus role sets a number of variables to default values in its defaults/main.yml file.

roles_t/campus/defaults/main.yml
---
<<network-vars>>
<<address-vars>>

10.2. Include Particulars

The following should be familiar boilerplate by now.

roles_t/campus/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml

- name: Include private variables.
  include_vars: ../private/vars.yml

10.3. Configure Hostname

Clients should be using the expected host name.

roles_t/campus/tasks/main.yml

- name: Configure hostname.
  become: yes
  copy:
    content: "{{ item.content }}"
    dest: "{{ item.file }}"
  loop:
  - { file: /etc/hostname,
      content: "{{ inventory_hostname }}\n" }
  - { file: /etc/mailname,
      content: "{{ inventory_hostname }}.{{ domain_priv }}\n" }

- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
  when: inventory_hostname != ansible_hostname
  tags: actualizer

10.4. Configure Systemd Timesyncd

The institute uses a common time reference throughout the campus. This is essential to campus security, improving the accuracy of log and file timestamps.

roles_t/campus/tasks/main.yml

- name: Configure timesyncd.
  become: yes
  lineinfile:
    path: /etc/systemd/timesyncd.conf
    line: NTP=ntp.{{ domain_priv }}
  notify: Restart systemd-timesyncd.
roles_t/campus/handlers/main.yml
---
- name: Restart systemd-timesyncd.
  become: yes
  systemd:
    service: systemd-timesyncd
    state: restarted
  tags: actualizer

10.5. Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned by groups root and adm. Adding the administrator's account to these groups speeds up debugging.

roles_t/campus/tasks/main.yml

- name: Add {{ ansible_user }} to system groups.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: root,adm

10.6. Install Unattended Upgrades

The institute prefers to install security updates as soon as possible.

roles_t/campus/tasks/main.yml

- name: Install basic software.
  become: yes
  apt: pkg=unattended-upgrades

10.7. Configure Postfix on Campus

The Postfix settings used by the campus include message size, queue times, and the relayhost Core. The default Debian configuration (for an "Internet Site") is otherwise sufficient. Manual installation may prompt for configuration type and mail name. The appropriate answers are listed here but will be checked (corrected) by Ansible tasks below.

  • General type of mail configuration: Internet Site
  • System mail name: new.small.private
roles_t/campus/tasks/main.yml

- name: Install Postfix.
  become: yes
  apt: pkg=postfix

- name: Configure Postfix.
  become: yes
  lineinfile:
    path: /etc/postfix/main.cf
    regexp: "^ *{{ item.p }} *="
    line: "{{ item.p }} = {{ item.v }}"
  loop:
  <<postfix-relaying>>
  <<postfix-message-size>>
  <<postfix-queue-times>>
  <<postfix-maildir>>
  - { p: myhostname,
      v: "{{ inventory_hostname }}.{{ domain_priv }}" }
  - { p: mydestination,
      v: "{{ postfix_mydestination | default('') }}" }
  - { p: relayhost, v: "[smtp.{{ domain_priv }}]" }
  - { p: inet_interfaces, v: loopback-only }
  notify: Restart Postfix.

- name: Start Postfix.
  become: yes
  systemd:
    service: postfix
    state: started
  tags: actualizer

- name: Enable Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
roles_t/campus/handlers/main.yml

- name: Restart Postfix.
  become: yes
  systemd:
    service: postfix
    state: restarted
  tags: actualizer
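
The resulting settings can be reviewed on the campus host with postconf, e.g.:

postconf relayhost myhostname inet_interfaces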

10.8. Set Domain Name

The host's fully qualified (private) domain name (FQDN) is set by an alias in its /etc/hosts file, as is customary on Debian. (See the "recommended method of setting the FQDN" in the hostname(1) manpage.)

roles_t/campus/tasks/main.yml

- name: Set domain name.
  become: yes
  vars:
    name: "{{ inventory_hostname }}"
  lineinfile:
    path: /etc/hosts
    regexp: "^127.0.1.1[        ].*"
    line: "127.0.1.1    {{ name }}.{{ domain_priv }} {{ name }}"

10.9. Configure NRPE

Each campus host runs an NRPE (NAGIOS Remote Plugin Executor) server so that the NAGIOS4 server on Core can collect statistics. The NAGIOS service is discussed in the Configure NAGIOS section of The Core Role.

roles_t/campus/tasks/main.yml

- name: Install NRPE.
  become: yes
  apt:
    pkg: [ nagios-nrpe-server, lm-sensors ]

- name: Install inst_sensors NAGIOS plugin.
  become: yes
  copy:
    src: ../core/files/inst_sensors
    dest: /usr/local/sbin/inst_sensors
    mode: u=rwx,g=rx,o=rx

- name: Configure NRPE server.
  become: yes
  copy:
    content: |
      allowed_hosts=127.0.0.1,::1,{{ core_addr }}
    dest: /etc/nagios/nrpe_local.cfg
  notify: Reload NRPE server.

- name: Configure NRPE commands.
  become: yes
  copy:
    src: nrpe.cfg
    dest: /etc/nagios/nrpe.d/institute.cfg
  notify: Reload NRPE server.

- name: Start NRPE server.
  become: yes
  systemd:
    service: nagios-nrpe-server
    state: started
  tags: actualizer

- name: Enable NRPE server.
  become: yes
  systemd:
    service: nagios-nrpe-server
    enabled: yes
roles_t/campus/handlers/main.yml

- name: Reload NRPE server.
  become: yes
  systemd:
    service: nagios-nrpe-server
    state: reloaded
  tags: actualizer

11. The Ansible Configuration

The small institute uses Ansible to maintain the configuration of its servers. The administrator keeps an Ansible inventory in hosts, and runs playbook site.yml to apply the appropriate institutional role(s) to each host. Examples of these files are included here, and are used to test the roles. The example configuration applies the institutional roles to VirtualBox machines prepared according to chapter Testing.

The actual Ansible configuration is kept in a Git "superproject" containing replacements for the example hosts inventory and site.yml playbook, as well as the public/ and private/ particulars. Thus changes to this document and its tangle are easily merged with git pull --recurse-submodules or git submodule update, while changes to the institute's particulars are committed to a separate revision history.

11.1. ansible.cfg

The Ansible configuration file ansible.cfg contains just a handful of settings, some included just to create a test jig as described in Testing.

  • interpreter_python is set to suppress a warning from Ansible's "automatic interpreter discovery" (described here). It declares that Python 3 can be expected on all institute hosts.
  • vault_password_file is set to suppress prompts for the vault password. The institute keeps its vault password in Secret/ (as described in Keys) and thus sets this parameter to Secret/vault-password.
  • inventory is set to avoid specifying it on the command line.
  • roles_path is set to the recently tangled roles files in roles_t/ which are preferred in the test configuration.
ansible.cfg
[defaults]
interpreter_python=/usr/bin/python3
vault_password_file=Secret/vault-password
inventory=hosts
roles_path=roles_t

11.2. hosts

The Ansible inventory file hosts describes all of the institute's machines starting with the main servers Front, Core and Gate. It provides the IP addresses, administrator account names and passwords for each machine. The IP addresses are all private, campus network addresses except Front's public IP. The following example host file describes three test servers named front, core and gate.

hosts
all:
  vars:
    ansible_user: sysadm
    ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
  hosts:
    front:
      ansible_host: 192.168.58.3
      ansible_become_password: "{{ become_front }}"
    core:
      ansible_host: 192.168.56.1
      ansible_become_password: "{{ become_core }}"
    gate:
      ansible_host: 192.168.56.2
      ansible_become_password: "{{ become_gate }}"
  children:
    campus:
      hosts:
        gate:

The values of the ansible_become_password key are references to variables defined in Secret/become.yml, which is loaded as "extra" variables by a -e option on the ansible-playbook command line.

Secret/become.yml
become_front: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        3563626131333733666466393166323135383838666338666131336335326
        3656437663032653333623461633866653462636664623938356563306264
        3438660a35396630353065383430643039383239623730623861363961373
        3376663366566326137386566623164313635303532393335363063333632
        363163316436380a336562323739306231653561613837313435383230313
        1653565653431356362
become_core: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        3464643665363937393937633432323039653530326465346238656530303
        8633066663935316365376438353439333034666366363739616130643261
        3232380a66356462303034636332356330373465623337393938616161386
        4653864653934373766656265613636343334356361396537343135393663
        313562613133380a373334393963623635653264663538656163613433383
        5353439633234666134
become_gate: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        3138306434313739626461303736666236336666316535356561343566643
        6613733353434333962393034613863353330623761623664333632303839
        3838350a37396462343738303331356134373634306238633030303831623
        0636537633139366333373933396637633034383132373064393939363231
        636264323132370a393135666335303361326330623438613630333638393
        1303632663738306634

The passwords are individually encrypted just to make it difficult to acquire a list of all institute privileged account passwords in one glance. The multi-line values are generated by the ansible-vault encrypt_string command, which uses the ansible.cfg file and thus the Secret/vault-password file.
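
For example, a new value for become_core can be generated with a command like the following (the plain-text password fubar is just an example) and pasted into Secret/become.yml.

ansible-vault encrypt_string --name become_core 'fubar'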

11.3. playbooks/site.yml

The example playbooks/site.yml playbook (below) applies the appropriate institutional role(s) to the hosts and groups defined in the example inventory: hosts.

playbooks/site.yml
---
- name: Configure All
  hosts: all
  roles: [ all ]

- name: Configure Front
  hosts: front
  roles: [ front ]

- name: Configure Gate
  hosts: gate
  roles: [ gate ]

- name: Configure Core
  hosts: core
  roles: [ core ]

- name: Configure Campus
  hosts: campus
  roles: [ campus ]

11.4. Secret/vault-password

As already mentioned, the small institute keeps its Ansible vault password, a "master secret", on the encrypted partition mounted at Secret/ in a file named vault-password. The administrator generated a 16 character pronounceable password with gpw 1 16 and saved it like so: gpw 1 16 >Secret/vault-password. The following example password matches the example encryptions above.

Secret/vault-password
alitysortstagess

11.5. Creating A Working Ansible Configuration

A working Ansible configuration can be "tangled" from this document to produce the test configuration described in the Testing chapter. The tangling is done by Emacs's org-babel-tangle function and has already been performed with the resulting tangle included in the distribution with this document.

An institution using the Ansible configuration herein can include this document and its tangle as a Git submodule, e.g. in institute/, and thus safely merge updates while keeping public and private particulars separate, in sibling subdirectories public/ and private/. The following example commands create a new Git repo in ~/network/ and add an institute/ submodule.

cd
mkdir network
cd network
git init
git submodule add git://birchwood-abbey.net/~puck/institute
git add institute

An institute administrator would then need to add several more files.

  • A top-level Ansible configuration file, ansible.cfg, would be created by copying institute/ansible.cfg and changing the roles_path to roles:institute/roles.
  • A host inventory, hosts, would be created, perhaps by copying institute/hosts and changing its IP addresses.
  • A site playbook, site.yml, would be created in a new playbooks/ subdirectory by copying institute/playbooks/site.yml with appropriate changes.
  • All of the files in institute/public/ and institute/private/ would be copied, with appropriate changes, into new subdirectories public/ and private/.
  • ~/network/Secret would be a symbolic link to the (auto-mounted?) location of the administrator's encrypted USB drive, as described in section Keys.

The files in institute/roles_t/ were "tangled" from this document and must be copied to institute/roles/ for reasons discussed in the next section. This document does not "tangle" directly into roles/ to avoid clobbering changes to a working (debugged!) configuration.

The playbooks/ directory must include the institutional playbooks, which find their settings and templates relative to this directory, e.g. in ../private/vars.yml. Running institutional playbooks from ~/network/playbooks/ means they will use ~/network/private/ rather than the example ~/network/institute/private/.

cp -r institute/roles_t institute/roles
( cd playbooks; ln -s ../institute/playbooks/* . )

Given these preparations, the inst script should work in the super-project's directory.

./institute/inst config -n

11.6. Maintaining A Working Ansible Configuration

The Ansible roles currently tangle into the roles_t/ directory to ensure that debugged Ansible code in roles/ is not clobbered by code tangled from this document. Comparing roles_t/ with roles/ will reveal any changes made to roles/ during debugging that need to be reconciled with this document as well as any policy changes in this document that require changes to the current roles/.

When debugging literate programs becomes A Thing, then this document can tangle directly into roles/, and literate debuggers can find their way back to the code block in this document.

12. The Institute Commands

The institute's administrator uses a convenience script to reliably execute standard procedures. The script is run with the command name ./inst because it is intended to run "in" the same directory as the Ansible configuration. The Ansible commands it executes are expected to get their defaults from ./ansible.cfg.

12.1. Sub-command Blocks

The code blocks in this chapter tangle into the inst script. Each block examines the script's command line arguments to determine whether its sub-command was intended to run, and exits with an appropriate code when it is done.

The first code block is the header of the ./inst script.

inst
#!/usr/bin/perl -w
#
# DO NOT EDIT.
#
# This file was tangled from a small institute's README.org.

use strict;
use IO::File;

12.2. Sanity Check

The next code block does not implement a sub-command; it implements part of all ./inst sub-commands. It performs a "sanity check" on the current directory, warning of missing files or directories, and especially checking that all files in private/ have appropriate permissions. It probes past the Secret/ mount point (probing for Secret/become.yml) to ensure the volume is mounted.

inst

sub note_missing_file_p ($);
sub note_missing_directory_p ($);

{
  my $missing = 0;
  if (note_missing_file_p "ansible.cfg") { $missing += 1; }
  if (note_missing_file_p "hosts") { $missing += 1; }
  if (note_missing_directory_p "Secret") { $missing += 1; }
  if (note_missing_file_p "Secret/become.yml") { $missing += 1; }
  if (note_missing_directory_p "playbooks") { $missing += 1; }
  if (note_missing_file_p "playbooks/site.yml") { $missing += 1; }
  if (note_missing_directory_p "roles") { $missing += 1; }
  if (note_missing_directory_p "public") { $missing += 1; }
  if (note_missing_directory_p "private") { $missing += 1; }

  for my $filename (glob "private/*") {
    my $perm = (stat $filename)[2];
    if ($perm & 077) {
      print "$filename: not private\n";
    }
  }
  die "$missing missing files\n" if $missing != 0;
}

sub note_missing_file_p ($) {
  my ($filename) = @_;
  if (! -f $filename) {
    print "$filename: missing\n";
    return 1;
  } else {
    return 0;
  }
}

sub note_missing_directory_p ($) {
  my ($dirname) = @_;
  if (! -d $dirname) {
    print "$dirname: missing\n";
    return 1;
  } else {
    return 0;
  }
}

12.3. Importing Ansible Variables

To ensure that Ansible and ./inst are simpatico vis-à-vis certain variable values (esp. private values like network addresses), a check-inst-vars.yml playbook is used to update the Perl syntax file private/vars.pl before ./inst loads it. The Perl code in inst declares the necessary global variables and private/vars.pl sets them.

inst

sub mysystem (@) {
  my $line = join (" ", @_);
  print "$line\n";
  my $status = system $line;
  die "status: $status\nCould not run $line: $!\n" if $status != 0;
}

mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null";

our ($domain_name, $domain_priv,
     $front_addr, $front_wg_pubkey,
     $public_wg_net_cidr, $public_wg_port,
     $private_net_cidr, $wild_net_cidr,
     $gate_wild_addr, $gate_wg_pubkey,
     $campus_wg_net_cidr, $campus_wg_port,
     $core_addr, $core_wg_pubkey);
do "./private/vars.pl";

The playbook that updates private/vars.pl:

playbooks/check-inst-vars.yml
- hosts: localhost
  gather_facts: no
  roles: [ check-inst-vars ]

12.4. The check-inst-vars Role

This role is executed by playbooks/check-inst-vars.yml and is not just a playbook because it needs a copy of the role defaults.

roles_t/check-inst-vars/defaults/main.yml
---
<<network-vars>>
<<address-vars>>
roles_t/check-inst-vars/tasks/main.yml
---
- include_vars: ../public/vars.yml
- include_vars: ../private/vars.yml
- copy:
    content: |
      $domain_name = "{{ domain_name }}";
      $domain_priv = "{{ domain_priv }}";

      $front_addr = "{{ front_addr }}";
      $front_wg_pubkey = "{{ front_wg_pubkey }}";

      $public_wg_net_cidr = "{{ public_wg_net_cidr }}";
      $public_wg_port = "{{ public_wg_port }}";

      $private_net_cidr = "{{ private_net_cidr }}";
      $wild_net_cidr = "{{ wild_net_cidr }}";

      $gate_wild_addr = "{{ gate_wild_addr }}";
      $gate_wg_pubkey = "{{ gate_wg_pubkey }}";

      $campus_wg_net_cidr = "{{ campus_wg_net_cidr }}";
      $campus_wg_port = "{{ campus_wg_port }}";

      $core_addr = "{{ core_addr }}";
      $core_wg_pubkey = "{{ core_wg_pubkey }}";
    dest: ../private/vars.pl
    mode: u=rw,g=,o=

Most of these settings are already in private/vars.yml. The following few provide the servers' public keys and ports.

private/vars.yml
front_wg_pubkey: S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
public_wg_port:  39608

gate_wg_pubkey:  y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
campus_wg_port:  51820

core_wg_pubkey:  lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
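
For orientation, the generated private/vars.pl is just a flat list of Perl assignments. With the example values used in this document it would look roughly like the following sketch (only values given elsewhere in this document are shown; the remaining assignments are elided):

$domain_name = "small.example.org";
$domain_priv = "small.private";
$front_addr = "192.168.15.4";
$front_wg_pubkey = "S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=";
$public_wg_port = "39608";
$core_addr = "192.168.56.1";
$core_wg_pubkey = "lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=";
# ...and so on for the rest of the declared variables.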

12.5. The CA Command

The next code block implements the CA sub-command, which creates a new CA (certificate authority) in Secret/CA/ as well as SSH and PGP keys for the administrator, Monkey, Front and root, also in sub-directories of Secret/. The CA is created with the "common name" provided by the full_name variable. An example is given here.

public/vars.yml
full_name: Small Institute LLC

The Secret/ directory is on an off-line, encrypted volume plugged in just for the duration of ./inst commands, so Secret/ is actually a symbolic link to a volume's automount location.

ln -s /media/sysadm/ADE7-F866/ Secret

The Secret/CA/ directory is prepared using Easy RSA's make-cadir command. The Secret/CA/vars file thus created is edited to contain the appropriate names (or just to set EASYRSA_DN to cn_only).

sudo apt install easy-rsa
( cd Secret/; make-cadir CA )
./inst CA

Running ./inst CA creates the new CA and keys. The command prompts for the Common Name (or several levels of Organizational names) of the certificate authority. The full_name is given: Small Institute LLC. The CA is used to issue certificates for front, gate and core, which are installed on the servers during the next ./inst config.

inst

if (defined $ARGV[0] && $ARGV[0] eq "CA") {
  die "usage: $0 CA" if @ARGV != 1;
  die "Secret/CA/easyrsa: not an executable\n"
    if ! -x "Secret/CA/easyrsa";
  die "Secret/CA/pki/: already exists\n" if -e "Secret/CA/pki";

  umask 077;
  mysystem "cd Secret/CA; ./easyrsa init-pki";
  mysystem "cd Secret/CA; ./easyrsa build-ca nopass";
  # Common Name: small.example.org

  my $dom = $domain_name;
  my $pvt = $domain_priv;
  mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass";
  mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass";
  umask 077;

  mysystem "mkdir --mode=700 Secret/root.gnupg";
  mysystem ("gpg --homedir Secret/root.gnupg",
            "--batch --quick-generate-key --passphrase ''",
            "root\@core.$pvt");
  mysystem ("gpg --homedir Secret/root.gnupg",
            "--export --armor --output Secret/root-pub.pem",
            "root\@core.$pvt");
  chmod 0440, "root-pub.pem";
  mysystem ("gpg --homedir Secret/root.gnupg",
            "--export-secret-key --armor --output Secret/root-sec.pem",
            "root\@core.$pvt");
  chmod 0400, "root-sec.pem";

  mysystem "mkdir Secret/ssh_admin";
  chmod 0700, "Secret/ssh_admin";
  mysystem ("ssh-keygen -q -t rsa",
            "-C A\\ Small\\ Institute\\ Administrator",
            "-N '' -f Secret/ssh_admin/id_rsa");

  mysystem "mkdir Secret/ssh_monkey";
  chmod 0700, "Secret/ssh_monkey";
  mysystem "echo 'HashKnownHosts  no' >Secret/ssh_monkey/config";
  mysystem ("ssh-keygen -q -t rsa -C monkey\@core",
            "-N '' -f Secret/ssh_monkey/id_rsa");

  mysystem "mkdir Secret/ssh_front";
  chmod 0700, "Secret/ssh_front";
  mysystem "ssh-keygen -A -f Secret/ssh_front -C $dom";
  exit;
}

12.6. The Config Command

The next code block implements the config sub-command, which provisions network services by running the playbooks/site.yml playbook. It recognizes an optional -n flag indicating that the service configurations should just be checked. Given an optional host name, it provisions (or checks) just the named host.

Example command lines:

./inst config
./inst config -n
./inst config HOST
./inst config -n HOST
inst

if (defined $ARGV[0] && $ARGV[0] eq "config") {
  die "Secret/CA/easyrsa: not executable\n"
    if ! -x "Secret/CA/easyrsa";
  shift;
  my $cmd = "ansible-playbook -e \@Secret/become.yml";
  if (defined $ARGV[0] && $ARGV[0] eq "-n") {
    shift;
    $cmd .= " --check --diff"
  }
  if (@ARGV == 0) {
    ;
  } elsif (defined $ARGV[0]) {
    my $hosts = lc $ARGV[0];
    die "$hosts: contains illegal characters"
      if $hosts !~ /^!?[a-z][-a-z0-9,!]+$/;
    $cmd .= " -l $hosts";
  } else {
    die "usage: $0 config [-n] [HOSTS]\n";
  }
  $cmd .= " playbooks/site.yml";
  mysystem $cmd;
  exit;
}

12.7. Account Management

For general information about members and their Unix accounts, see Accounts. The account management sub-commands maintain a mapping associating member "usernames" (Unix account names) with their records. The mapping is stored, along with other membership information, in private/members.yml as the value associated with the key members.

A new member's record in the members mapping will have the status key value current. That key gets value former when the member leaves.3 Access by former members is revoked by invalidating the Unix account passwords, removing any authorized SSH keys from Front and Core, and removing their public keys from the WireGuard™ configurations.

The example file (below) contains a membership roll with one membership record, for an account named dick, which registered the public keys of devices named dick, dicks-phone and dicks-razr. dicks-razr is presumably a replacement for dicks-phone, which was lost and its key invalidated. Lastly, Dick's membership record includes a vault-encrypted password (for Fetchmail) and the two password hashes installed on Front and Core. (The example hashes are truncated versions.)

private/members.yml
---
members:
  dick:
    status: current
    clients:
    - dick 4 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
    - dicks-phone 5 --WFbTSff17QiYObXoU+7mjaEUCqKjgvLqA49pAxqVeWg=
    - dicks-razr 6 zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
    password_front:
      $6$17h49U76$c7TsH6eMVmoKElNANJU1F1LrRrqzYVDreNu.QarpCoSt9u0gTHgiQ
    password_core:
      $6$E9se3BoSilq$T.W8IUb/uSlhrVEWUQsAVBweiWB4xb3ebQ0tguVxJaeUkqzVmZ
    password_fetchmail: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      38323138396431323564366136343431346562633965323864633938613363336
      4333334333966363136613264636365383031376466393432623039653230390a
      39366232633563646361616632346238333863376335633639383162356661326
      4363936393530633631616630653032343465383032623734653461323331310a
      6535633263656434393030333032343533626235653332626330666166613833
usernames:
- dick
clients:
- thing 3 LdsCsgfjKCfd5+VKS+Q/dQhWO8NRNygByDO2VxbXlSQ=

The members.yml file will be modified during testing, and should not be overwritten by a re-tangle during testing, so it is not tangled from this file. Thus in a freshly built (e.g. test) system private/members.yml does not exist until a ./inst new command creates the first member. Until then, Ansible includes the private/members-empty.yml file. It does so using the first_found lookup plugin and a list of the two files, members.yml first and members-empty.yml last. That list is the value of membership_rolls.

membership-rolls

membership_rolls:
- "../private/members.yml"
- "../private/members-empty.yml"
private/members-empty.yml
---
members: {}
usernames: []
clients: []
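
The roles that read the roll (defined elsewhere in this document) include whichever of the two files is found first; such a task has roughly the following shape (a sketch, not tangled from this document):

- include_vars: "{{ lookup('first_found', membership_rolls) }}"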

Using the standard Perl library YAML::XS, the subroutine for reading the membership roll is simple, returning the top-level hash read from the file. The dump subroutine is another story (below).

inst

use YAML::XS qw(LoadFile DumpFile);

sub read_members_yaml () {
  my $path;
  $path = "private/members.yml";
  if (-e $path) { return LoadFile ($path); }
  $path = "private/members-empty.yml";
  if (-e $path) { return LoadFile ($path); }
  die "private/members.yml: not found\n";
}

sub write_members_yaml ($) {
  my ($yaml) = @_;
  my $old_umask = umask 077;
  my $path = "private/members.yml";
  print "$path: "; STDOUT->flush;
  eval { #DumpFile ("$path.tmp", $yaml);
         dump_members_yaml ("$path.tmp", $yaml);
         rename ("$path.tmp", $path)
           or die "Could not rename $path.tmp: $!\n"; };
  my $err = $@;
  umask $old_umask;
  if ($err) {
    print "ERROR\n";
  } else {
    print "updated\n";
  }
  die $err if $err;
}

sub dump_members_yaml ($$) {
  my ($pathname, $yaml) = @_;
  my $O = new IO::File;
  open ($O, ">$pathname") or die "Could not open $pathname: $!\n";
  print $O "---\n";
  if (keys %{$yaml->{"members"}}) {
    print $O "members:\n";
    for my $user (sort keys %{$yaml->{"members"}}) {
      print_member ($O, $user, $yaml->{"members"}->{$user});
    }
    print $O "usernames:\n";
    for my $user (sort keys %{$yaml->{"members"}}) {
      print $O "- $user\n";
    }
  } else {
    print $O "members: {}\n";
    print $O "usernames: []\n";
  }
  if (@{$yaml->{"clients"}}) {
    print $O "clients:\n";
    for my $name (@{$yaml->{"clients"}}) {
      print $O "- $name\n";
    }
  } else {
    print $O "clients: []\n";
  }
  close $O or die "Could not close $pathname: $!\n";
}

The first implementation using YAML::Tiny balked at the !vault data type. The current version using YAML::XS (Simonov's libyaml) does not support local data types either, but it does not abort; it simply parses the tagged value as a multi-line string. Luckily the structure of members.yml is relatively simple and fixed, so a purpose-built printer can add back the !vault data types at the appropriate points. YAML::XS thus provides only the (imperfect) parser; the membership roll is written by the purpose-built printer. Also luckily, the YAML that printer produces makes the resulting membership roll easier to read, with the username and status at the top of each record.

inst

sub print_member ($$$) {
  my ($out, $username, $member) = @_;
  print $out "  ", $username, ":\n";
  print $out "    status: ", $member->{"status"}, "\n";
  if (@{$member->{"clients"} || []}) {
    print $out "    clients:\n";
    for my $name (@{$member->{"clients"} || []}) {
      print $out "    - ", $name, "\n";
    }
  } else {
    print $out "    clients: []\n";
  }
  print $out "    password_front: ", $member->{"password_front"}, "\n";
  print $out "    password_core: ", $member->{"password_core"}, "\n";
  if (defined $member->{"password_fetchmail"}) {
    print $out "    password_fetchmail: !vault |\n";
    for my $line (split /\n/, $member->{"password_fetchmail"}) {
      print $out "      $line\n";
    }
  }
  my @standard_keys = ( "status", "clients",
                        "password_front", "password_core",
                        "password_fetchmail" );
  my @other_keys = (sort
                    grep { my $k = $_;
                           ! grep { $_ eq $k } @standard_keys }
                    keys %$member);
  for my $key (@other_keys) {
    print $out "    $key: ", $member->{$key}, "\n";
  }
}

12.8. The New Command

The next code block implements the new sub-command. It adds a new member to the institute's membership roll. It runs an Ansible playbook to create the member's Nextcloud user, updates private/members.yml, and runs the site.yml playbook. The site playbook (re)creates the member's accounts on Core and Front, (re)installs the member's personal homepage on Front, and the member's Fetchmail service on Core. All services are configured with an initial, generated password.

inst

sub valid_username (@);
sub shell_escape ($);
sub strip_vault ($);

if (defined $ARGV[0] && $ARGV[0] eq "new") {
  my $user = valid_username (@ARGV);
  my $yaml = read_members_yaml ();
  my $members = $yaml->{"members"};
  die "$user: already exists\n" if defined $members->{$user};

  my $pass = `apg -n 1 -x 12 -m 12`; chomp $pass;
  print "Initial password: $pass\n";
  my $epass = shell_escape $pass;
  my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front;
  my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core;
  my $vault = strip_vault `ansible-vault encrypt_string "$epass"`;
  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "playbooks/nextcloud-new.yml",
            "-e user=$user", "-e pass=\"$epass\"",
            ">/dev/null");
  $members->{$user} = { "status" => "current",
                        "password_front" => $front,
                        "password_core" => $core,
                        "password_fetchmail" => $vault };
  write_members_yaml $yaml;
  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "-t accounts -l core,front playbooks/site.yml",
            ">/dev/null");
  exit;
}

sub valid_username (@) {
  my $sub = $_[0];
  die "usage: $0 $sub USER\n"
    if @_ != 2;
  my $username = lc $_[1];
  die "$username: does not begin with an alphabetic character\n"
    if $username !~ /^[a-z]/;
  die "$username: contains non-alphanumeric character(s)\n"
    if $username !~ /^[a-z0-9]+$/;
  return $username;
}

sub shell_escape ($) {
  my ($string) = @_;
  my $result = "$string";
  $result =~ s/([\$`"\\ ])/\\$1/g;
  return ($result);
}

sub strip_vault ($) {
  my ($string) = @_;
  die "Unexpected result from ansible-vault: $string\n"
    if $string !~ /^ *!vault [|]/;
  my @lines = split /^ */m, $string;
  return (join "", @lines[1..$#lines]);
}
playbooks/nextcloud-new.yml
- hosts: core
  tasks:
  - name: Run occ user:add.
    become: yes
    shell:
      chdir: /var/www/nextcloud/
      cmd: >
        sudo -u www-data sh -c
        "OC_PASS={{ pass }}
        php occ user:add {{ user }} --password-from-env"

12.9. The Pass Command

The institute's passwd command on Core securely emails root with a member's desired password (encrypted with root's PGP key, never in the clear). The command may update the servers immediately or let the administrator do that using the ./inst pass command. In either case, the administrator needs to update the membership roll, and so receives an encrypted email, which gets piped into ./inst pass. This command decrypts the message, parses the (YAML) content, updates private/members.yml, and runs the full Ansible site.yml playbook to update the servers. If all goes well a message is sent to member@core.
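
For example, assuming the administrator has saved the encrypted message to a file (the file name here is hypothetical):

./inst pass < ~/new-password.msg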

12.9.1. Less Aggressive passwd.

The next code block implements the less aggressive passwd command. It is less aggressive because it just emails root. It does not update the servers, so it does not need an SSH key and password to root (any privileged account) on Front, nor a set-UID root script (nor equivalent) on Core. It runs (via Sudo) as the administrative user, who is a member of the shadow group, so it can read /etc/shadow and verify the member's current password. The member will need to wait for confirmation from the administrator, but all keys to root at the institute stay in Secret/.

roles_t/core/templates/passwd
#!/bin/perl -wT

use strict;
use IO::File;

$ENV{PATH} = "/usr/sbin:/usr/bin:/bin";

my ($username) = getpwuid $<;
if ($username ne "{{ ansible_user }}") {
  { exec ("sudo", "-u", "{{ ansible_user }}",
          "/usr/local/bin/passwd", $username) };
  print STDERR "Could not exec sudo: $!\n";
  exit 1;
}

$username = $ARGV[0];
my $passwd;
{
  my $SHADOW = new IO::File;
  open $SHADOW, "</etc/shadow" or die "Cannot read /etc/shadow: $!\n";
  my ($line) = grep /^$username:/, <$SHADOW>;
  close $SHADOW;
  die "No /etc/shadow record found: $username\n" if ! defined $line;
  (undef, $passwd) = split ":", $line;
}

system "stty -echo";
END { system "stty echo"; }

print "Current password: ";
my $pass = <STDIN>; chomp $pass;
print "\n";
my $hash = crypt($pass, $passwd);
die "Sorry...\n" if $hash ne $passwd;

print "New password: ";
$pass = <STDIN>; chomp($pass);
die "Passwords must be at least 10 characters long.\n"
  if length $pass < 10;
print "\nRetype password: ";
my $pass2 = <STDIN>; chomp($pass2);
print "\n";
die "New passwords do not match!\n"
  if $pass2 ne $pass;

use MIME::Base64;
my $epass = encode_base64 $pass;

use File::Temp qw(tempfile);
my ($TMP, $tmp) = tempfile;
close $TMP;

my $O = new IO::File;
open $O, ("| gpg --encrypt --armor"
          ." --recipient-file /etc/root-pub.pem"
          ." > $tmp") or die "Error running gpg > $tmp: $!\n";
print $O <<EOD;
username: $username
password: $epass
EOD
close $O or die "Error closing pipe to gpg: $!\n";

use File::Copy;
open ($O, "| sendmail root");
print $O <<EOD;
From: root
To: root
Subject: New password.

EOD
$O->flush;
copy $tmp, $O;
#print $O `cat $tmp`;
close $O or die "Error closing pipe to sendmail: $!\n";

print "
Your request was sent to Root.  PLEASE WAIT for email confirmation
that the change was completed.\n";
exit;

12.9.2. Less Aggressive Pass Command

The following code block implements the ./inst pass command, used by the administrator to update private/members.yml before running playbooks/site.yml and emailing the concerned member.

inst

use MIME::Base64;
sub write_wireguard ($);

if (defined $ARGV[0] && $ARGV[0] eq "pass") {
  my $I = new IO::File;
  open $I, "gpg --homedir Secret/root.gnupg --quiet --decrypt |"
    or die "Error running gpg: $!\n";
  my $msg_yaml = LoadFile ($I);
  close $I or die "Error closing pipe from gpg: $!\n";

  my $user = $msg_yaml->{"username"};
  die "Could not find a username in the decrypted input.\n"
    if ! defined $user;
  my $pass64 = $msg_yaml->{"password"};
  die "Could not find a password in the decrypted input.\n"
    if ! defined $pass64;

  my $mem_yaml = read_members_yaml ();
  my $members = $mem_yaml->{"members"};
  my $member = $members->{$user};
  die "$user: does not exist\n" if ! defined $member;
  die "$user: no longer current\n" if $member->{"status"} ne "current";

  my $pass = decode_base64 $pass64;
  my $epass = shell_escape $pass;
  my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front;
  my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core;
  my $vault = strip_vault `ansible-vault encrypt_string "$epass"`;
  $member->{"password_front"} = $front;
  $member->{"password_core"} = $core;
  $member->{"password_fetchmail"} = $vault;

  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "playbooks/nextcloud-pass.yml",
            "-e user=$user", "-e \"pass=$epass\"",
            ">/dev/null");
  write_members_yaml $mem_yaml;
  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "-t accounts playbooks/site.yml",
            ">/dev/null");
  my $O = new IO::File;
  open ($O, "| sendmail $user\@$domain_priv")
    or die "Could not pipe to sendmail: $!\n";
  print $O "From: <root>
To: <$user>
Subject: Password change.

Your new password has been distributed to the servers.

As always: please email root with any questions or concerns.\n";
  close $O or die "pipe to sendmail failed: $!\n";
  exit;
}

And here is the playbook that runs Nextcloud's occ user:resetpassword command.

playbooks/nextcloud-pass.yml
- hosts: core
  no_log: yes
  tasks:
  - name: Run occ user:resetpassword.
    become: yes
    shell:
      chdir: /var/www/nextcloud/
      cmd: >
        sudo -u www-data sh -c
        "OC_PASS={{ pass }}
        php occ user:resetpassword {{ user }} --password-from-env"

12.9.3. Installing the Less Aggressive passwd

The following Ansible tasks install the less aggressive passwd script in /usr/local/bin/passwd on Core, and a sudo policy file declaring that any user can run the script as the admin user. The admin user is added to the shadow group so that the script can read /etc/shadow and verify a member's current password. The public PGP key for root@core is also imported into the admin user's GnuPG configuration so that the email to root can be encrypted.

roles_t/core/tasks/main.yml

- name: Install institute passwd command.
  become: yes
  template:
    src: passwd
    dest: /usr/local/bin/passwd
    mode: u=rwx,g=rx,o=rx

- name: Authorize institute passwd command as {{ ansible_user }}.
  become: yes
  copy:
    content: |
      ALL ALL=({{ ansible_user }}) NOPASSWD: /usr/local/bin/passwd
    dest: /etc/sudoers.d/01passwd
    mode: u=r,g=r,o=
    owner: root
    group: root

- name: Authorize {{ ansible_user }} to read /etc/shadow.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: shadow

- name: Authorize {{ ansible_user }} to run /usr/bin/php as www-data.
  become: yes
  copy:
    content: |
      {{ ansible_user }} ALL=(www-data) NOPASSWD: /usr/bin/php
    dest: /etc/sudoers.d/01www-data-php
    mode: u=r,g=r,o=
    owner: root
    group: root

- name: Install root PGP key file.
  become: yes
  copy:
    src: ../Secret/root-pub.pem
    dest: /etc/root-pub.pem
    mode: u=r,g=r,o=r
  notify: Import root PGP key.
roles_t/core/handlers/main.yml

- name: Import root PGP key.
  become: no
  command: gpg --import /etc/root-pub.pem

12.10. The Old Command

The old command disables a member's account (and thus their clients).

inst

if (defined $ARGV[0] && $ARGV[0] eq "old") {
  my $user = valid_username (@ARGV);
  my $yaml = read_members_yaml ();
  my $members = $yaml->{"members"};
  my $member = $members->{$user};
  die "$user: does not exist\n" if ! defined $member;

  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "playbooks/nextcloud-old.yml -e user=$user",
            ">/dev/null");
  $member->{"status"} = "former";
  umask 077;
  write_members_yaml $yaml;
  write_wireguard $yaml;
  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "-t accounts playbooks/site.yml",
            ">/dev/null");
  exit;
}
playbooks/nextcloud-old.yml
- hosts: core
  tasks:
  - name: Run occ user:disable.
    become: yes
    shell:
      chdir: /var/www/nextcloud/
      cmd: >
        sudo -u www-data sh -c
        "php occ user:disable {{ user }}"

12.11. The Client Command

The client command registers the public key of a client wishing to connect to the institute's WireGuard™ subnets. The command allocates a host number, associates it with the provided public key, and updates the configuration files front-wg0.conf and gate-wg0.conf. These are distributed to the servers, which are then reset. Thereafter the servers recognize the new peer (and drop packets from any "peer" that is no longer authorized).

The client command also generates template WireGuard™ configuration files for the client. They contain the necessary parameters except the client's PrivateKey, which in most cases should be found in the local /etc/wireguard/private-key, not in the configuration files. Private keys (and corresponding public keys) should be generated on the client (i.e. by the WireGuard for Android™ app) and never revealed (i.e. sent in email, copied to a network drive, etc.).

The generated configurations vary depending on the type of client, which must be given as the first argument to the command. For most types, two configuration files are generated. campus.conf contains the client's campus VPN configuration, and public.conf the client's public VPN configuration.

  • ./inst client android NAME USER PUBKEY
    An android client runs WireGuard for Android™ or work-alike.
  • ./inst client debian NAME USER PUBKEY
    A debian client runs a Debian/Linux desktop with NetworkManager (though wg-quick is currently used).
  • ./inst client campus NAME PUBKEY
    A campus client is an institute machine (with or without desktop) that is used by the institute generally, is not the property of a member, never roams off campus, and so is remotely administered with Ansible. Just one configuration file is generated: campus.conf.

The administrator emails the template .conf files to new members. (They contain no secrets.) The members will have already installed the wireguard package in order to run the wg genkey and wg pubkey commands. After receiving the .conf templates, they paste in their private keys and install the resulting files in e.g. /etc/wireguard/wg0.conf and wg1.conf. To connect, members run a command like systemctl start wg-quick@wg0. (There may be better support in NetworkManager soon.)
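
For a sense of what the member receives, the public.conf template generated for a debian client has roughly the following shape, with the institute's variable names standing in for the actual addresses and keys (a sketch of the generator's output, not a literal file):

[Interface]
Address = <the client's address on the public WireGuard™ subnet>
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i <core_addr>
PostUp = resolvectl domain %i <domain_priv>

[Peer]
PublicKey = <front_wg_pubkey>
EndPoint = <front_addr>:<public_wg_port>
AllowedIPs = <Front's address on the public WireGuard™ subnet>
AllowedIPs = <private_net_cidr>
AllowedIPs = <wild_net_cidr>
AllowedIPs = <public_wg_net_cidr>
AllowedIPs = <campus_wg_net_cidr>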

inst
sub write_wg_server ($$$$$);
sub write_wg_client ($$$$$$);
sub hostnum_to_ipaddr ($$);
sub hostnum_to_ipaddr_cidr ($$);

if (defined $ARGV[0] && $ARGV[0] eq "client") {
  my $type = $ARGV[1]||"";
  my $name = $ARGV[2]||"";
  my $user = $ARGV[3]||"";
  my $pubkey = $ARGV[4]||"";
  if ($type eq "android" || $type eq "debian") {
    die "usage: $0 client $type NAME USER PUBKEY\n" if @ARGV != 5;
    die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
  } elsif ($type eq "campus") {
    die "usage: $0 client campus NAME PUBKEY\n" if @ARGV != 4;
    die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
    $pubkey = $user;
    $user = "";
  } else {
    die "usage: $0 client [debian|android|campus]\n";
  }
  my $yaml = read_members_yaml;
  my $members = $yaml->{"members"};
  my $member = $members->{$user};
  die "$user: does not exist\n"
    if !defined $member && $type ne "campus";
  die "$user: no longer current\n"
    if defined $member && $member->{"status"} ne "current";

  my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
     = map { [ (split / /), "" ] } @{$yaml->{"clients"}};

  my @member_peers = ();
  for my $u (sort keys %$members) {
    push @member_peers,
         map { [ (split / /), $u ] } @{$members->{$u}->{"clients"}};
  }

  my @all_peers = sort { $a->[1] <=> $b->[1] }
                       (@campus_peers, @member_peers);

  for my $p (@all_peers) {
    my ($n, $h, $t, $k, $u) = @$p;
    die "$n: name already in use by $u\n"
        if $name eq $n && $u ne "";
    die "$n: name already in use on campus\n"
        if $name eq $n && $u eq "";
  }

  my $hostnum = (@all_peers
                 ? 1 + $all_peers[$#all_peers][1]
                 : 3);

  push @{$type eq "campus"
         ? $yaml->{"clients"}
         : $member->{"clients"}},
       "$name $hostnum $type $pubkey";

  umask 077;
  write_members_yaml $yaml;
  write_wireguard $yaml;

  umask 033;
  write_wg_client ("public.conf",
                   hostnum_to_ipaddr ($hostnum, $public_wg_net_cidr),
                   $type,
                   $front_wg_pubkey,
                   "$front_addr:$public_wg_port",
                   hostnum_to_ipaddr (1, $public_wg_net_cidr))
    if $type ne "campus";
  write_wg_client ("campus.conf",
                   hostnum_to_ipaddr ($hostnum, $campus_wg_net_cidr),
                   $type,
                   $gate_wg_pubkey,
                   "$gate_wild_addr:$campus_wg_port",
                   hostnum_to_ipaddr (1, $campus_wg_net_cidr));

  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "-l gate,front",
            "-t accounts playbooks/site.yml",
            ">/dev/null");
  exit;
}

sub write_wireguard ($) {
  my ($yaml) = @_;

  my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
     = map { [ (split / /), "" ] } @{$yaml->{"clients"}};

  my $members = $yaml->{"members"};
  my @member_peers = ();
  for my $u (sort keys %$members) {
    next if $members->{$u}->{"status"} ne "current";
    push @member_peers,
         map { [ (split / /), $u ] } @{$members->{$u}->{"clients"}};
  }

  my @all_peers = sort { $a->[1] <=> $b->[1] }
                       (@campus_peers, @member_peers);

  my $core_wg_addr = hostnum_to_ipaddr (2, $public_wg_net_cidr);
  my $extra_front_config = "
PostUp = resolvectl dns %i $core_addr
PostUp = resolvectl domain %i $domain_priv

# Core
[Peer]
PublicKey = $core_wg_pubkey
AllowedIPs = $core_wg_addr
AllowedIPs = $private_net_cidr
AllowedIPs = $wild_net_cidr
AllowedIPs = $campus_wg_net_cidr\n";

  write_wg_server ("private/front-wg0.conf", \@member_peers,
                   hostnum_to_ipaddr_cidr (1, $public_wg_net_cidr),
                   $public_wg_port, $extra_front_config);
  write_wg_server ("private/gate-wg0.conf", \@all_peers,
                   hostnum_to_ipaddr_cidr (1, $campus_wg_net_cidr),
                   $campus_wg_port, "\n");
}

sub write_wg_server ($$$$$) {
  my ($file, $peers, $addr_cidr, $port, $extra) = @_;
  my $O = new IO::File;
  open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
  print $O "[Interface]
Address = $addr_cidr
ListenPort = $port
PostUp = wg set %i private-key /etc/wireguard/private-key$extra";
  for my $p (@$peers) {
    my ($n, $h, $t, $k, $u) = @$p;
    next if $k =~ /^-/;
    my $ip = hostnum_to_ipaddr ($h, $addr_cidr);
    print $O "
# $n
[Peer]
PublicKey = $k
AllowedIPs = $ip\n";
  }
  close $O or die "Could not close $file.tmp: $!\n";
  rename ("$file.tmp", $file)
    or die "Could not rename $file.tmp: $!\n";
}

sub write_wg_client ($$$$$$) {
  my ($file, $addr, $type, $pubkey, $endpt, $server_addr) = @_;

  my $O = new IO::File;
  open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";

  my $DNS = ($type eq "android"
             ? "
DNS = $core_addr
Domain = $domain_priv"
             : "
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i $core_addr
PostUp = resolvectl domain %i $domain_priv");

  my $WILD = ($file eq "public.conf"
              ? "
AllowedIPs = $wild_net_cidr"
              : "");

  print $O "[Interface]
Address = $addr$DNS

[Peer]
PublicKey = $pubkey
EndPoint = $endpt
AllowedIPs = $server_addr
AllowedIPs = $private_net_cidr$WILD
AllowedIPs = $public_wg_net_cidr
AllowedIPs = $campus_wg_net_cidr\n";
  close $O or die "Could not close $file.tmp: $!\n";
  rename ("$file.tmp", $file)
    or die "Could not rename $file.tmp: $!\n";
}

sub hostnum_to_ipaddr ($$)
{
  my ($hostnum, $net_cidr) = @_;

  # Assume 24bit subnet, 8bit hostnum.
  # Find a Perl library for more generality?
  die "$hostnum: hostnum too large\n" if $hostnum > 255;
  my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
  die if !$prefix;
  return "$prefix.$hostnum";
}

sub hostnum_to_ipaddr_cidr ($$)
{
  my ($hostnum, $net_cidr) = @_;

  # Assume 24bit subnet, 8bit hostnum.
  # Find a Perl library for more generality?
  die "$hostnum: hostnum too large\n" if $hostnum > 255;
  my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
  die if !$prefix;
  return "$prefix.$hostnum/24";
}
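
For illustration only (these addresses are made up, not the institute's), the two subroutines behave as follows:

hostnum_to_ipaddr (4, "10.1.1.0/24")       # => "10.1.1.4"
hostnum_to_ipaddr_cidr (1, "10.1.1.0/24")  # => "10.1.1.1/24"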

12.12. Institute Command Help

This should be the last block tangled into the inst script. It catches any command lines that were not handled by a sub-command above.

inst

die "usage: $0 [CA|config|new|pass|old|client] ...\n";

13. Testing

The example files in this document, ansible.cfg and hosts as well as those in public/ and private/, along with the matching EasyRSA certificate authority and GnuPG key-ring in Secret/ (included in the distribution), can be used to configure three VirtualBox VMs simulating Core, Gate and Front in test networks simulating a private Ethernet, a wild (untrusted) Ethernet, the campus ISP, and a commercial cloud. With the test networks up and running, a simulated member's notebook can be created and alternately attached to the wild Ethernet (as though it were on the campus Wi-Fi) or the Internet (as though it were abroad). The administrator's notebook in this simulation is the VirtualBox host.

The next two sections list the steps taken to create the simulated Core, Gate and Front machines, and connect them to their networks. The process is similar to that described in The (Actual) Hardware, but is covered in detail here where the VirtualBox hypervisor can be assumed and exact command lines can be given (and copied during re-testing). The remaining sections describe the manual testing process, simulating an administrator adding and removing member accounts and devices, a member's desktop sending and receiving email, etc.

For more information on the VirtualBox Hypervisor, the User Manual can be found off-line in file:///usr/share/doc/virtualbox/UserManual.pdf. An HTML version of the latest revision can be found on the official web site at https://www.virtualbox.org/manual/UserManual.html.

13.1. The Test Networks

The networks used in the test:

public
A NAT Network, simulating the cloud provider's and campus ISP's networks. This is the only network with DHCP and DNS services provided by the hypervisor. It is not the default NAT network because gate and front need to communicate.
vboxnet0
A Host-only network, simulating the institute's private Ethernet switch. It has no services, no DHCP, just the host machine at 192.168.56.10 pretending to be the administrator's notebook.
vboxnet1
Another Host-only network, simulating the wild Ethernet between Gate and the campus IoT (and Wi-Fi APs). It has no services, no DHCP, just the host at 192.168.57.2.
vboxnet2
A third Host-only network, used only to directly connect the host to front.

In this simulation the IP address for front is not a public address but a private address on the NAT network public. Thus front is not accessible to the host, that is, to Ansible on the administrator's notebook. To work around this restriction, front gets a second network interface connected to the vboxnet2 network. The address of this second interface is used by Ansible to access front.4

The networks described above are created and "started" with the following VBoxManage commands.

VBoxManage natnetwork add --netname public \
                          --network 192.168.15.0/24 \
                          --enable --dhcp on --ipv6 off
VBoxManage natnetwork start --netname public
VBoxManage dhcpserver modify --network=public --lower-ip=192.168.15.5
VBoxManage hostonlyif create # vboxnet0
VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
VBoxManage hostonlyif create # vboxnet1
VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2
VBoxManage hostonlyif create # vboxnet2
VBoxManage hostonlyif ipconfig vboxnet2 --ip=192.168.58.1

Note that only the NAT network public should have a DHCP server enabled (to simulate an ISP and cloud for gate and front respectively). Yet front is statically assigned an IP address outside the DHCP server's pool. This ensures it gets front_addr without more server configuration.

Note also that actual ISPs and clouds will provide Gate and Front with public network addresses. In this simulation "they" provide addresses in 192.168.15.0/24, on the NAT network public.

13.2. The Test Machines

The virtual machines are created by VBoxManage command lines in the following sub-sections. They each start with a recent Debian release (e.g. debian-12.5.0-amd64-netinst.iso) in their simulated DVD drives. Preparation of The Hardware installed additional software packages and keys while the machines had Internet access. They were then moved to the new campus network where Ansible completed the configuration without Internet access.

Preparation of the test machines is automated by "preparatory scripts" that install the same "additional software packages" and the same test keys given in the examples. The scripts are run on each VM while they are still attached to the host's NAT network and have Internet access. They prepare the machine to reboot on the simulated campus network without Internet access, ready for final configuration by Ansible and the first launch of services. The "move to campus" is simulated by shutting each VM down, executing a VBoxManage command line or two, and restarting.

13.2.1. The Test WireGuard™ Keys

All of the private keys used in the example/test configuration are listed here. The first three are copied to /etc/wireguard/private-key on the servers: front, gate and core. The rest are installed on the test client to give it different personae. In actual use, private keys are generated on the servers and clients, and stay there. Only the public keys are collected (and registered with the ./inst client command).

Test Host    WireGuard™ Private Key
front        AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms=
gate         yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U=
core         AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI=
thing        KIwQT5eGOl9w1qOa5I+2xx5kJH3z4xdpmirS/eGdsXY=
dick         WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs=
dicks-phone  oG/Kou9HOBCBwHAZGypPA1cZWUL6nR6WoxBiXc/OQWQ=
dicks-razr   IGNcF0VpkIBcJQAcLZ9jgRmk0SYyUr/WwSNXZoXXUWQ=

13.2.2. Ansible Test Authorization

Part of each machine's preparation is to authorize password-less SSH connections from Ansible, which will be using the public key in Secret/ssh_admin/. This is common to all machines and so is provided here tagged with test-auth and used via noweb reference <<test-auth>> in each machine's preparatory script.

test-auth
( cd
  umask 077
  if [ ! -d .ssh ]; then mkdir .ssh; fi
  ( echo -n "ssh-rsa"
    echo -n " AAAAB3NzaC1yc2EAAAADAQABAAABgQDXxXnqFaUq3WAmmW/P8OMm3cf"
    echo -n "AGJoL1UC8yjbsRzt63RmusID2CvPTJfO/sbNAxDKHPBvYJqiwBY8Wh2V"
    echo -n "BDXoO2lWAK9JOSvXMZZRmBh7Yk6+NsPSbeZ6H3DgzdmKubs4E5XEdkmO"
    echo -n "iivyiGBWiwzDKAOqWvb60yWDDNEuHyGNznKjyL+nAOzul1hP5f23vX3e"
    echo -n "VhTxV0zdClksvIppGsYY3EvhMxasnjvGOhECz1Pq/9PPxakY1kBKMFj8"
    echo -n "yh75UfYJyRiUcFUVZD/dQyDMj7gtihv4ANiUAIgn94I4Gt9t8a2OiLyr"
    echo -n "KhJAwTQrs4CA+suY+3uDcp2FuQAvuzpa2moUufNetQn9YYCpCQaio8I3"
    echo -n "N9N5POqPGtNT/8Fv1wwWsl/T363NJma7lrtQXKgq52YYmaUNnHxPFqLP"
    echo -n "/9ELaAKbKrXTel0ew/LyVEO6QJ6fU7lE3LYMF5DngleOpuOHyQdIJKvS"
    echo -n "oCb7ilDuG8ekZd3ZEROhtyHlr7UcHrtmZMYjhlRc="
    echo " A Small Institute Administrator" ) \
  >>.ssh/authorized_keys )

13.2.3. A Test Machine

The following shell function contains most of the VBoxManage commands needed to create the test machines. The name of the machine is taken from the NAME shell variable and the quantity of RAM and disk space from the RAM and DISK variables. The function creates a DVD drive on each machine and loads it with a simulated CD of a recent Debian release. The path to the CD disk image (.iso file) is taken from the ISO shell variable.

function create_vm {
  VBoxManage createvm --name $NAME --ostype Debian_64 --register
  VBoxManage modifyvm $NAME --memory $RAM
  VBoxManage createhd --size $DISK \
                      --filename ~/VirtualBox\ VMs/$NAME/$NAME.vdi
  VBoxManage storagectl $NAME --name "SATA Controller" \
                        --add sata --controller IntelAHCI
  VBoxManage storageattach $NAME --storagectl "SATA Controller" \
                           --port 0 --device 0 --type hdd \
                           --medium ~/VirtualBox\ VMs/$NAME/$NAME.vdi

  VBoxManage storagectl $NAME --name "IDE Controller" --add ide
  VBoxManage storageattach $NAME --storagectl "IDE Controller" \
      --port 0 --device 0 --type dvddrive --medium $ISO
  VBoxManage modifyvm $NAME --boot1 dvd --boot2 disk
}

After this shell function creates a VM, its network interface is attached to the default NAT network, simulating the Internet connected network where actual hardware is prepared.

Here are the commands needed to create the test machine front with 512MiB of RAM and 4GiB of disk and the Debian 12.5.0 release in its CDROM drive.

NAME=front
RAM=512
DISK=4096
ISO=~/Downloads/debian-12.5.0-amd64-netinst.iso
create_vm

Soon after starting, the machine console shows the Debian GNU/Linux installer menu and the default "Graphical Install" is chosen. On the machines with only 512MB of RAM, front and gate, the installer switches to a text screen and warns it is using a "Low memory mode". The installation proceeds in English and its first prompt is for a location. The appropriate responses to this and subsequent prompts are given in the list below.

  • Select a language (unless in low memory mode!)
    • Language: English - English
  • Select your location
    • Continent or region: 9 (North America, if in low memory mode!)
    • Country, territory or area: 4 (United States)
  • Configure the keyboard
    • Keymap to use: 1 (American English)
  • Configure the network
    • Hostname: small (gate, core, etc.)
    • Domain name: example.org (small.private)
  • Set up users and passwords.
    • Root password: <blank>
    • Full name for the new user: System Administrator
    • Username for your account: sysadm
    • Choose a password for the new user: fubar
  • Configure the clock
    • Select your time zone: 3 (Mountain)
  • Partition disks
    • Partitioning method: 1 (Guided - use entire disk)
    • Select disk to partition: 1 (SCSI2 (0,0,0) (sda) - …)
    • Partitioning scheme: 1 (All files in one partition)
    • 12 (Finish partitioning and write changes to disk …)
    • Write the changes to disks? 1 (Yes)
  • Installing the base system
  • Configure the package manager
    • Scan extra installation media? 2 (No)
    • Debian archive mirror country: 62 (United States)
    • Debian archive mirror: 1 (deb.debian.org)
    • HTTP proxy information (blank for none): <localnet apt cache>
  • Configure popularity-contest
    • Participate in the package usage survey? No
  • Software selection
    • Choose software to install: SSH server, standard system utilities
  • Install the GRUB boot loader
    • Install the GRUB boot loader to your primary drive? Yes
    • Device for boot loader installation: /dev/sda (ata-VBOX…

After the reboot, the machine's console produces a login: prompt. The administrator logs in here, with username sysadm and password fubar, before continuing with the specific machine's preparation (below).

13.2.4. The Test Front Machine

The front machine is created with 512MiB of RAM, 4GiB of disk, and Debian 12.5.0 (recently downloaded) in its CDROM drive. The exact command lines were given in the previous section.

After Debian is installed (as detailed above) and the machine rebooted, the administrator copies the following script to the machine and executes it.

The script is copied through an intermediary: an account on the local network that is accessible both to the host and to guests on the host's NAT networks. If USER@SERVER is such an account, the script would be copied and executed thusly:

notebook$ scp private/test-front-prep USER@SERVER:
notebook$ scp -r Secret/ssh_front/ USER@SERVER:
sysadm@front$ scp USER@SERVER:test-front-prep ./
sysadm@front$ scp -r USER@SERVER:ssh_front/ ./
sysadm@front$ ./test-front-prep

The script starts by installing additional software packages. The wireguard package is installed so that /etc/wireguard/ is created. The systemd-resolved package is installed because a reboot seems the only way to get name service working afterwards. As front will always have Internet access in the cloud, the rest of the packages are installed just to shorten Ansible's work later.

private/test-front-prep
#!/bin/bash -e

sudo apt install wireguard systemd-resolved \
    unattended-upgrades postfix dovecot-imapd rsync apache2 kamailio

The Postfix installation prompts for a couple settings. The defaults, listed below, are fine.

  • General type of mail configuration: Internet Site
  • System mail name: small.example.org

The script can now install the private WireGuard™ key, as well as Ansible's public SSH key.

private/test-front-prep

( umask 377
  echo "AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms=" \
  | sudo tee /etc/wireguard/private-key >/dev/null )

<<test-auth>>

Next, the network interfaces are configured with static IP addresses. In actuality, Front gets no network configuration tweaks. The Debian 12 default is to broadcast for a DHCP lease on the primary NIC. This works in the cloud, which should respond with an offer, though it must offer the public, DNS-registered, hard-coded front_addr.

For testing purposes, the preparation of front replaces the default /etc/network/interfaces with a new configuration that statically assigns front_addr to the primary NIC and a testing subnet address to the second NIC.

private/test-front-prep

( cd /etc/network/; \
  [ -f interfaces~ ] || sudo mv interfaces interfaces~ )
cat <<EOF | sudo tee /etc/network/interfaces >/dev/null
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto enp0s3
iface enp0s3 inet static
    address 192.168.15.4/24
    gateway 192.168.15.1

# Testing interface
auto enp0s8
iface enp0s8 inet static
    address 192.168.58.3/24
EOF

Ansible expects front to use the SSH host keys in Secret/ssh_front/, so it is prepared with these keys in advance. (If Ansible installed them, front would change identities while Ansible was configuring it. Ansible would lose subsequent access until the administrator's ~/.ssh/known_hosts was updated!)

private/test-front-prep

( cd ssh_front/etc/ssh/
  chmod 600 ssh_host_*
  chmod 644 ssh_host_*.pub
  sudo cp -b ssh_host_* /etc/ssh/ )

With the preparatory script successfully executed, front is shut down and moved to the simulated cloud (from the default NAT network).

The following VBoxManage commands effect the move, connecting the primary NIC to public and a second NIC to the host-only network vboxnet2 (making it directly accessible to the administrator's notebook as described in The Test Networks).

VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 public
VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet2

front is now prepared for configuration by Ansible.

13.2.5. The Test Gate Machine

The gate machine is created with the same amount of RAM and disk as front. Assuming the RAM, DISK, and ISO shell variables have not changed, gate can be created with one command.

NAME=gate create_vm

After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator copies the following script to the machine and executes it.

notebook$ scp private/test-gate-prep USER@SERVER:
sysadm@gate$ scp USER@SERVER:test-gate-prep ./
sysadm@gate$ ./test-gate-prep

The script starts by installing additional software packages.

private/test-gate-prep
#!/bin/bash -e

sudo apt install wireguard systemd-resolved unattended-upgrades \
                 postfix ufw lm-sensors nagios-nrpe-server

The Postfix installation prompts for a couple settings. The defaults, listed below, are fine.

  • General type of mail configuration: Internet Site
  • System mail name: gate.small.private

The script then installs the private WireGuard™ key, as well as Ansible's public SSH key.

private/test-gate-prep
( umask 377
  echo "yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U=" \
  | sudo tee /etc/wireguard/private-key >/dev/null )

<<test-auth>>

Next, the script configures the primary NIC with 10-lan.link and 10-lan.network files installed in /etc/systemd/network/. (This is sufficient to allow remote access by Ansible.)

private/test-gate-prep

cat <<EOD | sudo tee /etc/systemd/network/10-lan.link >/dev/null
[Match]
MACAddress=08:00:27:f3:16:79

[Link]
Name=lan
EOD

cat <<EOD | sudo tee /etc/systemd/network/10-lan.network >/dev/null
[Match]
MACAddress=08:00:27:f3:16:79

[Network]
Address=192.168.56.2/24
DNS=192.168.56.1
Domains=small.private
EOD

sudo systemctl --quiet enable systemd-networkd

With the preparatory script successfully executed, gate is shut down and moved to the campus network (from the default NAT network).

The following VBoxManage commands effect the move, connecting the primary NIC to vboxnet0 and creating two new interfaces, isp and wild. These are connected to the simulated ISP and the simulated wild Ethernet (e.g. campus wireless access points, IoT, whatnot).

VBoxManage modifyvm gate --mac-address1=080027f31679
VBoxManage modifyvm gate --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm gate --mac-address2=0800273d42e5
VBoxManage modifyvm gate --nic2 natnetwork --natnetwork2 public
VBoxManage modifyvm gate --mac-address3=0800274aded2
VBoxManage modifyvm gate --nic3 hostonly --hostonlyadapter3 vboxnet1

The MAC addresses above were specified so they match the example values of the MAC address variables in this table.

device  network   simulating       MAC address variable
enp0s3  vboxnet0  campus Ethernet  gate_lan_mac
enp0s8  public    campus ISP       gate_isp_mac
enp0s9  vboxnet1  campus IoT       gate_wild_mac

gate is now prepared for configuration by Ansible.

13.2.6. The Test Core Machine

The core machine is created with 2GiB of RAM and 6GiB of disk. Assuming the ISO shell variable has not changed, core can be created with the following command.

NAME=core RAM=2048 DISK=6144 create_vm

After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator copies the following script to the machine and executes it.

notebook$ scp private/test-core-prep USER@SERVER:
sysadm@core$ scp USER@SERVER:test-core-prep ./
sysadm@core$ ./test-core-prep

The script starts by installing additional software packages.

private/test-core-prep
#!/bin/bash -e

sudo apt install wireguard systemd-resolved unattended-upgrades \
                 chrony isc-dhcp-server bind9 apache2 postfix \
                 dovecot-imapd fetchmail rsync gnupg \
                 mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
                 php-{json,mysql,mbstring,intl,imagick,xml,zip} \
                 imagemagick libapache2-mod-php \
                 nagios4 monitoring-plugins-basic lm-sensors \
                 nagios-nrpe-plugin

The Postfix installation prompts for a couple settings. The defaults, listed below, are fine.

  • General type of mail configuration: Internet Site
  • System mail name: core.small.private

The script can now install the private WireGuard™ key, as well as Ansible's public SSH key.

private/test-core-prep
( umask 377
  echo "AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI=" \
  | sudo tee /etc/wireguard/private-key >/dev/null )

<<test-auth>>

Next, the script configures the primary NIC with 10-lan.link and 10-lan.network files installed in /etc/systemd/network/.

private/test-core-prep

cat <<EOD | sudo tee /etc/systemd/network/10-lan.link >/dev/null
[Match]
MACAddress=08:00:27:b3:e5:5f

[Link]
Name=lan
EOD

cat <<EOD | sudo tee /etc/systemd/network/10-lan.network >/dev/null
[Match]
MACAddress=08:00:27:b3:e5:5f

[Network]
Address=192.168.56.1/24
Gateway=192.168.56.2
DNS=192.168.56.1
Domains=small.private
EOD

sudo systemctl --quiet enable systemd-networkd

With the preparatory script successfully executed, core is shut down and moved to the campus network (from the default NAT network).

The following VBoxManage commands effect the move, connecting the primary NIC to vboxnet0.

VBoxManage modifyvm core --mac-address1=080027b3e55f
VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0

core is now prepared for configuration by Ansible.

13.3. Configure Test Machines

At this point the three test machines core, gate, and front are running fresh Debian systems with select additional packages, on their final networks, with a privileged account named sysadm that authorizes password-less access from the administrator's notebook, ready to be configured by Ansible. However the administrator's notebook may not recognize the test VMs or, worse yet, remember different public keys for them (from previous test machines). For this reason, the administrator executes the following commands before the initial ./inst config.

ssh sysadm@192.168.56.1 date
ssh sysadm@192.168.56.2 date
ssh sysadm@192.168.58.3 date
./inst config
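
If a previous round of testing left stale host keys in the notebook's ~/.ssh/known_hosts, they can be removed first with ssh-keygen (a common remedy, not part of the original procedure):

ssh-keygen -R 192.168.56.1
ssh-keygen -R 192.168.56.2
ssh-keygen -R 192.168.58.3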

Note that this initial run should exercise all of the handlers, and that subsequent runs probably do not.

Presumably the ./inst config command completed successfully, but before testing begins, gate is restarted. Basic networking tests will fail unless the interfaces on gate are renamed, and nothing less than a restart will get systemd-udevd to rename the isp and wild interfaces.

13.4. Test Basics

At this point the test institute is just core, gate and front, no other campus servers, no members nor their VPN client devices. On each machine, Systemd should assess the system's state as running with 0 failed units.

systemctl status

gate and thus core should be able to reach the Internet and front. If core can reach the Internet and front, then gate is forwarding (and NATing). On core (and gate):

ping -c 1 8.8.4.4      # dns.google
ping -c 1 192.168.15.4 # front_addr

gate and thus core should be able to resolve internal and public domain names. (Front does not use the institute's internal domain names yet.) On core (and gate):

host dns.google
host core.small.private
host www

The last resort email address, root, should deliver to the administrator's account. On core, gate and front:

/sbin/sendmail root
Testing email to root.
.

Two messages, from core and gate, should appear in /home/sysadm/Maildir/new/ on core in just a couple seconds. The message from front should be delivered to the same directory but on front. While members' emails are automatically fetched (with fetchmail(1)) to core, the system administrator is expected to fetch system emails directly to their desktop (and to give them instant attention).

13.5. The Test Nextcloud

Further tests involve Nextcloud account management. Nextcloud is installed on core as described in Install Nextcloud. Once /Nextcloud/ is created, ./inst config core will validate or update its configuration files.

The administrator will need a desktop system in the test campus networks (using the campus name server). The test Nextcloud configuration requires that it be accessed with the domain name core.small.private. The following sections describe how a client desktop is simulated and connected to the test VPNs (and test campus name server). Its browser can then connect to core.small.private to exercise the test Nextcloud.

The process starts with enrolling the first member of the institute using the ./inst new command and registering a client's public key with the ./inst client command.

13.6. Test New Command

A member must be enrolled so that a member's client machine can be authorized and then test the VPNs, Nextcloud, and the web sites. The first member enrolled in the simulated institute is New Hampshire innkeeper Dick Loudon. Mr. Loudon's accounts on institute servers are named dick, as is his notebook.

./inst new dick

Take note of Dick's initial password.

13.7. The Test Member Notebook

A test member's notebook is created next, much like the servers, except with more memory and disk space (2GiB and 8GiB) and a desktop. This machine is not configured by Ansible. Rather, its WireGuard™ tunnels and web browser test the VPN configurations on gate and front, and the Nextcloud installation on core.

NAME=dick
RAM=2048
DISK=8192
create_vm
VBoxManage modifyvm $NAME --macaddress1 080027dc54b5
VBoxManage modifyvm $NAME --nic1 hostonly --hostonlyadapter1 vboxnet1

Dick's notebook, dick, is initially connected to the host-only network vboxnet1 as though it were the campus wireless access point. It simulates a member's notebook on campus, connected to (NATed behind) the access point.

Debian is installed much as detailed in A Test Machine except that the SSH server option is not needed and the GNOME desktop option is. When the machine reboots, the administrator logs into the desktop and installs a couple additional software packages (which require several more).

sudo apt install systemd-resolved \
                 wireguard nextcloud-desktop evolution

13.8. Test Client Command

The ./inst client command is used to register the public key of a client wishing to connect to the institute's VPNs. In this test, new member Dick wants to connect his notebook, dick, to the institute VPNs. First he generates a pair of WireGuard™ keys by running the following commands on his notebook.

( umask 077; wg genkey \
  | sudo tee /etc/wireguard/private-key ) | wg pubkey

Dick then sends the resulting public key to the administrator, who runs the following command.

./inst client debian dick dick \
  4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=

The command generates campus.conf and public.conf configuration files, which the administrator sends, openly (e.g. in email) to Dick. Dick then installs the configuration files in /etc/wireguard/ and creates the campus interface.

sudo cp {campus,public}.conf /etc/wireguard/
sudo wg-quick up campus
sudo systemctl enable wg-quick@campus

13.9. Test Campus WireGuard™ Subnet

A few basic tests are then performed in a terminal.

systemctl status
ping -c 1 8.8.8.8      # dns.google
ping -c 1 192.168.56.1 # core
host dns.google
host core.small.private
host www

13.10. Test Web Pages

Next, the administrator copies Backup/WWW/ (included in the distribution) to /WWW/ on core and sets the file permissions appropriately.

sudo chown -R monkey:staff /WWW/campus /WWW/live /WWW/test
sudo chmod 02775 /WWW/*
sudo chmod 664 /WWW/*/index.html
sudo -u monkey /usr/local/sbin/webupdate

The administrator then uses Firefox on dick to fetch the following URLs. They should all succeed and the content should be a simple sentence identifying the source file.

  • http://www/
  • http://www.small.private/
  • http://live/
  • http://live.small.private/
  • http://test/
  • http://test.small.private/

The first will probably be flagged as unverifiable, signed by an unknown issuer, etc. Otherwise, each should be accessible, displaying a short description of the website that was being simulated.

The simulated public web site at http://192.168.15.4/ is also tested. It should redirect to https://small.example.org/, which does not exist. However, the web site at https://192.168.15.4/ (with httpS) should exist and produce a legible page (after the usual warnings).

Next the administrator modifies /WWW/live/index.html on core and waits 15 minutes for the edit to appear in the web page at https://192.168.15.4/ (and in the file /home/www/index.html on front). The same is done to /home/www/index.html on front and the edit observed immediately, and its correction within 15 minutes.

13.11. Test Nextcloud

Using the browser on the simulated member notebook, the Nextcloud installation on core can be completed. The following steps are performed on dick's desktop.

  • Get http://core/nextcloud/. The attempt produces a warning about using Nextcloud via an untrusted name.
  • Get https://core.small.private/nextcloud/. Receive a login page.
  • Login as sysadm with password fubar.
  • Examine the security & setup warnings in the Settings > Administration > Overview web page. A few minor warnings are expected.
  • Download and enable Calendar and Contacts in the Apps > Featured web page.
  • Logout and login as dick with Dick's initial password (noted above).
  • Use the Nextcloud app to sync ~/Nextcloud/ with the cloud. In the Nextcloud Desktop app's Connection Wizard (the initial dialog), login with the URL https://core.small.private/nextcloud. The web browser should pop up with a new tab: "Connect to your account". Press "Log in" and "Grant access". The Nextcloud Connection Wizard then prompts for sync parameters. The defaults are fine. Presumably the Local Folder is /home/sysadm/Nextcloud/.
  • Drop a file in ~/Nextcloud/, then find it in the Nextcloud Files web page.
  • Create a Mail account in Evolution. This step does not involve Nextcloud, but placates Evolution's Welcome Wizard, and follows in the steps of the newly institutionalized luser. CardDAV and CalDAV accounts can be created in Evolution later.

    The account's full name is Dick Loudon and its email address is dick@small.example.org. The Receiving Email Server Type is IMAP, its name is mail.small.private and it uses the IMAPS port (993). The Username on the server is dick. The encryption method is TLS on a dedicated port. Authentication is by password. The Receiving Option defaults are fine. The Sending Email Server Type is SMTP with the name smtp.small.private using the default SMTP port (25). It requires neither authentication nor encryption.

    At some point Evolution will find that the server certificate is self-signed and unknown. It must be accepted (permanently).

  • Create a CardDAV account in Evolution. Choose Edit, Accounts, Add, Address Book, Type CardDAV, name Small Institute, and user dick. The URL starts with https://core.small.private/nextcloud/ and ends with remote.php/dav/addressbooks/users/dick/contacts/ (yeah, 88 characters!). Create a contact in the new address book and see it in the Contacts web page. At some point Evolution will need Dick's password to access the address book.
  • Create a CalDAV account in Evolution just like the CardDAV account except add a Calendar account of Type CalDAV with a URL that ends remote.php/dav/calendars/dick/personal/ (only 79 characters). Create an event in the new calendar and see it in the Calendar web page. At some point Evolution will need Dick's password to access the calendar.
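
Before (or while) wrestling with Evolution's account dialogs, the full DAV URLs above can be sanity-checked from a terminal on dick (a rough check, not part of the documented procedure; curl prompts for Dick's password, and a 207 status indicates the endpoint exists):

# -k tolerates the institute's self-signed certificate.
curl -k -u dick -X PROPFIND -H "Depth: 0" -s -o /dev/null -w '%{http_code}\n' \
  https://core.small.private/nextcloud/remote.php/dav/addressbooks/users/dick/contacts/
curl -k -u dick -X PROPFIND -H "Depth: 0" -s -o /dev/null -w '%{http_code}\n' \
  https://core.small.private/nextcloud/remote.php/dav/calendars/dick/personal/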

13.12. Test Email

With Evolution running on the member notebook dick, one-second email delivery can be demonstrated. The administrator runs the following commands on front

/sbin/sendmail dick
Subject: Hello, Dick.

How are you?
.

and sees a notification on dick's desktop in a second or less.

Outgoing email is also tested. A message to sysadm@small.example.org should be delivered to /home/sysadm/Maildir/new/ on front just as fast.
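
The outgoing delivery can be confirmed with a quick look at sysadm's Maildir on front (a sketch, run as sysadm on front):

# Show the newest message.
ls -t ~/Maildir/new/ | head -1
head -5 ~/Maildir/new/"$(ls -t ~/Maildir/new/ | head -1)"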

13.13. Test Public VPN

At this point, dick can move abroad, from the campus Wi-Fi (host-only network vboxnet1) to the broader Internet (the NAT network public). The following command makes the change. The machine does not need to be shut down if the GUI is used to change its NIC.

VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 public

Then the campus VPN is disconnected and the public VPN connected.

sudo systemctl stop wg-quick@campus
sudo systemctl start wg-quick@public
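
A quick way to confirm that the new tunnel is handshaking with front (not part of the documented procedure):

sudo wg show public
# Expect a "latest handshake" line no more than a couple of minutes old.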

Again, some basics are tested in a terminal.

ping -c 1 8.8.8.8      # dns.google
ping -c 1 192.168.56.1 # core
host dns.google
host core.small.private
host www

And, again, these web pages are fetched with a browser.

  • http://www/
  • http://www.small.private/
  • http://live/
  • http://live.small.private/
  • http://test/
  • http://test.small.private/
  • http://192.168.15.4/
  • https://192.168.15.4/
  • http://core.small.private/nextcloud/

The Nextcloud web pages should still load and remain editable, and Evolution should still be able to edit messages, contacts, and calendar events.

13.14. Test Pass Command

To test the ./inst pass command, the administrator logs into core and runs passwd as dick. A random password is entered, more obscure than fubar (else Nextcloud will reject it!).

The administrator then finds the password change request message in the most recent file in /home/sysadm/Maildir/new/ and pipes it to the ./inst pass command. The administrator might do that by copying the message to a more conveniently named temporary file on core, e.g. ~/msg, copying that to the current directory on the administrator's notebook, and feeding it to ./inst pass on standard input.

On core, logged in as sysadm:

sudo -u dick passwd
( cd ~/Maildir/new/
  cp `ls -1t | head -1` ~/msg )
grep Subject: ~/msg

To ensure that the most recent message is indeed the password change request, the last command should find the line Subject: New password.. Then on the administrator's notebook:

scp sysadm@192.168.56.1:msg ./
./inst pass < msg

The last command should complete without error.

Finally, the administrator verifies that dick can login on core, front and Nextcloud with the new password.

13.15. Test Old Command

One more institute command is left to exercise. The administrator retires the member dick along with his main device, the notebook also named dick.

./inst old dick

The administrator tests Dick's access to core, front and Nextcloud, and attempts to access the campus VPN. All of these should fail.
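
A couple of those checks can be run from a terminal on dick, which is still on the simulated Internet (a sketch; both should now fail):

ssh dick@192.168.15.4 true   # front: expect the login to be refused
sudo wg show public          # no fresh handshake once Dick's key is removed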

14. Future Work

The small institute's network, as currently defined in this document, is lacking in a number of respects.

14.1. Deficiencies

The current network monitoring is rudimentary. It could use some love, like intrusion detection via Snort or similar. Services on Front are not monitored except that the webupdate script should be emailing sysadm whenever it cannot update Front (every 15 minutes!).

Pro-active monitoring might include notifying root of any vandalism corrected by Monkey's quarter-hourly web update. This is a non-trivial task that must ignore intentional changes.

Monkey's cron jobs on Core should be systemd.timer and .service units.
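
For the quarter-hourly web update, such units might look like the following (a sketch; the unit file names are hypothetical, and the Ansible tasks that would install them are not written):

# /etc/systemd/system/webupdate.service
[Unit]
Description=Update Front's copy of the live web site.

[Service]
Type=oneshot
User=monkey
ExecStart=/usr/local/sbin/webupdate

# /etc/systemd/system/webupdate.timer
[Unit]
Description=Run webupdate every 15 minutes.

[Timer]
OnCalendar=*:0/15

[Install]
WantedBy=timers.target

The timer would then be enabled with sudo systemctl enable --now webupdate.timer.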

The institute's reverse domains (e.g. 86.177.10.in-addr.arpa) are not available on Front, yet.

14.2. More Tests

The testing process described in the previous chapter is far from complete. Additional tests are needed.

14.2.1. Backup

The backup command has not been tested. It needs an encrypted partition with which to sync? And then some way to compare that to Backup/?
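
An improvised target might be a LUKS-formatted loopback file (a sketch only; how ./backup would be pointed at the mount, and whether the result lands in a Backup/ tree, are assumptions):

# Create and open a small encrypted volume backed by a file.
dd if=/dev/zero of=backup.img bs=1M count=1024
sudo cryptsetup luksFormat backup.img
sudo cryptsetup open backup.img testbackup
sudo mkfs.ext4 /dev/mapper/testbackup
sudo mkdir -p /mnt/backup
sudo mount /dev/mapper/testbackup /mnt/backup
# ... run the backup against /mnt/backup, then compare:
sudo diff -r /mnt/backup/Backup/ Backup/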

14.2.2. Restore

The restore process has not been tested. It might just copy Backup/ to core:/, but then it probably needs to fix up file ownerships, perhaps permissions too. It could also use an example Backup/Nextcloud/20220622.bak.

14.2.3. Campus Disconnect

Email access (IMAPS) on front is… difficult to test unless core's fetchmails are disconnected, i.e. the whole campus is disconnected, so that new email stays on front long enough to be seen.

  • Disconnect gate's NIC #2 (see the sketch after this list).
  • Send email to dick@small.example.org.
  • Find it in /home/dick/Maildir/new/.
  • Re-configure Evolution on dick. Edit the dick@small.example.org mail account (or create a new one?) so that the Receiving Email Server name is 192.168.15.4, not mail.small.private. The latter name will not resolve while the campus is disconnected. In actual use (with Front, not front), the institute domain name could be used.
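
In the VirtualBox simulation, the first step (and its later undoing) can be done without shutting gate down (a sketch):

VBoxManage controlvm gate setlinkstate2 off
# ...and to reconnect the campus afterwards:
VBoxManage controlvm gate setlinkstate2 on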

15. Appendix: The Bootstrap

Creating the private network from whole cloth (machines with recent standard distributions installed) is not straightforward.

Standard distributions do not include all of the necessary server software, esp. isc-dhcp-server and bind9 for critical localnet services. These are typically downloaded from the Internet.

To access the Internet Core needs a default route to Gate, Gate needs to forward with NAT to an ISP, Core needs to query the ISP for names, etc.: quite a bit of temporary, manual localnet configuration just to get to the additional packages.

15.1. The Current Strategy

The strategy pursued in The Hardware is two-phase: prepare the servers on the Internet where additional packages are accessible, then connect them to the campus facilities (the private Ethernet switch, Wi-Fi AP, ISP), manually configure IP addresses (while the DHCP client silently fails), and avoid names until BIND9 is configured.

15.2. Starting With Gate

The strategy of Starting With Gate concentrates on configuring Gate's connection to the campus ISP in the hope of allowing all of the machines to download additional packages. This seems to require manual configuration of Core or a standard rendezvous.

  • Connect Gate to ISP, e.g. apartment WAN via Wi-Fi or Ethernet.
  • Connect Gate to private Ethernet switch.

    sudo ip address add GATE dev ISPDEV
    
  • Configure Gate to NAT from private Ethernet (see the nftables sketch after this list).
  • Configure Gate to serve DHCP on Ethernet, temporarily!

    • Push default route through Gate, DNS from 8.8.8.8.

    Or statically configure Core with address, route, and name server.

    sudo ip address add CORE dev PRIVETH
    sudo ip route add default via GATE
    sudo sh -c 'echo "nameserver 8.8.8.8" >/etc/resolv.conf'
    
  • Configure admin's notebook similarly?
  • Test remote access from administrator's notebook.
  • Finally, configure Gate and Core.

    ansible-playbook -l gate site.yml
    ansible-playbook -l core site.yml
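
The NAT step above might be improvised with nftables until Ansible takes over (a sketch; ISPDEV is the same placeholder used above for Gate's ISP-facing interface):

sudo sysctl -w net.ipv4.ip_forward=1
sudo nft add table ip nat
sudo nft 'add chain ip nat postrouting { type nat hook postrouting priority 100 ; }'
sudo nft add rule ip nat postrouting oifname ISPDEV masquerade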
    

15.3. Pre-provision With Ansible

A refinement of the current strategy might avoid the need to maintain (and test!) lists of "additional" packages. With Core and Gate and the admin's notebook all together on a café Wi-Fi, Ansible might be configured (e.g. tasks tagged) to just install the necessary packages. The administrator would put Core's and Gate's localnet IP addresses in Ansible's inventory file, then run just the Ansible tasks tagged base-install, leaving the new services in a decent (secure, innocuous, disabled) default state.

ansible-playbook -l core -t base-install site.yml
ansible-playbook -l gate -t base-install site.yml
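
The tagging might look like this in one of the roles' task files (a hypothetical snippet; the real task names and package lists are defined elsewhere in this document):

- name: Install packages.
  become: yes
  apt:
    pkg: [ isc-dhcp-server, bind9 ]
  tags: base-install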

Footnotes:

1

The recommended private top-level domains are listed in "Appendix G. Private DNS Namespaces" of RFC6762 (Multicast DNS). https://www.rfc-editor.org/rfc/rfc6762#appendix-G

2

The cipher set specified by Let's Encrypt is large enough to turn orange many parts of an SSL Report from Qualys SSL Labs.

3

Presumably, eventually, a former member's home directories are archived to external storage, their other files are given new ownerships, and their Unix accounts are deleted. This has never been done, and is left as a manual exercise.

4

Front is accessible via Gate but routing from the host address on vboxnet0 through Gate requires extensive interference with the routes on Front and Gate, making the simulation less… similar.

Author: Matt Birkholz

Created: 2025-11-23 Sun 15:25
