From: Matt Birkholz Date: Sun, 1 Jun 2025 04:28:10 +0000 (-0600) Subject: Update README.html. X-Git-Url: https://birchwood-abbey.net/git?a=commitdiff_plain;h=a5c258b434ea345d09777656bd4f485594d9142e;p=Institute Update README.html. --- diff --git a/README.html b/README.html index b92892e..6d8b648 100644 --- a/README.html +++ b/README.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + A Small Institute @@ -24,8 +24,8 @@ an expendable public face (easily wiped clean) while maintaining a secure and private campus that can function with or without the Internet.

+
+

1. Overview

This small institute has a public server on the Internet, Front, that @@ -48,7 +48,7 @@ connects to Front making the institute email, cloud, etc. available to members off campus.

-
+
                 =                                                   
               _|||_                                                 
         =-The-Institute-=                                           
@@ -95,8 +95,8 @@ uses OpenPGP encryption to secure message content.
 

+
+

2. Caveats

This small institute prizes its privacy, so there is little or no @@ -144,8 +144,8 @@ month) because of this assumption.

+
+

3. The Services

The small institute's network is designed to provide a number of @@ -157,8 +157,8 @@ policies. On first reading, those subsections should be skipped; they reference particulars first introduced in the following chapter.

+
+

3.1. The Name Service

The institute has a public domain, e.g. small.example.org, and a @@ -172,8 +172,8 @@ names like core.

+
+

3.2. The Email Service

Front provides the public SMTP (Simple Mail Transfer Protocol) service @@ -231,8 +231,8 @@ sign outgoing emails per DKIM (Domain Keys Identified Mail) yet.

+Example Small Institute SPF Record
TXT    v=spf1 ip4:159.65.75.60 -all
+

@@ -247,8 +247,8 @@ setting for the maximum message size is given in a code block labeled configurations wherever <<postfix-message-size>> appears.

+
+

3.2.1. The Postfix Configurations

The institute aims to accommodate encrypted email containing short @@ -263,8 +263,8 @@ handle maxi-messages.

+postfix-message-size
- { p: message_size_limit, v: 104857600 }
+

@@ -278,10 +278,10 @@ re-sending the bounce (or just grabbing the go-bag!).

+postfix-queue-times
- { p: delay_warning_time, v: 1h }
 - { p: maximal_queue_lifetime, v: 4h }
 - { p: bounce_queue_lifetime, v: 4h }
-
+

@@ -292,9 +292,9 @@ disables relaying (other than for the local networks).

+postfix-relaying
- p: smtpd_relay_restrictions
   v: permit_mynetworks reject_unauth_destination
-
+

@@ -304,8 +304,8 @@ effect.

+postfix-maildir
- { p: home_mailbox, v: Maildir/ }
+

@@ -315,8 +315,8 @@ in the respective roles below.

+
+

3.2.2. The Dovecot Configurations

The Dovecot settings on both Front and Core disable POP and require @@ -330,9 +330,9 @@ The official documentation for Dovecot once was a Wiki but now is

+dovecot-tls
protocols = imap
 ssl = required
-
+

@@ -342,12 +342,12 @@ configuration keeps them from even listening at the IMAP port

+dovecot-ports
service imap-login {
   inet_listener imap {
     port = 0
   }
 }
-
+

@@ -356,8 +356,8 @@ directories.

+dovecot-maildir
mail_location = maildir:~/Maildir
+

@@ -368,15 +368,15 @@ common settings with host specific settings for ssl_cert and

+
+

3.3. The Web Services

Front provides the public HTTP service that serves institute web pages at e.g. https://small.example.org/. The small institute initially runs with a self-signed, "snake oil" server certificate, causing browsers to warn of possible fraud, but this certificate is easily replaced by one signed by a recognized authority, as discussed in The Front Role.

@@ -420,7 +420,7 @@ tree. Changes here are merged into the live tree, /WWW/live/, once they are complete and tested.
http://core/
is the Debian default site. The institute does not munge this site, to avoid conflicts with Debian-packaged web -services (e.g. Nextcloud, Zoneminder, MythTV's MythWeb).
+services (e.g. Nextcloud, AgentDVR, MythTV's MythWeb).

@@ -431,15 +431,15 @@ will automatically wipe it within 15 minutes.

+
+

3.4. The Cloud Service

Core runs Nextcloud to provide a private institute cloud at https://core.small.private/nextcloud/. It is managed manually per The Nextcloud Server Administration Guide. The code and data, including especially database dumps, are stored in /Nextcloud/ which is included in Core's backup procedure as described in Backups. The default Apache2 configuration expects to find the web scripts in /var/www/nextcloud/, so the institute symbolically links this to /Nextcloud/nextcloud/. @@ -453,22 +453,22 @@ private network.
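The layout described above can be rehearsed without root in a scratch prefix; this is only a sketch of the documented arrangement, with a temporary directory standing in for / (the real paths are /Nextcloud/nextcloud and /var/www/nextcloud).

```shell
# Rehearse the documented layout: web scripts live under /Nextcloud/
# (which is backed up), and /var/www/nextcloud is a symbolic link
# into that tree.  A mktemp prefix stands in for / here.
root=$(mktemp -d)
mkdir -p "$root/Nextcloud/nextcloud" "$root/var/www"
ln -s "$root/Nextcloud/nextcloud" "$root/var/www/nextcloud"
readlink "$root/var/www/nextcloud"
```

The Apache-visible path then resolves into the backed-up /Nextcloud/ tree, so database dumps and web scripts travel together in one backup.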

+
+

3.5. The VPN Services

The institute's public and campus VPNs have many common configuration options that are discussed here. These are included, with example certificates and network addresses, in the complete server configurations of The Front Role and The Gate Role, as well as the matching client configurations in The Core Role and the .ovpn files generated by The Client Command. The configurations are based on the documentation for OpenVPN v2.4: the openvpn(8) manual page and this web page.

+
+

3.5.1. The VPN Configuration Options

The institute VPNs use UDP on a subnet topology (rather than @@ -480,11 +480,11 @@ the VPN subnets using any (experimental) protocol.

+openvpn-dev-mode
dev-type tun
 dev ovpn
 topology subnet
 client-to-client
-
+

@@ -495,20 +495,20 @@ interruptions.

+openvpn-keepalive
keepalive 10 120
+

As mentioned in The Name Service, the institute uses a campus name server. OpenVPN is instructed to push its address and the campus search domain.

+openvpn-dns
push "dhcp-option DOMAIN {{ domain_priv }}"
 push "dhcp-option DNS {{ core_addr }}"
-
+

@@ -519,11 +519,11 @@ device nor the key files.

+openvpn-drop-priv
user nobody
 group nogroup
 persist-key
 persist-tun
-
+

@@ -535,9 +535,9 @@ the default for OpenVPN v2.4, and auth is upped to SHA256

+openvpn-crypt
cipher AES-256-GCM
 auth SHA256
-
+

@@ -546,8 +546,8 @@ accommodating a few members with a handful of devices each.

+openvpn-max
max-clients 20
+

@@ -560,23 +560,23 @@ raised from the default level 1 to level 3 (just short of a deluge).

+openvpn-debug
ifconfig-pool-persist ipp.txt
 status openvpn-status.log
 verb 3
-
+
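Taken together, the server options above imply a matching client configuration. The sketch below shows what a client-side .ovpn consistent with these options might contain; the remote hostname, port, and the idea that certificates are inlined are assumptions for illustration only (actual files come from The Client Command).

```
client
dev tun
proto udp
remote front.small.example.org 1194   # hypothetical public name/port
resolv-retry infinite
nobind
user nobody
group nogroup
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
auth SHA256
verb 3
```

The cipher and auth lines must match the server's openvpn-crypt settings above, and remote-cert-tls server guards against a stolen client key being used to impersonate the server.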
+
+

3.6. Accounts

A small institute has just a handful of members. For simplicity (and thus security) static configuration files are preferred over complex account management systems, LDAP, Active Directory, and the like. The Ansible scripts configure the same set of user accounts on Core and Front. The Institute Commands (e.g. ./inst new dick) capture the processes of enrolling, modifying and retiring members of the institute. They update the administrator's membership roll, and run Ansible to create (and disable) accounts on Core, Front, Nextcloud, @@ -591,8 +591,8 @@ accomplished via the campus cloud and the resulting desktop files can all be private (readable and writable only by the owner) by default.

+
+

3.6.1. The Administration Accounts

The institute avoids the use of the root account (uid 0) because @@ -601,21 +601,21 @@ command is used to consciously (conscientiously!) run specific scripts and programs as root. When installation of a Debian OS leaves the host with no user accounts, just the root account, the next step is to create a system administrator's account named sysadm and to give it permission to use the sudo command (e.g. as described in The Front Machine). When installation prompts for the name of an initial, privileged user account the same name is given (e.g. as described in The Core Machine). Installation may not prompt and still create an initial user account with a distribution specific name (e.g. pi). Any name can be used as long as it is provided as the value of ansible_user in hosts. Its password is specified by a vault-encrypted variable in the Secret/become.yml file. (The hosts and Secret/become.yml files are described in The Ansible Configuration.)

+
+

3.6.2. The Monkey Accounts

The institute's Core uses a special account named monkey to run @@ -626,8 +626,8 @@ account is created on Front as well.

+
+

3.7. Keys

The institute keeps its "master secrets" in an encrypted @@ -714,7 +714,6 @@ rsync -a Secret/ Secret2/ rsync -a Secret/ Secret3/ -

This is out of consideration for the fragility of USB drives, and the importance of a certain SSH private key, without which the @@ -723,8 +722,8 @@ the administrator's password keep, to install a new SSH key.

+
+

3.8. Backups

The small institute backs up its data, but not so much so that nothing @@ -755,12 +754,12 @@ version 2. Given the -n flag, the script does a "pre-sync" which does not pause Nextcloud nor dump its DB. A pre-sync gets the big file (video) copies done while Nextcloud continues to run. A follow-up sudo backup, without -n, produces the complete copy (with all the files mentioned in the Nextcloud database dump).

+private/backup
#!/bin/bash -e
 #
 # DO NOT EDIT.  Maintained (will be replaced) by Ansible.
 #
@@ -798,10 +797,8 @@ files mentioned in the Nextcloud database dump).
         echo "Mounting /backup/."
         cryptsetup luksOpen /dev/disk/by-partlabel/Backup backup
         mount /dev/mapper/backup /backup
-        mounted=indeed
     else
         echo "Found /backup/ already mounted."
-        mounted=
     fi
 
     if [ ! -d /backup/home ]
@@ -813,17 +810,20 @@ files mentioned in the Nextcloud database dump).
 
     if [ ! $presync ]
     then
-        echo "Putting nextcloud into maintenance mode."
+        echo "Putting Nextcloud into maintenance mode."
         ( cd /Nextcloud/nextcloud/
           sudo -u www-data php occ maintenance:mode --on &>/dev/null )
 
-        echo "Dumping nextcloud database."
+        echo "Dumping Nextcloud database."
         ( cd /Nextcloud/
           umask 07
-          BAK=`date +"%Y%m%d"`-dbbackup.bak.gz
+          BAK=`date +"%Y%m%d%H%M"`-dbbackup.bak.gz
           CNF=/Nextcloud/dbbackup.cnf
           mysqldump --defaults-file=$CNF nextcloud | gzip > $BAK
-          chmod 440 $BAK )
+          chmod 440 $BAK
+          ls -t1 *-dbbackup.bak.gz | tail -n +4 \
+          | while read; do rm "$REPLY"; done
+        )
     fi
 
 }
@@ -832,21 +832,19 @@ files mentioned in the Nextcloud database dump).
 
     if [ ! $presync ]
     then
-        echo "Putting nextcloud back into service."
+        echo "Putting Nextcloud back into service."
         ( cd /Nextcloud/nextcloud/
           sudo -u www-data php occ maintenance:mode --off &>/dev/null )
     fi
 
-    if [ $mounted ]
+    if mountpoint -q /backup/
     then
         echo "Unmounting /backup/."
         umount /backup
         cryptsetup luksClose backup
-        mounted=
+        echo "Done."
+        echo "The backup device can be safely disconnected."
     fi
-    echo "Done."
-    echo "The backup device can be safely disconnected."
-
 }
 
 start
@@ -858,13 +856,13 @@ start
 done
 
 finish
-
+
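The dump-rotation step in the script above (keep only the three newest *-dbbackup.bak.gz files) can be rehearsed in a scratch directory. This sketch substitutes a reverse name sort for the script's ls -t1, which is equivalent here because the YYYYmmddHHMM name prefix sorts the same way as modification time.

```shell
# Create five dated dump names, then apply the same keep-3 rotation
# the backup script uses: list newest first, pass everything from the
# fourth entry on to rm.
dir=$(mktemp -d)
cd "$dir"
for stamp in 202501010000 202501020000 202501030000 202501040000 202501050000
do
    touch "$stamp-dbbackup.bak.gz"
done
ls -1 *-dbbackup.bak.gz | sort -r | tail -n +4 \
| while read; do rm "$REPLY"; done
ls -1 *-dbbackup.bak.gz
```

After the rotation only the three newest dumps remain; the two oldest have been removed.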
+
+

4. The Particulars

This chapter introduces Ansible variables intended to simplify @@ -876,13 +874,13 @@ stored in separate files: public/vars.yml a

The example settings in this document configure VirtualBox VMs as described in the Testing chapter. For more information about how a small institute turns the example Ansible code into a working Ansible configuration, see chapter The Ansible Configuration.

+
+

4.1. Generic Particulars

The small institute's domain name is used quite frequently in the @@ -892,9 +890,9 @@ replace {{ domain_name }} in the code with small.example.org<

+public/vars.yml
---
 domain_name: small.example.org
-
+

@@ -915,14 +913,14 @@ like DNS-over-HTTPS will pass us by.

+private/vars.yml
---
 domain_priv: small.private
-
+
+
+

4.2. Subnets

The small institute uses a private Ethernet, two VPNs, and an @@ -1013,16 +1011,16 @@ example result follows the code.

+
(let ((bytes
          (let ((i (random (+ 256 16))))
            (if (< i 256)
                (list 10        i         (1+ (random 254)))
              (list  172 (+ 16 (- i 256)) (1+ (random 254)))))))
   (format "%d.%d.%d.0/24" (car bytes) (cadr bytes) (caddr bytes)))
-
+
-
+

=> 10.62.17.0/24
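For administrators not running Emacs, the same pick can be made from a shell. This is only a sketch mirroring the Emacs Lisp above (the same 256+16 candidate second bytes), using bash's RANDOM as the entropy source.

```shell
# Pick a random /24 from RFC 1918 space: 10.0.0.0/8 contributes 256
# choices of second byte, 172.16.0.0/12 contributes 16, and the third
# byte runs 1..254, as in the Emacs Lisp version.
i=$((RANDOM % 272))
if [ "$i" -lt 256 ]
then a=10;  b=$i
else a=172; b=$((16 + i - 256))
fi
c=$((1 + RANDOM % 254))
net="$a.$b.$c.0/24"
echo "$net"
```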

@@ -1035,16 +1033,16 @@ code block below. The small institute treats these addresses as sensitive information so again the code block below "tangles" into private/vars.yml rather than public/vars.yml. Two of the addresses are in 192.168 subnets because they are part of a test configuration using mostly-default VirtualBoxes (described here).

+private/vars.yml

 private_net_cidr:           192.168.56.0/24
+wild_net_cidr:              192.168.57.0/24
 public_vpn_net_cidr:        10.177.86.0/24
 campus_vpn_net_cidr:        10.84.138.0/24
-gate_wifi_net_cidr:         192.168.57.0/24
-
+

@@ -1056,12 +1054,11 @@ e.g. _net_and_mask rather than _net_cidr.

+private/vars.yml
private_net:
            "{{ private_net_cidr | ansible.utils.ipaddr('network') }}"
 private_net_mask:
            "{{ private_net_cidr | ansible.utils.ipaddr('netmask') }}"
-private_net_and_mask:
-                           "{{ private_net }} {{ private_net_mask }}"
+private_net_and_mask:      "{{ private_net }} {{ private_net_mask }}"
 public_vpn_net:
         "{{ public_vpn_net_cidr | ansible.utils.ipaddr('network') }}"
 public_vpn_net_mask:
@@ -1074,15 +1071,13 @@ campus_vpn_net_mask:
         "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('netmask') }}"
 campus_vpn_net_and_mask:
                      "{{ campus_vpn_net }} {{ campus_vpn_net_mask }}"
-gate_wifi_net:
-         "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('network') }}"
-gate_wifi_net_mask:
-         "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('netmask') }}"
-gate_wifi_net_and_mask:
-                       "{{ gate_wifi_net }} {{ gate_wifi_net_mask }}"
-gate_wifi_broadcast:
-       "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('broadcast') }}"
-
+wild_net: "{{ wild_net_cidr | ansible.utils.ipaddr('network') }}"
+wild_net_mask:
+          "{{ wild_net_cidr | ansible.utils.ipaddr('netmask') }}"
+wild_net_and_mask: "{{ wild_net }} {{ wild_net_mask }}"
+wild_net_broadcast:
+       "{{ wild_net_cidr | ansible.utils.ipaddr('broadcast') }}"
+

@@ -1093,8 +1088,8 @@ the institute's Internet domain name.

+public/vars.yml
front_addr: 192.168.15.5
+

@@ -1109,49 +1104,48 @@ virtual machines and networks, and the VirtualBox user manual uses Finally, five host addresses are needed frequently in the Ansible code. The first two are Core's and Gate's addresses on the private Ethernet. The next two are Gate's and the campus Wi-Fi's addresses on the "wild" subnet, the untrusted Ethernet (wild_net) between Gate and the campus Wi-Fi access point(s) and IoT appliances. The last is Front's address on the public VPN, perversely called front_private_addr. The following code block picks the obvious IP addresses for Core (host 1) and Gate (host 2).

+private/vars.yml
core_addr_cidr:  "{{ private_net_cidr | ansible.utils.ipaddr('1') }}"
 gate_addr_cidr:  "{{ private_net_cidr | ansible.utils.ipaddr('2') }}"
-gate_wifi_addr_cidr:
-               "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('1') }}"
-wifi_wan_addr_cidr:
-               "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('2') }}"
+gate_wild_addr_cidr:
+                    "{{ wild_net_cidr | ansible.utils.ipaddr('1') }}"
+wifi_wan_addr_cidr: "{{ wild_net_cidr | ansible.utils.ipaddr('2') }}"
 front_private_addr_cidr:
               "{{ public_vpn_net_cidr | ansible.utils.ipaddr('1') }}"
 
 core_addr:   "{{ core_addr_cidr | ansible.utils.ipaddr('address') }}"
 gate_addr:   "{{ gate_addr_cidr | ansible.utils.ipaddr('address') }}"
-gate_wifi_addr:
-        "{{ gate_wifi_addr_cidr | ansible.utils.ipaddr('address') }}"
+gate_wild_addr:
+        "{{ gate_wild_addr_cidr | ansible.utils.ipaddr('address') }}"
 wifi_wan_addr:
          "{{ wifi_wan_addr_cidr | ansible.utils.ipaddr('address') }}"
 front_private_addr:
     "{{ front_private_addr_cidr | ansible.utils.ipaddr('address') }}"
-
+
+
+

5. The Hardware

The small institute's network was built by its system administrator using Ansible on a trusted notebook. The Ansible configuration and scripts were generated by "tangling" the Ansible code included here. (The Ansible Configuration describes how to do this.) The following sections describe how Front, Gate and Core were prepared for Ansible.

+
+

5.1. The Front Machine

Front is the small institute's public facing server, a virtual machine @@ -1164,8 +1158,8 @@ possible to quickly re-provision a new Front machine from a frontier Internet café using just the administrator's notebook.

+
+

5.1.1. A Digital Ocean Droplet

The following example prepared a new front on a Digital Ocean droplet. @@ -1186,11 +1180,10 @@ notebook$ ssh root@159.65.75.60 root@ubuntu# -

The freshly created Digital Ocean droplet came with just one account, root, but the small institute avoids remote access to the "super user" account (per the policy in The Administration Accounts), so the administrator created a sysadm account with the ability to request escalated privileges via the sudo command.

@@ -1209,12 +1202,11 @@ root@ubuntu# logout notebook$ -

The password was generated by gpw, saved in the administrator's password keep, and later added to Secret/become.yml as shown below. (Producing a working Ansible configuration with the Secret/become.yml file is described in The Ansible Configuration.)

@@ -1225,11 +1217,10 @@ notebook$ ansible-vault encrypt_string givitysticangout \
 notebook_     >>Secret/become.yml
 
-

After creating the sysadm account on the droplet, the administrator concatenated a personal public ssh key and the key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to the droplet, and installed it as the authorized_keys for sysadm.

@@ -1253,7 +1244,6 @@ notebook$ rm admin_keys notebook$ -

The Ansible configuration expects certain host keys on the new front. The administrator should install them now, and deal with the machine's @@ -1272,7 +1262,6 @@ sysadm@ubuntu$ logout notebook$ ssh-keygen -f ~/.ssh/known_hosts -R 159.65.75.60 -

The last command removes the old host key from the administrator's known_hosts file. The next SSH connection should ask to confirm the @@ -1291,7 +1280,6 @@ sysadm@ubuntu$ sudo head -1 /etc/shadow root:*:18355:0:99999:7::: -

After passing the above test, the administrator disabled root logins on the droplet. The last command below tested that root logins were @@ -1306,7 +1294,6 @@ root@159.65.75.60: Permission denied (publickey). notebook$ -

At this point the droplet was ready for configuration by Ansible. Later, provisioned with all of Front's services and tested, the @@ -1316,8 +1303,8 @@ address.

+
+

5.2. The Core Machine

Core is the small institute's private file, email, cloud and whatnot @@ -1341,7 +1328,7 @@ The following example prepared a new core on a PC with Debian 11 freshly installed. During installation, the machine was named core, no desktop or server software was installed, no root password was set, and a privileged account named sysadm was created (per the policy in The Administration Accounts).

@@ -1353,12 +1340,11 @@ Retype new password: oingstramextedil
 Is the information correct? [Y/n] 
 
-

The password was generated by gpw, saved in the administrator's password keep, and later added to Secret/become.yml as shown below. (Producing a working Ansible configuration with the Secret/become.yml file is described in The Ansible Configuration.)

@@ -1369,7 +1355,6 @@ notebook$ ansible-vault encrypt_string oingstramextedil \
 notebook_     >>Secret/become.yml
 
-

With Debian freshly installed, Core needed several additional software packages. The administrator temporarily plugged Core into a cable @@ -1383,7 +1368,6 @@ _ postfix dovecot-imapd fetchmail expect rsync \ _ gnupg openssh-server -

The Nextcloud configuration requires Apache2, MariaDB and a number of PHP modules. Installing them while Core was on a cable modem sped up @@ -1396,7 +1380,6 @@ _ php-{json,mysql,mbstring,intl,imagick,xml,zip} \ _ libapache2-mod-php -

Similarly, the NAGIOS configuration requires a handful of packages that were pre-loaded via cable modem (to test a frontier deployment). @@ -1407,10 +1390,9 @@ $ sudo apt install nagios4 monitoring-plugins-basic lm-sensors \ _ nagios-nrpe-plugin -

Next, the administrator concatenated a personal public ssh key and the key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to Core, and installed it as the authorized_keys for sysadm.

@@ -1434,7 +1416,6 @@ notebook$ rm admin_keys notebook$ -

Note that the name core.lan should be known to the cable modem's DNS service. An IP address might be used instead, discovered with an ip @@ -1451,7 +1432,7 @@ a new, private IP address and a default route.

In the example command lines below, the address 10.227.248.1 was generated by the random subnet address picking procedure described in Subnets, and is named core_addr in the Ansible code. The second address, 10.227.248.2, is the corresponding address for Gate's Ethernet interface, and is named gate_addr in the Ansible code. @@ -1462,14 +1443,13 @@ sysadm@core$ sudo ip address add 10.227.248.1 dev enp82s0 sysadm@core$ sudo ip route add default via 10.227.248.2 dev enp82s0

At this point Core was ready for provisioning with Ansible.

+
+

5.3. The Gate Machine

Gate is the small institute's route to the Internet, and the campus @@ -1480,9 +1460,9 @@ interfaces.

  1. lan is its main Ethernet interface, connected to the campus's private Ethernet switch.
  2. -
  3. wifi is its second Ethernet interface, connected to the campus -Wi-Fi access point's WAN Ethernet interface (with a cross-over -cable).
  4. +
  5. wild is its second Ethernet interface, connected to the +untrusted network of campus IoT appliances and Wi-Fi access +point(s).
  6. isp is its third network interface, connected to the campus ISP. This could be an Ethernet device connected to a cable modem. It could be a USB port tethered to a phone, a @@ -1490,7 +1470,7 @@ USB-Ethernet adapter, or a wireless adapter connected to a campground Wi-Fi access point, etc.
-
+
 =============== | ==================================================
                 |                                           Premises
           (Campus ISP)                                              
@@ -1503,8 +1483,8 @@ campground Wi-Fi access point, etc.
                 +----Ethernet switch                                
 
+
+

5.3.1. Alternate Gate Topology

While Gate and Core really need to be separate machines for security @@ -1513,7 +1493,7 @@ This avoids the need for a second Wi-Fi access point and leads to the following topology.

-
+
 =============== | ==================================================
                 |                                           Premises
            (House ISP)                                              
@@ -1525,7 +1505,8 @@ following topology.
                 +----Ethernet switch                                
 

-In this case Gate has two interfaces and there is no Gate-WiFi subnet. +In this case Gate has two interfaces and there is no wild subnet +other than the Internets themselves.

@@ -1536,12 +1517,12 @@ its Ethernet and Wi-Fi clients are allowed to communicate).

+
+

5.3.2. Original Gate Topology

The Ansible code in this document is somewhat dependent on the physical network shown in the Overview wherein Gate has three network interfaces.

@@ -1550,7 +1531,7 @@ The following example prepared a new gate on a PC with Debian 11 freshly installed. During installation, the machine was named gate, no desktop or server software was installed, no root password was set, and a privileged account named sysadm was created (per the policy in The Administration Accounts).

@@ -1562,12 +1543,11 @@ Retype new password: icismassssadestm
 Is the information correct? [Y/n] 
 
-

The password was generated by gpw, saved in the administrator's password keep, and later added to Secret/become.yml as shown below. (Producing a working Ansible configuration with the Secret/become.yml file is described in The Ansible Configuration.)

@@ -1578,7 +1558,6 @@ notebook$ ansible-vault encrypt_string icismassssadestm \
 notebook_     >>Secret/become.yml
 
-

With Debian freshly installed, Gate needed a couple additional software packages. The administrator temporarily plugged Gate into a @@ -1591,10 +1570,9 @@ _ ufw isc-dhcp-server postfix openvpn \ _ openssh-server -

Next, the administrator concatenated a personal public ssh key and the key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to Gate, and installed it as the authorized_keys for sysadm.

@@ -1618,7 +1596,6 @@ notebook$ rm admin_keys notebook$ -

Note that the name gate.lan should be known to the cable modem's DNS service. An IP address might be used instead, discovered with an ip @@ -1635,20 +1612,19 @@ a new, private IP address.

In the example command lines below, the address 10.227.248.2 was generated by the random subnet address picking procedure described in Subnets, and is named gate_addr in the Ansible code.

 $ sudo ip address add 10.227.248.2 dev eth0
 
-

Gate was also connected to the USB Ethernet dongles cabled to the campus Wi-Fi access point and the campus ISP. The three network adapters are known by their MAC addresses, the values of the variables gate_lan_mac, gate_wild_mac, and gate_isp_mac. (For more information, see the Gate role's Configure Netplan task.)
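Matching adapters by MAC address might be expressed in netplan along the following lines. This is only a sketch in the spirit of the Gate role's Configure Netplan task, not that task itself; the interface stanza names, the static/DHCP choices, and the use of the *_cidr variables are illustrative assumptions.

```
network:
  version: 2
  ethernets:
    lan:
      match: {macaddress: "{{ gate_lan_mac }}"}
      set-name: lan
      addresses: [ "{{ gate_addr_cidr }}" ]
    wild:
      match: {macaddress: "{{ gate_wild_mac }}"}
      set-name: wild
      addresses: [ "{{ gate_wild_addr_cidr }}" ]
    isp:
      match: {macaddress: "{{ gate_isp_mac }}"}
      set-name: isp
      dhcp4: true
```

Matching on MAC rather than kernel device names keeps the configuration stable when USB dongles enumerate in a different order across reboots.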

@@ -1658,37 +1634,37 @@ At this point Gate was ready for provisioning with Ansible.

+
+

6. The All Role

The all role contains tasks that are executed on all of the institute's servers. At the moment there is just the one.

+
+

6.1. Include Particulars

The all role's task contains a reference to a common institute particular, the institute's domain_name, a variable found in the public/vars.yml file. Thus the first task of the all role is to include the variables defined in this file (described in The Particulars). The code block below is the first to tangle into roles/all/tasks/main.yml.

+roles/all/tasks/main.yml
---
 - name: Include public variables.
   include_vars: ../public/vars.yml
   tags: accounts
-
+
+
+

6.2. Enable Systemd Resolved

The systemd-networkd and systemd-resolved service units are not @@ -1709,7 +1685,7 @@ follows these recommendations (and not the suggestion to enable

+roles_t/all/tasks/main.yml

 - name: Install systemd-resolved.
   become: yes
   apt: pkg=systemd-resolved
@@ -1741,22 +1717,22 @@ follows these recommendations (and not the suggestion to enable
   when:
   - ansible_distribution == 'Debian'
   - 12 > ansible_distribution_major_version|int
-
+
+
+

6.3. Trust Institute Certificate Authority

All servers should recognize the institute's Certificate Authority as trustworthy, so its certificate is added to the set of trusted CAs on each host. More information about how the small institute manages its X.509 certificates is available in Keys.

+roles_t/all/tasks/main.yml

 - name: Trust the institute CA.
   become: yes
   copy:
@@ -1766,28 +1742,28 @@ X.509 certificates is available in Keys.
     owner: root
     group: root
   notify: Update CAs.
-
+
+roles_t/all/handlers/main.yml

 - name: Update CAs.
   become: yes
   command: update-ca-certificates
-
+
+
+

7. The Front Role

The front role installs and configures the services expected on the institute's publicly accessible "front door": email, web, VPN. The virtual machine is prepared with an Ubuntu Server install and remote access to a privileged, administrator's account. (For details, see The Front Machine.)

@@ -1808,17 +1784,17 @@ uses the institute's CA and server certificates, and expects client certificates signed by the institute CA.

-
-

7.1. Include Particulars

+
+

7.1. Include Particulars

-The first task, as in The All Role, is to include the institute +The first task, as in The All Role, is to include the institute particulars.  The front role refers to private variables and the membership roll, so these are included as well.

-roles/front/tasks/main.yml
---
+roles/front/tasks/main.yml
---
 - name: Include public variables.
   include_vars: ../public/vars.yml
   tags: accounts
@@ -1830,12 +1806,12 @@ membership roll, so these are included was well.
 - name: Include members.
   include_vars: "{{ lookup('first_found', membership_rolls) }}"
   tags: accounts
-
+
-
-

7.2. Configure Hostname

+
+

7.2. Configure Hostname

This task ensures that Front's /etc/hostname and /etc/mailname are @@ -1844,7 +1820,7 @@ delivery.

-roles_t/front/tasks/main.yml
- name: Configure hostname.
+roles_t/front/tasks/main.yml
- name: Configure hostname.
   become: yes
   copy:
     content: "{{ domain_name }}\n"
@@ -1853,20 +1829,20 @@ delivery.
   - /etc/hostname
   - /etc/mailname
   notify: Update hostname.
-
+
-roles_t/front/handlers/main.yml
---
+roles_t/front/handlers/main.yml
---
 - name: Update hostname.
   become: yes
   command: hostname -F /etc/hostname
-
+
-
-

7.3. Add Administrator to System Groups

+
+

7.3. Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned @@ -1875,19 +1851,19 @@ these groups speeds up debugging.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Add {{ ansible_user }} to system groups.
   become: yes
   user:
     name: "{{ ansible_user }}"
     append: yes
     groups: root,adm
-
+
-
-

7.4. Configure SSH

+
+

7.4. Configure SSH

The SSH service on Front needs to be known to Monkey. The following @@ -1896,7 +1872,7 @@ those stored in Secret/ssh_front/etc/ssh/

-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: Reload SSH server.
   become: yes
   systemd:
     service: ssh
     state: reloaded
-
+
-
-

7.5. Configure Monkey

+
+

7.5. Configure Monkey

The small institute runs cron jobs and web scripts that generate @@ -1934,13 +1910,13 @@ reports and perform checks. The un-privileged jobs are run by a system account named monkey. One of Monkey's more important jobs on Core is to run rsync to update the public web site on Front. Monkey on Core will login as monkey on Front to synchronize the files (as -described in *Configure Apache2). To do that without needing a +described in *Configure Apache2). To do that without needing a password, the monkey account on Front should authorize Monkey's SSH key on Core.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Create monkey.
   become: yes
   user:
@@ -1962,54 +1938,54 @@ key on Core.
     name: "{{ ansible_user }}"
     append: yes
     groups: monkey
-
+
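The key exchange this section automates can be sketched by hand: a key pair is generated for monkey on Core, and its public half is appended to the monkey account's authorized_keys on Front. The scratch directory below stands in for the real paths:

```shell
# Generate a key pair as Monkey would have on Core, then append the public
# half to a stand-in for /home/monkey/.ssh/authorized_keys on Front.
dir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
head -c 7 "$dir/authorized_keys"; echo    # key type prefix
rm -rf "$dir"
```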
-
-

7.6. Install Rsync

+
+

7.6. Install Rsync

Monkey uses Rsync to keep the institute's public web site up-to-date.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install rsync.
   become: yes
   apt: pkg=rsync
-
+
-
-

7.7. Install Unattended Upgrades

+
+

7.7. Install Unattended Upgrades

The institute prefers to install security updates as soon as possible.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install basic software.
   become: yes
   apt: pkg=unattended-upgrades
-
+
-
-

7.8. Configure User Accounts

+
+

7.8. Configure User Accounts

User accounts are created immediately so that Postfix and Dovecot can start delivering email immediately, without returning "no such -recipient" replies. The Account Management chapter describes the +recipient" replies. The Account Management chapter describes the members and usernames variables used below.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Create user accounts.
   become: yes
   user:
@@ -2038,12 +2014,12 @@ recipient" replies.  The Account Management chapter de
   loop: "{{ usernames }}"
   when: members[item].status != 'current'
   tags: accounts
-
+
-
-

7.9. Install Server Certificate

+
+

7.9. Install Server Certificate

The servers on Front use the same certificate (and key) to @@ -2053,7 +2029,7 @@ readable by root.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install server certificate/key.
   become: yes
   copy:
@@ -2069,12 +2045,12 @@ readable by root.
   notify:
   - Restart Postfix.
   - Restart Dovecot.
-
+
-
-

7.10. Configure Postfix on Front

+
+

7.10. Configure Postfix on Front

Front uses Postfix to provide the institute's public SMTP service, and @@ -2091,7 +2067,7 @@ The appropriate answers are listed here but will be checked

-As discussed in The Email Service above, Front's Postfix configuration +As discussed in The Email Service above, Front's Postfix configuration includes site-wide support for larger message sizes, shorter queue times, the relaying configuration, and the common path to incoming emails. These and a few Front-specific Postfix configurations @@ -2104,13 +2080,13 @@ relays messages from the campus.

-postfix-front-networks
- p: mynetworks
+postfix-front-networks
- p: mynetworks
   v: >-
      {{ public_vpn_net_cidr }}
      127.0.0.0/8
      [::ffff:127.0.0.0]/104
      [::1]/128
-
+

@@ -2120,13 +2096,13 @@ difficult for internal hosts, who do not have (public) domain names.

-postfix-front-restrictions
- p: smtpd_recipient_restrictions
+postfix-front-restrictions
- p: smtpd_recipient_restrictions
   v: >-
      permit_mynetworks
      reject_unauth_pipelining
      reject_unauth_destination
      reject_unknown_sender_domain
-
+

@@ -2141,15 +2117,15 @@ messages; incoming messages are delivered locally, without

-postfix-header-checks
- p: smtp_header_checks
+postfix-header-checks
- p: smtp_header_checks
   v: regexp:/etc/postfix/header_checks.cf
-
+
-postfix-header-checks-content
/^Received:/    IGNORE
+postfix-header-checks-content
/^Received:/    IGNORE
 /^User-Agent:/  IGNORE
-
+
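Postfix applies these checks itself when relaying; the effect of the two IGNORE rules can be approximated with grep to show which outgoing header lines would be dropped:

```shell
# A rough simulation of the header_checks table: lines matching the two
# regexps are removed, everything else passes through.
printf '%s\n' \
    'Received: from core.small.private (core [192.168.56.1])' \
    'User-Agent: Evolution 3.38' \
    'Subject: hello' \
| grep -Ev '^(Received|User-Agent):'
```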

@@ -2159,7 +2135,7 @@ Debian default for inet_interfaces.

-postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt }
+postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt }
 - { p: smtpd_tls_key_file, v: /etc/server.key }
 <<postfix-front-networks>>
 <<postfix-front-restrictions>>
@@ -2168,7 +2144,7 @@ Debian default for inet_interfaces.
 <<postfix-queue-times>>
 <<postfix-maildir>>
 <<postfix-header-checks>>
-
+

@@ -2178,7 +2154,7 @@ start and enable the service.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install Postfix.
   become: yes
   apt: pkg=postfix
@@ -2207,11 +2183,11 @@ start and enable the service.
     service: postfix
     enabled: yes
     state: started
-
+
-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: Restart Postfix.
   become: yes
   systemd:
@@ -2224,12 +2200,12 @@ start and enable the service.
     chdir: /etc/postfix/
     cmd: postmap header_checks.cf
   notify: Restart Postfix.
-
+
-
-

7.11. Configure Public Email Aliases

+
+

7.11. Configure Public Email Aliases

The institute's Front needs to deliver email addressed to a number of @@ -2246,7 +2222,7 @@ created by a more specialized role.

-roles_t/front/tasks/main.yml
- name: Install institute email aliases.
+roles_t/front/tasks/main.yml
- name: Install institute email aliases.
   become: yes
   blockinfile:
     block: |
@@ -2258,20 +2234,20 @@ created by a more specialized role.
     path: /etc/aliases
     marker: "# {mark} INSTITUTE MANAGED BLOCK"
   notify: New aliases.
-
+
-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: New aliases.
   become: yes
   command: newaliases
-
+
-
-

7.12. Configure Dovecot IMAPd

+
+

7.12. Configure Dovecot IMAPd

Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to @@ -2280,7 +2256,7 @@ default with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core accesses Front via VPN, but helps to ensure privacy even when members must, in extremis, access recent email directly from their accounts on Front. For more information -about Front's role in the institute's email services, see The Email +about Front's role in the institute's email services, see The Email Service.

@@ -2299,7 +2275,7 @@ and enables it to start at every reboot.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install Dovecot IMAPd.
   become: yes
   apt: pkg=dovecot-imapd
@@ -2322,22 +2298,22 @@ and enables it to start at every reboot.
     service: dovecot
     enabled: yes
     state: started
-
+
-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: Restart Dovecot.
   become: yes
   systemd:
     service: dovecot
     state: restarted
-
+
-
-

7.13. Configure Apache2

+
+

7.13. Configure Apache2

This is the small institute's public web site. It is simple, static, @@ -2373,7 +2349,7 @@ taken from https://www

-apache-ciphers
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
+apache-ciphers
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
 SSLHonorCipherOrder on
 SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
                     'ECDHE-ECDSA-AES256-GCM-SHA384',
@@ -2403,7 +2379,7 @@ SSLHonorCipherOrder on
                     '!SRP',
                     '!DSS',
                     '!RC4' ] |join(":") }}
-
+

@@ -2428,12 +2404,12 @@ used on all of the institute's web sites.

-apache-userdir-front
UserDir /home/www-users
+apache-userdir-front
UserDir /home/www-users
 <Directory /home/www-users/>
         Require all granted
         AllowOverride None
 </Directory>
-
+

@@ -2443,10 +2419,10 @@ HTTPS URLs.

-apache-redirect-front
<VirtualHost *:80>
+apache-redirect-front
<VirtualHost *:80>
         Redirect permanent / https://{{ domain_name }}/
 </VirtualHost>
-
+

@@ -2468,7 +2444,7 @@ the inside of a VirtualHost block. They should apply globally.

-apache-front
ServerName {{ domain_name }}
+apache-front
ServerName {{ domain_name }}
 ServerAdmin webmaster@{{ domain_name }}
 
 DocumentRoot /home/www
@@ -2493,7 +2469,7 @@ CustomLog ${APACHE_LOG_DIR}/access.log combined
 </VirtualHost>
 
 <<apache-ciphers>>
-
+

@@ -2503,7 +2479,7 @@ e.g. /etc/apache2/sites-available/small.example.org.conf and runs

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install Apache2.
   become: yes
   apt: pkg=apache2
@@ -2544,17 +2520,17 @@ e.g. /etc/apache2/sites-available/small.example.org.conf and runs
     service: apache2
     enabled: yes
     state: started
-
+
-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: Restart Apache2.
   become: yes
   systemd:
     service: apache2
     state: restarted
-
+

@@ -2563,7 +2539,7 @@ that it does not interfere with its replacement.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Disable default vhosts.
   become: yes
   file:
@@ -2571,7 +2547,7 @@ that it does not interfere with its replacement.
     state: absent
   loop: [ 000-default.conf, default-ssl.conf ]
   notify: Restart Apache2.
-
+

@@ -2581,14 +2557,14 @@ same records as access.log.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Disable other-vhosts-access-log option.
   become: yes
   file:
     path: /etc/apache2/conf-enabled/other-vhosts-access-log.conf
     state: absent
   notify: Restart Apache2.
-
+

@@ -2597,7 +2573,7 @@ the users' ~/Public/HTML/ directories.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Create UserDir.
   become: yes
   file:
@@ -2623,12 +2599,12 @@ the users' ~/Public/HTML/ directories.
   loop: "{{ usernames }}"
   when: members[item].status != 'current'
   tags: accounts
-
+
-
-

7.14. Configure OpenVPN

+
+

7.14. Configure OpenVPN

Front uses OpenVPN to provide the institute's public VPN service. The @@ -2641,9 +2617,9 @@ route packets for the campus networks to Core.

-openvpn-ccd-core
iroute {{ private_net_and_mask }}
+openvpn-ccd-core
iroute {{ private_net_and_mask }}
 iroute {{ campus_vpn_net_and_mask }}
-
+
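For concreteness, assuming the private Ethernet is 192.168.56.0/24 (as in the examples elsewhere in this document) and a made-up campus VPN subnet, the ccd file would render roughly as:

```
iroute 192.168.56.0 255.255.255.0
iroute 10.84.139.0 255.255.255.0
```

An iroute tells the OpenVPN server which connected client owns these subnets; it complements, rather than replaces, the kernel routes that deliver such packets to the OpenVPN process in the first place.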

@@ -2658,21 +2634,21 @@ through some ISP, and thus needs the same routes as the clients.

-openvpn-front-routes
route {{ private_net_and_mask }}
+openvpn-front-routes
route {{ private_net_and_mask }}
 route {{ campus_vpn_net_and_mask }}
 push "route {{ private_net_and_mask }}"
 push "route {{ campus_vpn_net_and_mask }}"
-
+

The complete OpenVPN configuration for Front includes a server option, the client-config-dir option, the routes mentioned above, -and the common options discussed in The VPN Service. +and the common options discussed in The VPN Service.

-openvpn-front
server {{ public_vpn_net_and_mask }}
+openvpn-front
server {{ public_vpn_net_and_mask }}
 client-config-dir /etc/openvpn/ccd
 <<openvpn-front-routes>>
 <<openvpn-dev-mode>>
@@ -2686,8 +2662,8 @@ ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
 cert server.crt
 key server.key
 dh dh2048.pem
-tls-auth ta.key 0
-
+tls-crypt shared.key
+
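The shared.key referenced here is an OpenVPN static key, which OpenVPN itself can (re)generate. A guarded sketch, trying the 2.5+ spelling of the command first and falling back to the 2.4 one, since openvpn may not be installed on the machine at hand:

```shell
# Generate a throwaway static key the way front-shared.key would be made.
if command -v openvpn >/dev/null 2>&1; then
    dir=$(mktemp -d)
    openvpn --genkey secret "$dir/front-shared.key" 2>/dev/null \
        || openvpn --genkey --secret "$dir/front-shared.key"
    grep -c 'OpenVPN static key' "$dir/front-shared.key"
    rm -rf "$dir"
else
    echo "openvpn not installed here"
fi
```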

@@ -2696,7 +2672,7 @@ configure the OpenVPN server on Front.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install OpenVPN.
   become: yes
   apt: pkg=openvpn
@@ -2752,7 +2728,7 @@ configure the OpenVPN server on Front.
     mode: u=r,g=,o=
   loop:
   - { src: front-dh2048.pem, dest: dh2048.pem }
-  - { src: front-ta.key, dest: ta.key }
+  - { src: front-shared.key, dest: shared.key }
   notify: Restart OpenVPN.
 
 - name: Configure OpenVPN.
@@ -2770,22 +2746,22 @@ configure the OpenVPN server on Front.
     service: openvpn@server
     enabled: yes
     state: started
-
+
-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: Restart OpenVPN.
   become: yes
   systemd:
     service: openvpn@server
     state: restarted
-
+
-
-

7.15. Configure Kamailio

+
+

7.15. Configure Kamailio

Front uses Kamailio to provide a SIP service on the public VPN so that @@ -2807,8 +2783,8 @@ specifies the actual IP, known here as front_private_addr.

-kamailio
listen=udp:{{ front_private_addr }}:5060
-
+kamailio
listen=udp:{{ front_private_addr }}:5060
+

@@ -2823,11 +2799,11 @@ The first step is to install Kamailio.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Install Kamailio.
   become: yes
   apt: pkg=kamailio
-
+

@@ -2838,7 +2814,7 @@ be started before the tun device has appeared.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Create Kamailio/Systemd configuration drop.
   become: yes
   file:
@@ -2854,16 +2830,16 @@ be started before the tun device has appeared.
       After=sys-devices-virtual-net-ovpn.device
     dest: /etc/systemd/system/kamailio.service.d/depend.conf
   notify: Reload Systemd.
-
+
-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: Reload Systemd.
   become: yes
   systemd:
     daemon-reload: yes
-
+

@@ -2871,7 +2847,7 @@ Finally, Kamailio can be configured and started.

-roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml

 - name: Configure Kamailio.
   become: yes
   copy:
@@ -2886,42 +2862,42 @@ Finally, Kamailio can be configured and started.
     service: kamailio
     enabled: yes
     state: started
-
+
-roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml

 - name: Restart Kamailio.
   become: yes
   systemd:
     service: kamailio
     state: restarted
-
+
-
-

8. The Core Role

+
+

8. The Core Role

The core role configures many essential campus network services as well as the institute's private cloud, so the core machine has horsepower (CPUs and RAM) and large disks and is prepared with a Debian install and remote access to a privileged, administrator's -account. (For details, see The Core Machine.) +account. (For details, see The Core Machine.)

-
-

8.1. Include Particulars

+
+

8.1. Include Particulars

-The first task, as in The Front Role, is to include the institute +The first task, as in The Front Role, is to include the institute particulars and membership roll.

-roles_t/core/tasks/main.yml
---
+roles_t/core/tasks/main.yml
---
 - name: Include public variables.
   include_vars: ../public/vars.yml
   tags: accounts
@@ -2931,12 +2907,12 @@ particulars and membership roll.
 - name: Include members.
   include_vars: "{{ lookup('first_found', membership_rolls) }}"
   tags: accounts
-
+
-
-

8.2. Configure Hostname

+
+

8.2. Configure Hostname

This task ensures that Core's /etc/hostname and /etc/mailname are @@ -2947,7 +2923,7 @@ proper email delivery.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Configure hostname.
   become: yes
   copy:
@@ -2957,20 +2933,20 @@ proper email delivery.
   - { name: "core.{{ domain_priv }}", file: /etc/mailname }
   - { name: "{{ inventory_hostname }}", file: /etc/hostname }
   notify: Update hostname.
-
+
-roles_t/core/handlers/main.yml
---
+roles_t/core/handlers/main.yml
---
 - name: Update hostname.
   become: yes
   command: hostname -F /etc/hostname
-
+
-
-

8.3. Configure Systemd Resolved

+
+

8.3. Configure Systemd Resolved

Core runs the campus name server, so Resolved is configured to use it @@ -2979,7 +2955,7 @@ list, and to disable its cache and stub listener.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Configure resolved.
   become: yes
   lineinfile:
@@ -2995,11 +2971,11 @@ list, and to disable its cache and stub listener.
   notify:
   - Reload Systemd.
   - Restart Systemd resolved.
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Reload Systemd.
   become: yes
   systemd:
@@ -3010,12 +2986,12 @@ list, and to disable its cache and stub listener.
   systemd:
     service: systemd-resolved
     state: restarted
-
+
-
-

8.4. Configure Netplan

+
+

8.4. Configure Netplan

Core's network interface is statically configured using Netplan and an @@ -3035,12 +3011,12 @@ fact was an empty hash at first boot on a simulated campus Ethernet.)

-private/vars.yml
core_ethernet:              enp0s3
-
+private/vars.yml
core_ethernet:              enp0s3
+
-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install netplan.
   become: yes
   apt: pkg=netplan.io
@@ -3062,20 +3038,20 @@ fact was an empty hash at first boot on a simulated campus Ethernet.)
     dest: /etc/netplan/60-core.yaml
     mode: u=rw,g=r,o=
   notify: Apply netplan.
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Apply netplan.
   become: yes
   command: netplan apply
-
+
-
-

8.5. Configure DHCP For the Private Ethernet

+
+

8.5. Configure DHCP For the Private Ethernet

Core speaks DHCP (Dynamic Host Configuration Protocol) using the @@ -3090,12 +3066,13 @@ The example configuration file, private/cor RFC3442's extension to encode a second (non-default) static route. The default route is through the campus ISP at Gate. A second route directs campus traffic to the Front VPN through Core. This is just an -example file. The administrator adds and removes actual machines from -the actual private/core-dhcpd.conf file. +example file, with MAC addresses chosen to (probably?) match +VirtualBox test machines. In actual use private/core-dhcpd.conf +refers to a replacement file.

-private/core-dhcpd.conf
option domain-name "small.private";
+private/core-dhcpd.conf
option domain-name "small.private";
 option domain-name-servers 192.168.56.1;
 
 default-lease-time 3600;
@@ -3123,16 +3100,16 @@ log-facility daemon;
   hardware ethernet 08:00:27:e0:79:ab; fixed-address 192.168.56.2; }
 host server {
   hardware ethernet 08:00:27:f3:41:66; fixed-address 192.168.56.3; }
-
+

-The following tasks install the ISC's DHCP server and configure it -with the real private/core-dhcpd.conf (not the example above). +The following tasks install ISC's DHCP server and configure it with +the real private/core-dhcpd.conf (not the example above).

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install DHCP server.
   become: yes
   apt: pkg=isc-dhcp-server
@@ -3158,26 +3135,26 @@ with the real private/core-dhcpd.conf
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Restart DHCP server.
   become: yes
   systemd:
     service: isc-dhcp-server
     state: restarted
-
+
-
-

8.6. Configure BIND9

+
+

8.6. Configure BIND9

Core uses BIND9 to provide name service for the institute as described -in The Name Service. The configuration supports reverse name lookups, +in The Name Service. The configuration supports reverse name lookups, resolving many private network addresses to private domain names.

@@ -3186,7 +3163,7 @@ The following tasks install and configure BIND9 on Core.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install BIND9.
   become: yes
   apt: pkg=bind9
@@ -3221,17 +3198,17 @@ The following tasks install and configure BIND9 on Core.
     service: bind9
     enabled: yes
     state: started
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Reload BIND9.
   become: yes
   systemd:
     service: bind9
     state: reloaded
-
+
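The reverse zones below answer PTR queries, in which an IPv4 address appears with its octets reversed under in-addr.arpa. Building the query name for Gate's private address:

```shell
# Reverse the octets of an IPv4 address to form its PTR query name.
echo 192.168.56.2 \
| awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
```

dig -x 192.168.56.2 performs the same reversal before querying; with Core's BIND9 running it should answer gate.small.private.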

@@ -3242,11 +3219,11 @@ probably be used as forwarders rather than Google.

-bind-options
acl "trusted" {
+bind-options
acl "trusted" {
         {{ private_net_cidr }};
+        {{ wild_net_cidr }};
         {{ public_vpn_net_cidr }};
         {{ campus_vpn_net_cidr }};
-        {{ gate_wifi_net_cidr }};
         localhost;
 };
 
@@ -3267,11 +3244,11 @@ probably be used as forwarders rather than Google.
                 localhost;
         };
 };
-
+
-bind-local
include "/etc/bind/zones.rfc1918";
+bind-local
include "/etc/bind/zones.rfc1918";
 
 zone "{{ domain_priv }}." {
         type master;
@@ -3295,11 +3272,11 @@ probably be used as forwarders rather than Google.
         type master;
         file "/etc/bind/db.campus_vpn";
 };
-
+
-private/db.domain
;
+private/db.domain
;
 ; BIND data file for a small institute's PRIVATE domain names.
 ;
 $TTL    604800
@@ -3323,11 +3300,11 @@ probably be used as forwarders rather than Google.
 ;
 core    IN      A       192.168.56.1
 gate    IN      A       192.168.56.2
-
+
-private/db.private
;
+private/db.private
;
 ; BIND reverse data file for a small institute's private Ethernet.
 ;
 $TTL    604800
@@ -3342,11 +3319,11 @@ probably be used as forwarders rather than Google.
 $TTL    7200
 1       IN      PTR     core.small.private.
 2       IN      PTR     gate.small.private.
-
+
-private/db.public_vpn
;
+private/db.public_vpn
;
 ; BIND reverse data file for a small institute's public VPN.
 ;
 $TTL    604800
@@ -3361,11 +3338,11 @@ probably be used as forwarders rather than Google.
 $TTL    7200
 1       IN      PTR     front-p.small.private.
 2       IN      PTR     core-p.small.private.
-
+
-private/db.campus_vpn
;
+private/db.campus_vpn
;
 ; BIND reverse data file for a small institute's campus VPN.
 ;
 $TTL    604800
@@ -3379,12 +3356,12 @@ probably be used as forwarders rather than Google.
 @       IN      NS      core.small.private.
 $TTL    7200
 1       IN      PTR     gate-c.small.private.
-
+
-
-

8.7. Add Administrator to System Groups

+
+

8.7. Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned @@ -3393,30 +3370,30 @@ these groups speeds up debugging.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Add {{ ansible_user }} to system groups.
   become: yes
   user:
     name: "{{ ansible_user }}"
     append: yes
     groups: root,adm
-
+
-
-

8.8. Configure Monkey

+
+

8.8. Configure Monkey

The small institute runs cron jobs and web scripts that generate reports and perform checks. The un-privileged jobs are run by a system account named monkey. One of Monkey's more important jobs on Core is to run rsync to update the public web site on Front (as -described in *Configure Apache2). +described in *Configure Apache2).

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Create monkey.
   become: yes
   user:
@@ -3468,54 +3445,54 @@ described in *Configure Apache2).
     owner: monkey
     group: monkey
     mode: "u=rw,g=,o="
-
+
-
-

8.9. Install Unattended Upgrades

+
+

8.9. Install Unattended Upgrades

The institute prefers to install security updates as soon as possible.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install basic software.
   become: yes
   apt: pkg=unattended-upgrades
-
+
-
-

8.10. Install Expect

+
+

8.10. Install Expect

-The expect program is used by The Institute Commands to interact +The expect program is used by The Institute Commands to interact with Nextcloud on the command line.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install expect.
   become: yes
   apt: pkg=expect
-
+
-
-

8.11. Configure User Accounts

+
+

8.11. Configure User Accounts

User accounts are created immediately so that backups can begin -restoring as soon as possible. The Account Management chapter +restoring as soon as possible. The Account Management chapter describes the members and usernames variables.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Create user accounts.
   become: yes
   user:
@@ -3544,12 +3521,12 @@ describes the members and usernames variables.
   loop: "{{ usernames }}"
   when: members[item].status != 'current'
   tags: accounts
-
+
-
-

8.12. Install Server Certificate

+
+

8.12. Install Server Certificate

The servers on Core use the same certificate (and key) to authenticate @@ -3558,7 +3535,7 @@ themselves to institute clients. They share the /etc/server.crt and

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install server certificate/key.
   become: yes
   copy:
@@ -3574,12 +3551,12 @@ themselves to institute clients.  They share the /etc/server.crt and
   - Restart Postfix.
   - Restart Dovecot.
   - Restart OpenVPN.
-
+
-
-

8.13. Install NTP

+
+

8.13. Install NTP

Core uses NTP to provide a time synchronization service to the campus. @@ -3587,16 +3564,16 @@ The default daemon's default configuration is fine.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install NTP.
   become: yes
   apt: pkg=ntp
-
+
-
-

8.14. Configure Postfix on Core

+
+

8.14. Configure Postfix on Core

Core uses Postfix to provide SMTP service to the campus. The default @@ -3612,7 +3589,7 @@ The appropriate answers are listed here but will be checked

-As discussed in The Email Service above, Core delivers email addressed +As discussed in The Email Service above, Core delivers email addressed to any internal domain name locally, and uses its smarthost Front to relay the rest. Core is reachable only on institute networks, so there is little benefit in enabling TLS, but it does need to handle @@ -3625,7 +3602,7 @@ Core relays messages from any institute network.

-postfix-core-networks
- p: mynetworks
+postfix-core-networks
- p: mynetworks
   v: >-
      {{ private_net_cidr }}
      {{ public_vpn_net_cidr }}
@@ -3633,7 +3610,7 @@ Core relays messages from any institute network.
      127.0.0.0/8
      [::ffff:127.0.0.0]/104
      [::1]/128
-
+

@@ -3641,8 +3618,8 @@ Core uses Front to relay messages to the Internet.

-postfix-core-relayhost
- { p: relayhost, v: "[{{ front_private_addr }}]" }
-
+postfix-core-relayhost
- { p: relayhost, v: "[{{ front_private_addr }}]" }
+

@@ -3653,9 +3630,9 @@ file.

-postfix-transport
.{{ domain_name }}      local:$myhostname
+postfix-transport
.{{ domain_name }}      local:$myhostname
 .{{ domain_priv }}      local:$myhostname
-
+
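With the example particulars used throughout this document (domain_name small.example.org, domain_priv small.private), the transport template renders as:

```
.small.example.org      local:$myhostname
.small.private          local:$myhostname
```

A leading dot in a Postfix transport table matches any subdomain, so mail addressed to e.g. core.small.private is also delivered locally.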

@@ -3664,7 +3641,7 @@ The complete list of Core's Postfix settings for

-postfix-core
<<postfix-relaying>>
+postfix-core
<<postfix-relaying>>
 - { p: smtpd_tls_security_level, v: none }
 - { p: smtp_tls_security_level, v: none }
 <<postfix-message-size>>
@@ -3673,7 +3650,7 @@ The complete list of Core's Postfix settings for
 <<postfix-core-networks>>
 <<postfix-core-relayhost>>
 - { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" }
-
+

@@ -3684,7 +3661,7 @@ enable the service. Whenever /etc/postfix/transport is changed, the

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install Postfix.
   become: yes
   apt: pkg=postfix
@@ -3714,11 +3691,11 @@ enable the service.  Whenever /etc/postfix/transport is changed, the
     service: postfix
     enabled: yes
     state: started
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Restart Postfix.
   become: yes
   systemd:
@@ -3731,12 +3708,12 @@ enable the service.  Whenever /etc/postfix/transport is changed, the
     chdir: /etc/postfix/
     cmd: postmap transport
   notify: Restart Postfix.
-
+
-
-

8.15. Configure Private Email Aliases

+
+

8.15. Configure Private Email Aliases

The institute's Core needs to deliver email addressed to institute @@ -3748,7 +3725,7 @@ installed by more specialized roles.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install institute email aliases.
   become: yes
   blockinfile:
@@ -3761,20 +3738,20 @@ installed by more specialized roles.
     path: /etc/aliases
     marker: "# {mark} INSTITUTE MANAGED BLOCK"
   notify: New aliases.
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: New aliases.
   become: yes
   command: newaliases
-
+
-
-

8.16. Configure Dovecot IMAPd

+
+

8.16. Configure Dovecot IMAPd

Core uses Dovecot's IMAPd to store and serve member emails. As on @@ -3784,7 +3761,7 @@ top" given that Core is only accessed from private (encrypted) networks, but helps to ensure privacy even when members accidentally attempt connections from outside the private networks. For more information about Core's role in the institute's email services, see -The Email Service. +The Email Service.

@@ -3792,7 +3769,7 @@ The institute follows the recommendation in the package README.Debian (in /usr/share/dovecot-core/) but replaces the default "snake oil" certificate with another, signed by the institute. (For more information about the institute's X.509 certificates, see -Keys.) +Keys.)

@@ -3802,7 +3779,7 @@ and enables it to start at every reboot.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install Dovecot IMAPd.
   become: yes
   apt: pkg=dovecot-imapd
@@ -3824,22 +3801,22 @@ and enables it to start at every reboot.
     service: dovecot
     enabled: yes
     state: started
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Restart Dovecot.
   become: yes
   systemd:
     service: dovecot
     state: restarted
-
+
-
-

8.17. Configure Fetchmail

+
+

8.17. Configure Fetchmail

Core runs a fetchmail for each member of the institute. Individual @@ -3856,7 +3833,7 @@ the username. The template is only used when the record has a

-fetchmail-config
# Permissions on this file may be no greater than 0600.
+fetchmail-config
# Permissions on this file may be no greater than 0600.
 
 set no bouncemail
 set no spambounce
@@ -3867,7 +3844,7 @@ poll {{ front_private_addr }} protocol imap timeout 15
     username {{ item }}
     password "{{ members[item].password_fetchmail }}" fetchall
     ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }}
-
+

@@ -3875,7 +3852,7 @@ The Systemd service description.

-fetchmail-service
[Unit]
+fetchmail-service
[Unit]
 Description=Fetchmail --idle task for {{ item }}.
 AssertPathExists=/home/{{ item }}/.fetchmailrc
 After=openvpn@front.service
@@ -3890,7 +3867,7 @@ The Systemd service description.
 
 [Install]
 WantedBy=default.target
-
+
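Ansible expands the {{ item }} placeholders once per member when it writes each unit file; the substitution can be sketched with sed for a hypothetical member named dick:

```shell
# Render a few lines of the unit template for a made-up username.
item=dick
sed "s/{{ item }}/$item/g" <<'EOF'
[Unit]
Description=Fetchmail --idle task for {{ item }}.
AssertPathExists=/home/{{ item }}/.fetchmailrc
EOF
```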

@@ -3903,7 +3880,7 @@ provided the Core service.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install fetchmail.
   become: yes
   apt: pkg=fetchmail
@@ -3946,7 +3923,7 @@ provided the Core service.
   - members[item].status == 'current'
   - members[item].password_fetchmail is defined
   tags: accounts
-
+

@@ -3955,7 +3932,7 @@ stopped and disabled from restarting at boot, deleted even.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Stop former user fetchmail services.
   become: yes
   systemd:
@@ -3967,7 +3944,7 @@ stopped and disabled from restarting at boot, deleted even.
   - members[item].status != 'current'
   - members[item].password_fetchmail is defined
   tags: accounts
-
+

@@ -3977,7 +3954,7 @@ Otherwise the following task might be appropriate.

-
+

 - name: Delete former user fetchmail services.
   become: yes
   file:
@@ -3988,16 +3965,16 @@ Otherwise the following task might be appropriate.
   - members[item].status != 'current'
   - members[item].password_fetchmail is defined
   tags: accounts
-
+
-
-

8.18. Configure Apache2

+
+

8.18. Configure Apache2

This is the small institute's campus web server. It hosts several web -sites as described in The Web Services. +sites as described in The Web Services.

@@ -4068,12 +4045,12 @@ naming a sub-directory in the member's home directory on Core. The

-apache-userdir-core
UserDir Public/HTML
+apache-userdir-core
UserDir Public/HTML
 <Directory /home/*/Public/HTML/>
         Require all granted
         AllowOverride None
 </Directory>
-
+
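
The effect for a member can be sketched as follows, assuming a
hypothetical member alice on Core: with UserDir Public/HTML,
mod_userdir maps http://live/~alice/ to /home/alice/Public/HTML/.

```shell
# Hypothetical member "alice" publishing a page on Core.  With
# "UserDir Public/HTML", Apache serves ~/Public/HTML/ at
# http://live/~alice/.
mkdir -p ~/Public/HTML
echo '<p>Hello from campus.</p>' > ~/Public/HTML/index.html
```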

@@ -4083,7 +4060,7 @@ redirect, the encryption ciphers and certificates.

-apache-live
<VirtualHost *:80>
+apache-live
<VirtualHost *:80>
         ServerName live
         ServerAlias live.{{ domain_priv }}
         ServerAdmin webmaster@core.{{ domain_priv }}
@@ -4101,7 +4078,7 @@ redirect, the encryption ciphers and certificates.
 
         IncludeOptional /etc/apache2/sites-available/live-vhost.conf
 </VirtualHost>
-
+

@@ -4110,7 +4087,7 @@ familiar.

-apache-test
<VirtualHost *:80>
+apache-test
<VirtualHost *:80>
         ServerName test
         ServerAlias test.{{ domain_priv }}
         ServerAdmin webmaster@core.{{ domain_priv }}
@@ -4128,7 +4105,7 @@ familiar.
 
         IncludeOptional /etc/apache2/sites-available/test-vhost.conf
 </VirtualHost>
-
+

@@ -4139,7 +4116,7 @@ trained staffers, monitored by a revision control system, etc.

-apache-campus
<VirtualHost *:80>
+apache-campus
<VirtualHost *:80>
         ServerName www
         ServerAlias www.{{ domain_priv }}
         ServerAdmin webmaster@core.{{ domain_priv }}
@@ -4159,7 +4136,7 @@ trained staffers, monitored by a revision control system, etc.
 
         IncludeOptional /etc/apache2/sites-available/www-vhost.conf
 </VirtualHost>
-
+

@@ -4167,7 +4144,7 @@ The tasks below install Apache2 and edit its default configuration.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install Apache2.
   become: yes
   apt: pkg=apache2
@@ -4176,9 +4153,21 @@ The tasks below install Apache2 and edit its default configuration.
   become: yes
   apache2_module:
     name: "{{ item }}"
-  loop: [ userdir, cgid ]
+  loop: [ userdir, cgid, ssl ]
   notify: Restart Apache2.
-
+
+- name: Configure Apache2 SSL certificate.
+  become: yes
+  lineinfile:
+    path: /etc/apache2/sites-available/default-ssl.conf
+    regexp: "^([\t ]*){{ item.p }}"
+    line: "\\1{{ item.p }}\t{{ item.v }}"
+    backrefs: yes
+  loop:
+  - { p: SSLCertificateFile, v: "/etc/server.crt" }
+  - { p: SSLCertificateKeyFile, v: "/etc/server.key" }
+  notify: Restart Apache2.
+
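
If both lineinfile edits match, the relevant lines of
/etc/apache2/sites-available/default-ssl.conf should end up reading
roughly as follows (a sketch; the stock Debian file indents these
directives, and the edit preserves that leading whitespace):

```apache
		SSLCertificateFile	/etc/server.crt
		SSLCertificateKeyFile	/etc/server.key
```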

@@ -4188,7 +4177,7 @@ The a2ensite command enables them.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install live web site.
   become: yes
   copy:
@@ -4221,7 +4210,7 @@ The a2ensite command enables them.
   command:
     cmd: a2ensite -q {{ item }}
     creates: /etc/apache2/sites-enabled/{{ item }}.conf
-  loop: [ live, test, www ]
+  loop: [ live, test, www, default-ssl ]
   notify: Restart Apache2.
 
 - name: Enable/Start Apache2.
@@ -4230,22 +4219,22 @@ The a2ensite command enables them.
     service: apache2
     enabled: yes
     state: started
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Restart Apache2.
   become: yes
   systemd:
     service: apache2
     state: restarted
-
+
-
-

8.19. Configure Website Updates

+
+

8.19. Configure Website Updates

Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a @@ -4254,7 +4243,7 @@ Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a

-private/webupdate
#!/bin/bash -e
+private/webupdate
#!/bin/bash -e
 #
 # DO NOT EDIT.  This file was tangled from institute.org.
 
@@ -4264,17 +4253,17 @@
rsync -avz --delete --chmod=g-w         \
        --filter='exclude *~'           \
         --filter='exclude .git*'        \
         ./ {{ domain_name }}:/home/www/
-
+

The following tasks install the webupdate script from private/, and create Monkey's cron job. An example webupdate script is -provided here. +provided here.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: "Install Monkey's webupdate script."
   become: yes
   copy:
@@ -4291,12 +4280,12 @@ provided here.
     job: "[ -d /WWW/live ] && /usr/local/sbin/webupdate"
     name: webupdate
     user: monkey
-
+
-
-

8.20. Configure OpenVPN Connection to Front

+
+

8.20. Configure OpenVPN Connection to Front

Core connects to Front's public VPN to provide members abroad with a @@ -4313,7 +4302,7 @@ called openvpn@front.

-openvpn-core
client
+openvpn-core
client
 dev-type tun
 dev ovpn
 remote {{ front_addr }}
@@ -4326,8 +4315,8 @@ verb 3
 ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
 cert client.crt
 key client.key
-tls-auth ta.key 1
-
+tls-crypt shared.key
+

@@ -4336,7 +4325,7 @@ for Core.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install OpenVPN.
   become: yes
   apt: pkg=openvpn
@@ -4351,8 +4340,8 @@ for Core.
 - name: Install OpenVPN secret.
   become: yes
   copy:
-    src: ../Secret/front-ta.key
-    dest: /etc/openvpn/ta.key
+    src: ../Secret/front-shared.key
+    dest: /etc/openvpn/shared.key
     mode: u=r,g=,o=
   notify: Restart OpenVPN.
 
@@ -4382,22 +4371,22 @@ for Core.
     service: openvpn@front
     state: started
     enabled: yes
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Restart OpenVPN.
   become: yes
   systemd:
     service: openvpn@front
     state: restarted
-
+
-
-

8.21. Configure NAGIOS

+
+

8.21. Configure NAGIOS

Core runs a nagios4 server to monitor "services" on institute hosts. @@ -4417,7 +4406,7 @@ Core and Campus (and thus Gate) machines.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install NAGIOS4.
   become: yes
   apt:
@@ -4465,21 +4454,21 @@ Core and Campus (and thus Gate) machines.
     service: nagios4
     enabled: yes
     state: started
-
+
-roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml

 - name: Reload NAGIOS4.
   become: yes
   systemd:
     service: nagios4
     state: reloaded
-
+
-
-

8.21.1. Configure NAGIOS Monitors for Core

+
+

8.21.1. Configure NAGIOS Monitors for Core

The first block in nagios.cfg specifies monitors for services on @@ -4489,7 +4478,7 @@ used here may specify plugin arguments.

-roles_t/core/templates/nagios.cfg
define host {
+roles_t/core/templates/nagios.cfg
define host {
     use                     linux-server
     host_name               core
     address                 127.0.0.1
@@ -4550,12 +4539,12 @@ used here may specify plugin arguments.
     service_description     HTTP
     check_command           check_http
 }
-
+
-
-

8.21.2. Custom NAGIOS Monitor inst_sensors

+
+

8.21.2. Custom NAGIOS Monitor inst_sensors

The check_sensors plugin is included in the package @@ -4565,7 +4554,7 @@ small institute substitutes a slightly modified version,

-roles_t/core/files/inst_sensors
#!/bin/sh
+roles_t/core/files/inst_sensors
#!/bin/sh
 
 PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
 export PATH
@@ -4641,7 +4630,7 @@ small institute substitutes a slightly modified version,
                 exit $exit
                 ;;
 esac
-
+
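
Like any NAGIOS plugin, inst_sensors reports its result through its
exit status; the script's exit variable holds one of the standard
plugin codes. A minimal sketch of that convention (the numeric codes
are fixed by the NAGIOS plugin API; the status_word helper is
hypothetical, for illustration only):

```shell
# NAGIOS plugin exit-status convention, which inst_sensors follows:
# 0=OK, 1=WARNING, 2=CRITICAL, anything else=UNKNOWN.
# status_word is a hypothetical helper, not part of the plugin.
status_word() {
    case "$1" in
        0) echo "OK" ;;
        1) echo "WARNING" ;;
        2) echo "CRITICAL" ;;
        *) echo "UNKNOWN" ;;
    esac
}
status_word 2    # → CRITICAL
```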

@@ -4650,7 +4639,7 @@ Core.

-roles_t/core/templates/nagios.cfg
+roles_t/core/templates/nagios.cfg

 define command {
     command_name            inst_sensors
     command_line            /usr/local/sbin/inst_sensors
@@ -4662,12 +4651,12 @@ Core.
     service_description     Temperature Sensors
     check_command           inst_sensors
 }
-
+
-
-

8.21.3. Configure NAGIOS Monitors for Remote Hosts

+
+

8.21.3. Configure NAGIOS Monitors for Remote Hosts

The following sections contain code blocks specifying monitors for @@ -4684,12 +4673,12 @@ plugin with pre-defined arguments appropriate for the institute. The commands are defined in code blocks interleaved with the blocks that monitor them. The command blocks are appended to nrpe.cfg and the monitoring blocks to nagios.cfg. The nrpe.cfg file is installed -on each campus host by the campus role's Configure NRPE tasks. +on each campus host by the campus role's Configure NRPE tasks.

-
-

8.21.4. Configure NAGIOS Monitors for Gate

+
+

8.21.4. Configure NAGIOS Monitors for Gate

Define the monitored host, gate. Monitor its response to network @@ -4697,13 +4686,13 @@ pings.

-roles_t/core/templates/nagios.cfg
+roles_t/core/templates/nagios.cfg

 define host {
     use                     linux-server
     host_name               gate
     address                 {{ gate_addr }}
 }
-
+

@@ -4712,8 +4701,8 @@ space on the root partition.

-roles_t/campus/files/nrpe.cfg
command[inst_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
-
+roles_t/campus/files/nrpe.cfg
command[inst_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
+

@@ -4721,14 +4710,14 @@ Monitor inst_root on Gate.

-roles_t/core/templates/nagios.cfg
+roles_t/core/templates/nagios.cfg

 define service {
     use                     generic-service
     host_name               gate
     service_description     Root Partition
     check_command           check_nrpe!inst_root
 }
-
+

@@ -4736,14 +4725,14 @@ Monitor check_load on Gate.

-roles_t/core/templates/nagios.cfg
+roles_t/core/templates/nagios.cfg

 define service {
     use                     generic-service
     host_name               gate
     service_description     Current Load
     check_command           check_nrpe!check_load
 }
-
+

@@ -4751,7 +4740,7 @@ Monitor check_zombie_procs and check_total_procs on Ga

-roles_t/core/templates/nagios.cfg
+roles_t/core/templates/nagios.cfg

 define service {
     use                     generic-service
     host_name               gate
@@ -4765,7 +4754,7 @@ Monitor check_zombie_procs and check_total_procs on Ga
     service_description     Total Processes
     check_command           check_nrpe!check_total_procs
 }
-
+

@@ -4774,8 +4763,8 @@ usage.

-roles_t/campus/files/nrpe.cfg
command[inst_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10%
-
+roles_t/campus/files/nrpe.cfg
command[inst_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10%
+

@@ -4783,14 +4772,14 @@ Monitor inst_swap on Gate.

-roles_t/core/templates/nagios.cfg
+roles_t/core/templates/nagios.cfg

 define service {
     use                     generic-service
     host_name               gate
     service_description     Swap Usage
     check_command           check_nrpe!inst_swap
 }
-
+

@@ -4799,8 +4788,8 @@ CPU temperatures.

-roles_t/campus/files/nrpe.cfg
command[inst_sensors]=/usr/local/sbin/inst_sensors
-
+roles_t/campus/files/nrpe.cfg
command[inst_sensors]=/usr/local/sbin/inst_sensors
+

@@ -4808,52 +4797,52 @@ Monitor inst_sensors on Gate.

-roles_t/core/templates/nagios.cfg
+roles_t/core/templates/nagios.cfg

 define service {
     use                     generic-service
     host_name               gate
     service_description     Temperature Sensors
     check_command           check_nrpe!inst_sensors
 }
-
+
-
-

8.22. Configure Backups

+
+

8.22. Configure Backups

The following task installs the backup script from private/. An -example script is provided in here. +example script is provided in here.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install backup script.
   become: yes
   copy:
     src: ../private/backup
     dest: /usr/local/sbin/backup
     mode: u=rx,g=r,o=
-
+
-
-

8.23. Configure Nextcloud

+
+

8.23. Configure Nextcloud

Core runs Nextcloud to provide a private institute cloud, as described -in The Cloud Service. Installing, restoring (from backup), and +in The Cloud Service. Installing, restoring (from backup), and upgrading Nextcloud are manual processes documented in The Nextcloud Admin Manual, Maintenance. However Ansible can help prepare Core before an install or restore, and perform basic security checks afterwards.

-
-

8.23.1. Prepare Core For Nextcloud

+
+

8.23.1. Prepare Core For Nextcloud

The Ansible code contained herein prepares Core to run Nextcloud by @@ -4862,7 +4851,7 @@ installing a cron job.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install packages required by Nextcloud.
   become: yes
   apt:
@@ -4870,7 +4859,7 @@ installing a cron job.
            php-curl, php-gd, php-gmp, php-json, php-mysql,
            php-mbstring, php-intl, php-imagick, php-xml, php-zip,
            libapache2-mod-php ]
-
+

@@ -4878,13 +4867,13 @@ Next, a number of Apache2 modules are enabled.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Enable Apache2 modules for Nextcloud.
   become: yes
   apache2_module:
     name: "{{ item }}"
   loop: [ rewrite, headers, env, dir, mime ]
-
+

@@ -4896,7 +4885,7 @@ Administration Guide (sub-section -roles_t/core/files/nextcloud.conf

Alias /nextcloud "/var/www/nextcloud/"
+roles_t/core/files/nextcloud.conf
Alias /nextcloud "/var/www/nextcloud/"
 
 <Directory /var/www/nextcloud/>
     Require all granted
@@ -4907,11 +4896,11 @@ Administration Guide (sub-section 
-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Install Nextcloud web configuration.
   become: yes
   copy:
@@ -4925,7 +4914,7 @@ Administration Guide (sub-section 
-roles_t/core/files/nextcloud.conf
+roles_t/core/files/nextcloud.conf

 <Directory /var/www/html/>
     <IfModule mod_rewrite.c>
         RewriteEngine on
@@ -4951,7 +4940,7 @@ virtual host's document root.
             /nextcloud/index.php/.well-known/nodeinfo [R=301,L]
       </IfModule>
 </Directory>
-
+

@@ -4962,12 +4951,12 @@ page. The following portion of nextcloud.conf sets a

-roles_t/core/files/nextcloud.conf
+roles_t/core/files/nextcloud.conf

 <IfModule mod_headers.c>
     Header always set \
         Strict-Transport-Security "max-age=15552000; includeSubDomains"
 </IfModule>
-
+

@@ -4978,14 +4967,14 @@ cloud FUBARs.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Add {{ ansible_user }} to web server group.
   become: yes
   user:
     name: "{{ ansible_user }}"
     append: yes
     groups: www-data
-
+

@@ -4994,7 +4983,7 @@ jobs.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Create Nextcloud cron job.
   become: yes
   cron:
@@ -5004,7 +4993,7 @@ jobs.
       && /usr/bin/php -f /var/www/nextcloud/cron.php
     name: Nextcloud
     user: www-data
-
+

@@ -5015,8 +5004,8 @@ the apg -n 1 -x 12 -m 12 command.

-private/vars.yml
nextcloud_dbpass:           ippAgmaygyob
-
+private/vars.yml
nextcloud_dbpass:           ippAgmaygyob
+
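
Where apg is unavailable, a password of the same shape can be drawn
from /dev/urandom; this is an alternative sketch, not the command the
text assumes:

```shell
# Draw 12 alphanumeric characters from the kernel's random pool
# (an alternative to: apg -n 1 -x 12 -m 12).
head -c 256 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 12; echo
```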

@@ -5025,7 +5014,7 @@ the following task can create Nextcloud's DB.

-
+

 - name: Create Nextcloud DB.
   become: yes
   mysql_db:
@@ -5033,7 +5022,7 @@ the following task can create Nextcloud's DB.
     name: nextcloud
     collation: utf8mb4_general_ci
     encoding: utf8mb4
-
+

@@ -5047,12 +5036,12 @@ created manually. The following task would work (mysql_user supports check_implicit_admin) but the nextcloud database was not created above. Thus both database and user are created manually, with SQL -given in the 8.23.5 subsection below, before occ +given in the 8.23.5 subsection below, before occ maintenance:install can run.

-
+

 - name: Create Nextcloud DB user.
   become: yes
   mysql_user:
@@ -5061,7 +5050,7 @@ maintenance:install can run.
     password: "{{ nextcloud_dbpass }}"
     update_password: always
     priv: 'nextcloud.*:all'
-
+

@@ -5072,7 +5061,7 @@ its document root.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Link /var/www/nextcloud.
   become: yes
   file:
@@ -5081,12 +5070,12 @@ its document root.
     state: link
     force: yes
     follow: no
-
+
-
-

8.23.2. Configure PHP

+
+

8.23.2. Configure PHP

The following tasks set a number of PHP parameters for better @@ -5094,7 +5083,7 @@ performance, as recommended by Nextcloud.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Set PHP memory_limit for Nextcloud.
   become: yes
   lineinfile:
@@ -5125,12 +5114,12 @@ performance, as recommended by Nextcloud.
     creates: /etc/php/8.2/apache2/conf.d/20-{{ item }}.ini
   loop: [ nextcloud, apcu ]
   notify: Restart Apache2.
-
+
-
-

8.23.3. Create /Nextcloud/

+
+

8.23.3. Create /Nextcloud/

The Ansible tasks up to this point have completed Core's LAMP stack @@ -5155,9 +5144,9 @@ test machines.

-
sudo mkdir /Nextcloud
+
sudo mkdir /Nextcloud
 sudo chmod 775 /Nextcloud
-
+

@@ -5167,10 +5156,10 @@ second partition is mounted at /home/.

-
sudo mkdir /home/nextcloud
+
sudo mkdir /home/nextcloud
 sudo chmod 775 /home/nextcloud
 sudo ln -s /home/nextcloud /Nextcloud
-
+

@@ -5179,17 +5168,17 @@ partitioning) second hard drive, /dev/sdb.

-
sudo mkfs -t ext4 /dev/sdb
+
sudo mkfs -t ext4 /dev/sdb
 sudo mkdir /Nextcloud
 echo "/dev/sdb  /Nextcloud  ext4  errors=remount-ro  0  2" \
 | sudo tee -a /etc/fstab >/dev/null
 sudo mount /Nextcloud
-
+
-
-

8.23.4. Restore Nextcloud

+
+

8.23.4. Restore Nextcloud

Restoring Nextcloud in the newly created /Nextcloud/ presumably @@ -5201,8 +5190,8 @@ a successful, complete copy).

-
rsync -a /media/sysadm/Backup/Nextcloud/ /Nextcloud/
-
+
rsync -a /media/sysadm/Backup/Nextcloud/ /Nextcloud/
+

@@ -5213,8 +5202,8 @@ make it so.

-
sudo chown -R www-data.www-data /Nextcloud/nextcloud/
-
+
sudo chown -R www-data.www-data /Nextcloud/nextcloud/
+

@@ -5222,15 +5211,15 @@ The database is restored with the following commands, which assume the last dump was made February 20th 2022 and thus was saved in /Nextcloud/20220220.bak. The database will need to be created first as when installing Nextcloud. The appropriate SQL are -given in Install Nextcloud below. +given in Install Nextcloud below.

-
cd /Nextcloud/
+
cd /Nextcloud/
 sudo mysql --defaults-file=dbbackup.cnf nextcloud < 20220220.bak
 cd nextcloud/
 sudo -u www-data php occ maintenance:data-fingerprint
-
+

@@ -5240,8 +5229,8 @@ Overview web page.

-
-

8.23.5. Install Nextcloud

+
+

8.23.5. Install Nextcloud

Installing Nextcloud in the newly created /Nextcloud/ starts with @@ -5252,12 +5241,12 @@ directories and files.

-
cd /Nextcloud/
+
cd /Nextcloud/
 tar xf ~/Downloads/nextcloud-23.0.0.tar.bz2
 sudo chown -R www-data.www-data nextcloud
 sudo find nextcloud -type d -exec chmod 750 {} \;
 sudo find nextcloud -type f -exec chmod 640 {} \;
-
+

@@ -5271,25 +5260,25 @@ SQL prompt of the sudo mysql command). The shell command then runs

-
create database nextcloud
+
create database nextcloud
     character set utf8mb4
     collate utf8mb4_general_ci;
 grant all on nextcloud.*
     to 'nextclouduser'@'localhost'
     identified by 'ippAgmaygyobwyt5';
 flush privileges;
-
+
-
cd /var/www/nextcloud/
+
cd /var/www/nextcloud/
 sudo -u www-data php occ maintenance:install \
      --data-dir=/var/www/nextcloud/data \
      --database=mysql --database-name=nextcloud \
      --database-user=nextclouduser \
      --database-pass=ippAgmaygyobwyt5 \
      --admin-user=sysadm --admin-pass=PASSWORD
-
+

@@ -5305,15 +5294,14 @@ Core is next checked (or updated) e.g. with ./inst config -n core.

Before calling Nextcloud "configured", the administrator runs ./inst
-config core, surfs to http://core.small.private/nextcloud/,
-logins in as sysadm, and follows any reasonable
-instructions (reasonable for a small organization) on the
+config core, surfs to https://core.small.private/nextcloud/, logs
+in as sysadm, and follows any reasonable instructions on the
Administration > Overview page.

-
-

8.23.6. Afterwards

+
+

8.23.6. Afterwards

Whether Nextcloud was restored or installed, there are a few things @@ -5326,7 +5314,7 @@ afterwards tasks causes them to skip rather than fail.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Test for /Nextcloud/nextcloud/.
   stat:
     path: /Nextcloud/nextcloud
@@ -5334,7 +5322,7 @@ afterwards tasks causes them to skip rather than fail.
 - debug:
     msg: "/Nextcloud/ does not yet exist"
   when: not nextcloud.stat.exists
-
+

@@ -5357,7 +5345,7 @@ Pretty URLs (below).

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Configure Nextcloud trusted domains.
   become: yes
   replace:
@@ -5379,7 +5367,7 @@ Pretty URLs (below).
     insertbefore: "^[)];"
     firstmatch: yes
   when: nextcloud.stat.exists
-
+

@@ -5389,7 +5377,7 @@ enables it.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Configure Nextcloud memcache.
   become: yes
   lineinfile:
@@ -5399,7 +5387,7 @@ enables it.
     insertbefore: "^[)];"
     firstmatch: yes
   when: nextcloud.stat.exists
-
+

@@ -5411,7 +5399,7 @@ and htaccess.RewriteBase.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Configure Nextcloud for Pretty URLs.
   become: yes
   lineinfile:
@@ -5428,7 +5416,7 @@ and htaccess.RewriteBase.
   - regexp: "^ *'htaccess.RewriteBase' *=>"
     line: "  'htaccess.RewriteBase' => '/nextcloud',"
   when: nextcloud.stat.exists
-
+
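
The intended end state in /var/www/nextcloud/config/config.php looks
roughly like the following (the overwrite.cli.url value is an
assumption based on the institute's example private domain; only the
htaccess.RewriteBase line is shown verbatim in the task above):

```php
// Sketch of the resulting config/config.php entries (URL assumed):
'overwrite.cli.url' => 'https://core.small.private/nextcloud',
'htaccess.RewriteBase' => '/nextcloud',
```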

@@ -5437,12 +5425,12 @@ a complaint on the Settings > Administration > Overview web page.

-private/vars.yml
nextcloud_region:           US
-
+private/vars.yml
nextcloud_region:           US
+
-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Configure Nextcloud phone region.
   become: yes
   lineinfile:
@@ -5452,7 +5440,7 @@ a complaint on the Settings > Administration > Overview web page.
     insertbefore: "^[)];"
     firstmatch: yes
   when: nextcloud.stat.exists
-
+

@@ -5465,7 +5453,7 @@ run before the next backup.

-roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml

 - name: Create /Nextcloud/dbbackup.cnf.
   no_log: yes
   become: yes
@@ -5489,33 +5477,33 @@ run before the next backup.
     regexp: password=
     line: password={{ nextcloud_dbpass }}
   when: nextcloud.stat.exists
-
+
-
-

9. The Gate Role

+
+

9. The Gate Role

-The gate role configures the services expected at the campus gate: a
-VPN into the campus network via a campus Wi-Fi access point, and
-Internet access via NAT to the Internet.  The gate machine uses
-three network interfaces (see The Gate Machine) configured with
-persistent names used in its firewall rules.
+The gate role configures the services expected at the campus gate:
+access to the private Ethernet from the untrusted Ethernet (e.g. a
+campus Wi-Fi AP) via VPN, and access to the Internet via NAT.  The
+gate machine uses three network interfaces (see The Gate Machine)
+configured with persistent names used in its firewall rules.

lan
The campus Ethernet.
-
wifi
The campus Wi-Fi AP.
+
wild
The campus IoT (Wi-Fi APs).
isp
The campus ISP.

-Requiring a VPN to access the campus network from the campus Wi-Fi
-bolsters the native Wi-Fi encryption and frustrates non-RYF (Respects
-Your Freedom) wireless equipment.
+Requiring a VPN to access the campus network from the untrusted
+Ethernet (a campus Wi-Fi AP) bolsters the native Wi-Fi encryption and
+frustrates non-RYF (Respects Your Freedom) wireless equipment.

@@ -5524,15 +5512,15 @@ applied first, by which Gate gets a campus machine's DNS and Postfix configurations, etc.

-
-

9.1. Include Particulars

+
+

9.1. Include Particulars

The following should be familiar boilerplate by now.

-roles_t/gate/tasks/main.yml
---
+roles_t/gate/tasks/main.yml
---
 - name: Include public variables.
   include_vars: ../public/vars.yml
   tags: accounts
@@ -5542,17 +5530,17 @@ The following should be familiar boilerplate by now.
 - name: Include members.
   include_vars: "{{ lookup('first_found', membership_rolls) }}"
   tags: accounts
-
+
-
-

9.2. Configure Netplan

+
+

9.2. Configure Netplan

Gate's network interfaces are configured using Netplan and two files. /etc/netplan/60-gate.yaml describes the static interfaces, to the -campus Ethernet and WiFi. /etc/netplan/60-isp.yaml is expected to +campus Ethernet and Wi-Fi. /etc/netplan/60-isp.yaml is expected to be revised more frequently as the campus ISP changes.

@@ -5563,10 +5551,10 @@ example code here.

-private/vars.yml
gate_lan_mac:               08:00:27:f3:16:79
+private/vars.yml
gate_lan_mac:               08:00:27:f3:16:79
+gate_wild_mac:              08:00:27:4a:de:d2
 gate_isp_mac:               08:00:27:3d:42:e5
-gate_wifi_mac:              08:00:27:4a:de:d2
-
+

@@ -5575,7 +5563,7 @@ new network plan.

-roles_t/gate/tasks/main.yml
+roles_t/gate/tasks/main.yml

 - name: Install netplan (gate).
   become: yes
   apt: pkg=netplan.io
@@ -5598,11 +5586,11 @@ new network plan.
             routes:
               - to: {{ public_vpn_net_cidr }}
                 via: {{ core_addr }}
-          wifi:
+          wild:
             match:
-              macaddress: {{ gate_wifi_mac }}
-            addresses: [ {{ gate_wifi_addr_cidr }} ]
-            set-name: wifi
+              macaddress: {{ gate_wild_mac }}
+            addresses: [ {{ gate_wild_addr_cidr }} ]
+            set-name: wild
             dhcp4: false
     dest: /etc/netplan/60-gate.yaml
     mode: u=rw,g=r,o=
@@ -5625,15 +5613,15 @@ new network plan.
     mode: u=rw,g=r,o=
     force: no
   notify: Apply netplan.
-
+
-roles_t/gate/handlers/main.yml
---
+roles_t/gate/handlers/main.yml
---
 - name: Apply netplan.
   become: yes
   command: netplan apply
-
+
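
Rendered with concrete values, the template above produces something
like the following 60-gate.yaml. The MACs are the example values from
private/vars.yml; all addresses below are illustrative assumptions,
not values taken from the text:

```yaml
# Hypothetical /etc/netplan/60-gate.yaml as rendered (addresses are
# illustrative assumptions).
network:
  version: 2
  ethernets:
    lan:
      match:
        macaddress: 08:00:27:f3:16:79
      addresses: [ 192.168.56.2/24 ]      # Gate's private address
      set-name: lan
      dhcp4: false
      routes:
        - to: 10.177.86.0/24              # public_vpn_net_cidr
          via: 192.168.56.1               # core_addr
    wild:
      match:
        macaddress: 08:00:27:4a:de:d2
      addresses: [ 192.168.57.1/24 ]      # gate_wild_addr_cidr
      set-name: wild
      dhcp4: false
```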

@@ -5643,8 +5631,8 @@ campus ISP without interference from Ansible.

-
-

9.3. UFW Rules

+
+

9.3. UFW Rules

Gate uses the Uncomplicated FireWall (UFW) to install its packet
@@ -5662,32 +5650,32 @@ in /etc/ufw/sysctl.conf.
NAT is enabled per the ufw-framework(8) manual page, by introducing
nat table rules in a block at the end of /etc/ufw/before.rules.
They translate packets going to the ISP.  These can come from the
-private Ethernet or campus Wi-Fi.  Hosts on the other institute
-networks (the two VPNs) should not be routing their Internet traffic
-through their VPN.
+private Ethernet or the untrusted Ethernet (campus IoT, including
+Wi-Fi APs).  Hosts on the other institute networks (the two VPNs)
+should not be routing their Internet traffic through their VPN.

-ufw-nat
-A POSTROUTING -s {{   private_net_cidr }} -o isp -j MASQUERADE
--A POSTROUTING -s {{ gate_wifi_net_cidr }} -o isp -j MASQUERADE
-
+ufw-nat
-A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE
+-A POSTROUTING -s {{    wild_net_cidr }} -o isp -j MASQUERADE
+
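
Rendered into /etc/ufw/before.rules, these land in a nat table block
per ufw-framework(8); a sketch with assumed subnet values
(192.168.56.0/24 private, 192.168.57.0/24 wild):

```text
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 192.168.56.0/24 -o isp -j MASQUERADE
-A POSTROUTING -s 192.168.57.0/24 -o isp -j MASQUERADE
COMMIT
```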

Forwarding rules are also needed.  The nat table is a post routing
rule set, so the default routing policy (DENY) will drop packets
before NAT can translate them.  The following rules are added to allow
-packets to be forwarded from the campus Ethernet or Gate-WiFi subnet
+packets to be forwarded from the campus Ethernet or its wild subnet
to an ISP on the isp interface, and back (if related to an outgoing
packet).

-ufw-forward-nat
-A FORWARD -i lan  -o isp  -j ACCEPT
--A FORWARD -i wifi -o isp  -j ACCEPT
+ufw-forward-nat
-A FORWARD -i lan  -o isp  -j ACCEPT
+-A FORWARD -i wild -o isp  -j ACCEPT
 -A FORWARD -i isp  -o lan  {{ ACCEPT_RELATED }}
--A FORWARD -i isp  -o wifi {{ ACCEPT_RELATED }}
-
+-A FORWARD -i isp  -o wild {{ ACCEPT_RELATED }}
+

@@ -5700,7 +5688,6 @@ the following iptables(8) rule specification parameters. -m state --state ESTABLISHED,RELATED -j ACCEPT -

If "the standard iptables-restore syntax" as it is described in the ufw-framework manual page, allows continuation lines, please let us @@ -5716,19 +5703,19 @@ public and campus VPNs is also allowed.

-ufw-forward-private
-A FORWARD -i lan  -o ovpn -j ACCEPT
+ufw-forward-private
-A FORWARD -i lan  -o ovpn -j ACCEPT
 -A FORWARD -i ovpn -o lan  -j ACCEPT
-
+

Note that there are no forwarding rules to allow packets to pass from
-the wifi device to the lan device, just the ovpn device.
+the wild device to the lan device, just the ovpn device.

-
-

9.4. Install UFW

+
+

9.4. Install UFW

The following tasks install the Uncomplicated Firewall (UFW), set its
@@ -5740,16 +5727,15 @@ new-gate, enabling the firewall could break
Ansible's current and future ssh sessions.  For this reason, Ansible
does not enable the firewall.  The administrator must login and
execute the following command after Gate is configured or new gate is
"in position"
-(connected to old Gate's wifi and isp networks).
+(connected to old Gate's wild and isp networks).

 sudo ufw enable
 
-
-roles_t/gate/tasks/main.yml
+roles_t/gate/tasks/main.yml

 - name: Install UFW.
  become: yes
   apt: pkg=ufw
@@ -5785,64 +5771,68 @@ sudo ufw enable
       <<ufw-forward-private>>
       COMMIT
     insertafter: EOF
-
+
-
-

9.5. Configure DHCP For The Gate-WiFi Ethernet

+
+

9.5. Configure DHCP For The Wild Ethernet

-To accommodate commodity Wi-Fi access points without re-configuring
-them, the institute attempts to look like an up-link, an ISP, e.g. a
-cable modem.  Thus it expects the wireless AP to route non-local
-traffic out its WAN Ethernet port, and to get an IP address for the
-WAN port using DHCP.  Thus Gate runs ISC's DHCP daemon configured to
-listen on one network interface, recognize exactly one client host,
-and provide that one client with an IP address and customary network
-parameters (default route, time server, etc.).
+To accommodate commodity Wi-Fi access points, as well as wired IoT
+appliances, without re-configuring them, the institute attempts to
+look like an up-link, an ISP, e.g. a cable modem (aka "router").  It
+expects a wireless AP (or IoT appliance) to route non-local traffic
+out its WAN (or only) Ethernet port, and to get an IP address for that
+port using DHCP.  Thus Gate runs ISC's DHCP daemon configured to
+listen on one network interface, recognize a specific list of clients,
+and provide each with an IP address and customary network parameters
+(default route, time server, etc.), much as was done on Core for the
+private Ethernet.

-Two Ansible variables are needed to configure Gate's DHCP service,
-specifically the sole subnet host: wifi_wan_name is any word
-appropriate for identifying the Wi-Fi AP, and wifi_wan_mac is the
-AP's MAC address.
+The example configuration file, private/gate-dhcpd.conf, unlike
+private/core-dhcpd.conf, does not need RFC3442 (Classless static
+routes).  The wild (wired or wireless) IoT appliances need know
+nothing about the private network(s).  This is just an example file,
+with a MAC address chosen to (probably?) match a VirtualBox test
+machine.  In actual use private/gate-dhcpd.conf refers to a
+replacement file.

-private/vars.yml
-wifi_wan_mac:               94:83:c4:19:7d:57
-wifi_wan_name:              campus-wifi-ap
-
-If Gate is configured with ./abbey config gate and then connected to
-actual networks (i.e. not rebooted), the following command is
-executed.  If a new gate was configured with ./abbey config new-gate
-and not rebooted, the following command would also be executed.
-
-sudo systemctl start isc-dhcp-server
-
-If physically moved or rebooted for some other reason, the above
-command would not be necessary.
+private/gate-dhcpd.conf
+default-lease-time 3600;
+max-lease-time 7200;
+
+ddns-update-style none;
+authoritative;
+log-facility daemon;
+
+subnet 192.168.57.0 netmask 255.255.255.0 {
+  option subnet-mask 255.255.255.0;
+  option broadcast-address 192.168.57.255;
+  option routers 192.168.57.1;
+}
+
+host campus-wifi-ap {
+  hardware ethernet 94:83:c4:19:7d:57;
+  fixed-address 192.168.57.2;
+}
+

Installation and configuration of the DHCP daemon follows.  Note that
-the daemon listens only on the Gate-WiFi network interface.  Also
-note the drop-in Requires dependency, without which the DHCP server
-intermittently fails, finding the wifi interface has no IPv4
-addresses (or perhaps finding no wifi interface at all?).
+the daemon listens only on the wild network interface.  Also note
+the drop-in Requires dependency, without which the DHCP server
+intermittently fails, finding the wild interface has no IPv4
+addresses (or perhaps finding no wild interface at all?).

-roles_t/gate/tasks/main.yml
+roles_t/gate/tasks/main.yml

 - name: Install DHCP server.
   become: yes
   apt: pkg=isc-dhcp-server
@@ -5851,10 +5841,17 @@ addresses (or perhaps finding no wifi interface at all?).
   become: yes
   lineinfile:
     path: /etc/default/isc-dhcp-server
-    line: INTERFACESv4="wifi"
+    line: INTERFACESv4="wild"
     regexp: ^INTERFACESv4=
   notify: Restart DHCP server.
 
+- name: Configure DHCP subnet.
+  become: yes
+  copy:
+    src: ../private/gate-dhcpd.conf
+    dest: /etc/dhcp/dhcpd.conf
+  notify: Restart DHCP server.
+
 - name: Configure DHCP server dependence on interface.
   become: yes
   copy:
@@ -5864,39 +5861,17 @@ addresses (or perhaps finding no wifi interface at all?).
     dest: /etc/systemd/system/isc-dhcp-server.service.d/depend.conf
   notify: Reload Systemd.
 
-- name: Configure DHCP for WiFiAP service.
-  become: yes
-  copy:
-    content: |
-      default-lease-time 3600;
-      max-lease-time 7200;
-      ddns-update-style none;
-      authoritative;
-      log-facility daemon;
-
-      subnet {{ gate_wifi_net }} netmask {{ gate_wifi_net_mask }} {
-        option subnet-mask {{ gate_wifi_net_mask }};
-        option broadcast-address {{ gate_wifi_broadcast }};
-        option routers {{ gate_wifi_addr }};
-      }
-
-      host {{ wifi_wan_name }} {
-        hardware ethernet {{ wifi_wan_mac }};
-        fixed-address {{ wifi_wan_addr }};
-      }
-    dest: /etc/dhcp/dhcpd.conf
-  notify: Restart DHCP server.
-
-- name: Enable DHCP server.
+- name: Enable/Start DHCP server.
   become: yes
   systemd:
     service: isc-dhcp-server
     enabled: yes
-
+    state: started
+
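The depend.conf drop-in mentioned above is not shown in this hunk.  A
minimal sketch of what such a Requires dependency might contain (the
device unit name for the wild interface is an assumption, not taken
from this document):

```
# Hypothetical isc-dhcp-server.service.d/depend.conf sketch.
# Ties the DHCP daemon to the wild interface's device unit so the
# daemon starts only once the interface exists.
[Unit]
Requires=sys-subsystem-net-devices-wild.device
After=sys-subsystem-net-devices-wild.device
```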
-roles_t/gate/handlers/main.yml
+roles_t/gate/handlers/main.yml

 - name: Restart DHCP server.
   become: yes
   systemd:
@@ -5907,12 +5882,28 @@ addresses (or perhaps finding no wifi interface at all?).
   become: yes
   systemd:
     daemon-reload: yes
-
+
+ +

+If Gate is configured with ./abbey config gate and then connected to
+actual networks (i.e. not rebooted), the following command is
+executed.  If a new gate was configured with ./abbey config new-gate
+and not rebooted, the following command would also be executed.
+

+ +
+sudo systemctl start isc-dhcp-server
+
+ +

+If physically moved or rebooted for some other reason, the above
+command would not be necessary.
+

9.6. Install Server Certificate

The (OpenVPN) server on Gate uses an institute certificate (and key)
@@ -5922,7 +5913,7 @@ and Front) do.

-roles_t/gate/tasks/main.yml
+roles_t/gate/tasks/main.yml

 - name: Install server certificate/key.
   become: yes
   copy:
@@ -5935,12 +5926,12 @@ and Front) do.
   - { path: "private/gate.{{ domain_priv }}", typ: key,
       mode: "u=r,g=,o=" }
   notify: Restart OpenVPN.
-
+
9.7. Configure OpenVPN

Gate uses OpenVPN to provide the institute's campus VPN service.  Its
@@ -5951,19 +5942,19 @@ to Front.

-openvpn-gate-routes
push "route {{ private_net_and_mask }}"
+openvpn-gate-routes
push "route {{ private_net_and_mask }}"
 push "route {{ public_vpn_net_and_mask }}"
-
+

The complete OpenVPN configuration for Gate includes a server
option, the pushed routes mentioned above, and the common options
-discussed in The VPN Services.
+discussed in The VPN Services.

-openvpn-gate
server {{ campus_vpn_net_and_mask }}
+openvpn-gate
server {{ campus_vpn_net_and_mask }}
 client-config-dir /etc/openvpn/ccd
 <<openvpn-gate-routes>>
 <<openvpn-dev-mode>>
@@ -5977,8 +5968,8 @@ ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
 cert /etc/server.crt
 key /etc/server.key
 dh dh2048.pem
-tls-auth ta.key 0
-
+tls-crypt shared.key +

@@ -5987,7 +5978,7 @@ configure the OpenVPN server on Gate.

-roles_t/gate/tasks/main.yml
+roles_t/gate/tasks/main.yml

 - name: Install OpenVPN.
   become: yes
   apt: pkg=openvpn
@@ -6023,7 +6014,7 @@ configure the OpenVPN server on Gate.
     mode: u=r,g=,o=
   loop:
   - { src: gate-dh2048.pem, dest: dh2048.pem }
-  - { src: gate-ta.key, dest: ta.key }
+  - { src: gate-shared.key, dest: shared.key }
   notify: Restart OpenVPN.
 
 - name: Configure OpenVPN.
@@ -6034,23 +6025,23 @@ configure the OpenVPN server on Gate.
     dest: /etc/openvpn/server.conf
     mode: u=r,g=r,o=
   notify: Restart OpenVPN.
-
+
-roles_t/gate/handlers/main.yml
+roles_t/gate/handlers/main.yml

 - name: Restart OpenVPN.
   become: yes
   systemd:
     service: openvpn@server
     state: restarted
-
+
10. The Campus Role

The campus role configures generic campus server machines: network
@@ -6067,32 +6058,32 @@ Wireless campus devices can get a key to the campus VPN from the
configured manually.

10.1. Include Particulars

The following should be familiar boilerplate by now.

-roles_t/campus/tasks/main.yml
---
+roles_t/campus/tasks/main.yml
---
 - name: Include public variables.
   include_vars: ../public/vars.yml
 - name: Include private variables.
   include_vars: ../private/vars.yml
-
+
10.2. Configure Hostname

Clients should be using the expected host name.

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Configure hostname.
   become: yes
   copy:
@@ -6108,12 +6099,12 @@ Clients should be using the expected host name.
   become: yes
   command: hostname -F /etc/hostname
   when: inventory_hostname != ansible_hostname
-
+
10.3. Configure Systemd Resolved

Campus machines use the campus name server on Core (or dns.google),
@@ -6121,7 +6112,7 @@ and include the institute's private domain in their search lists.

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Configure resolved.
   become: yes
   lineinfile:
@@ -6135,11 +6126,11 @@ and include the institute's private domain in their search lists.
   notify:
   - Reload Systemd.
   - Restart Systemd resolved.
-
+
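The full lineinfile lines are elided by this hunk, but their net
effect on /etc/systemd/resolved.conf can be sketched as follows (the
name-server address and search domain are assumptions drawn from this
document's test networks and example private domain):

```
# Hypothetical resulting /etc/systemd/resolved.conf (values assumed)
[Resolve]
DNS=192.168.56.1
Domains=small.private
```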
-roles_t/campus/handlers/main.yml
---
+roles_t/campus/handlers/main.yml
---
 - name: Reload Systemd.
   become: yes
   systemd:
@@ -6150,12 +6141,12 @@ and include the institute's private domain in their search lists.
   systemd:
     service: systemd-resolved
     state: restarted
-
+
10.4. Configure Systemd Timesyncd

The institute uses a common time reference throughout the campus.
@@ -6164,29 +6155,29 @@ and file timestamps.

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Configure timesyncd.
   become: yes
   lineinfile:
     path: /etc/systemd/timesyncd.conf
     line: NTP=ntp.{{ domain_priv }}
   notify: Restart systemd-timesyncd.
-
+
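Rendered with the example private domain, the line that lineinfile
ensures in /etc/systemd/timesyncd.conf would read as below (the domain
is assumed from this document's examples; the [Time] section header is
part of the stock file, not added by the task):

```
# Hypothetical rendering of the ensured NTP= line (domain assumed)
[Time]
NTP=ntp.small.private
```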
-roles_t/campus/handlers/main.yml
+roles_t/campus/handlers/main.yml

 - name: Restart systemd-timesyncd.
   become: yes
   systemd:
     service: systemd-timesyncd
     state: restarted
-
+
10.5. Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned
@@ -6195,35 +6186,35 @@ these groups speeds up debugging.

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Add {{ ansible_user }} to system groups.
   become: yes
   user:
     name: "{{ ansible_user }}"
     append: yes
     groups: root,adm
-
+
10.6. Install Unattended Upgrades

The institute prefers to install security updates as soon as possible.

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Install basic software.
   become: yes
   apt: pkg=unattended-upgrades
-
+
10.7. Configure Postfix on Campus

The Postfix settings used by the campus include message size, queue
@@ -6240,7 +6231,7 @@ tasks below.

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Install Postfix.
   become: yes
   apt: pkg=postfix
@@ -6270,22 +6261,22 @@ tasks below.
     service: postfix
     enabled: yes
     state: started
-
+
-roles_t/campus/handlers/main.yml
+roles_t/campus/handlers/main.yml

 - name: Restart Postfix.
   become: yes
   systemd:
     service: postfix
     state: restarted
-
+
10.8. Set Domain Name

The host's fully qualified (private) domain name (FQDN) is set by an
@@ -6295,7 +6286,7 @@ manpage.)

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Set domain name.
   become: yes
   vars:
@@ -6304,22 +6295,22 @@ manpage.)
     path: /etc/hosts
     regexp: "^127.0.1.1[        ].*"
     line: "127.0.1.1    {{ name }}.{{ domain_priv }} {{ name }}"
-
+
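The regexp/line pair above can be exercised outside Ansible; a
stand-alone sketch with sed (the host name campus and the domain
small.private are assumptions echoing this document's examples):

```shell
# Replace any existing 127.0.1.1 line with the FQDN form, as the
# lineinfile task does; other lines pass through untouched.
printf '127.0.0.1 localhost\n127.0.1.1 campus\n' |
  sed -E 's/^127\.0\.1\.1[[:space:]].*/127.0.1.1 campus.small.private campus/'
```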
10.9. Configure NRPE

Each campus host runs an NRPE (a NAGIOS Remote Plugin Executor)
server so that the NAGIOS4 server on Core can collect statistics.  The
-NAGIOS service is discussed in the Configure NRPE section of The Core
+NAGIOS service is discussed in the Configure NRPE section of The Core
Role.

-roles_t/campus/tasks/main.yml
+roles_t/campus/tasks/main.yml

 - name: Install NRPE.
   become: yes
   apt:
@@ -6353,23 +6344,23 @@ Role.
     service: nagios-nrpe-server
     enabled: yes
     state: started
-
+
-roles_t/campus/handlers/main.yml
+roles_t/campus/handlers/main.yml

 - name: Reload NRPE server.
   become: yes
   systemd:
     service: nagios-nrpe-server
     state: reloaded
-
+
11. The Ansible Configuration

The small institute uses Ansible to maintain the configuration of its
@@ -6378,7 +6369,7 @@ runs playbook site.yml to apply the appro
role(s) to each host.  Examples of these files are included here, and
are used to test the roles.  The example configuration applies the
institutional roles to VirtualBox machines prepared according to
-chapter Testing.
+chapter Testing.

@@ -6391,13 +6382,13 @@ while changes to the institute's particulars are committed to a
separate revision history.

11.1. ansible.cfg

The Ansible configuration file ansible.cfg contains just a handful
of settings, some included just to create a test jig as described in
-Testing.
+Testing.

    @@ -6406,7 +6397,7 @@ of settings, some included just to create a test jig as described in that Python 3 can be expected on all institute hosts.
• vault_password_file is set to suppress prompts for the vault
password.  The institute keeps its vault password in Secret/ (as
-described in Keys) and thus sets this parameter to
+described in Keys) and thus sets this parameter to
Secret/vault-password.
  • inventory is set to avoid specifying it on the command line.
• roles_path is set to the recently tangled roles files in
@@ -6414,17 +6405,17 @@
-ansible.cfg
[defaults]
+ansible.cfg
[defaults]
 interpreter_python=/usr/bin/python3
 vault_password_file=Secret/vault-password
 inventory=hosts
 roles_path=roles_t
-
+
11.2. hosts

The Ansible inventory file hosts describes all of the institute's
@@ -6436,7 +6427,7 @@ describes three test servers named front, core and

-hosts
all:
+hosts
all:
   vars:
     ansible_user: sysadm
     ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
@@ -6454,7 +6445,7 @@ describes three test servers named front, core and 
+

@@ -6465,7 +6456,7 @@ line.

-Secret/become.yml
become_front: !vault |
+Secret/become.yml
become_front: !vault |
         $ANSIBLE_VAULT;1.1;AES256
         3563626131333733666466393166323135383838666338666131336335326
         3656437663032653333623461633866653462636664623938356563306264
@@ -6489,7 +6480,7 @@ become_gate: !vault |
         0636537633139366333373933396637633034383132373064393939363231
         636264323132370a393135666335303361326330623438613630333638393
         1303632663738306634
-
+

@@ -6501,8 +6492,8 @@ the Secret/vault-password file.
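Vaulted values like become_front above can be regenerated with
ansible-vault; a hedged sketch (the plaintext shown is a stand-in,
not an actual institute password):

```
ansible-vault encrypt_string --vault-password-file Secret/vault-password \
    --name become_front 'example-password'
```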

11.3. playbooks/site.yml

The example playbooks/site.yml playbook (below) applies the
@@ -6511,7 +6502,7 @@ the example inventory: hosts.

-playbooks/site.yml
---
+playbooks/site.yml
---
 - name: Configure All
   hosts: all
   roles: [ all ]
@@ -6531,12 +6522,12 @@ the example inventory: hosts.
 - name: Configure Campus
   hosts: campus
   roles: [ campus ]
-
+
11.4. Secret/vault-password

As already mentioned, the small institute keeps its Ansible vault
@@ -6548,17 +6539,17 @@ example password matches the example encryptions above.

-Secret/vault-password
alitysortstagess
-
+Secret/vault-password
alitysortstagess
+
11.5. Creating A Working Ansible Configuration

A working Ansible configuration can be "tangled" from this document to
-produce the test configuration described in the Testing chapter.  The
+produce the test configuration described in the Testing chapter.  The
tangling is done by Emacs's org-babel-tangle function and has
already been performed with the resulting tangle included in the
distribution with this document.
@@ -6574,13 +6565,13 @@ and add an Institute/ submodule.

-
cd
+
cd
 mkdir network
 cd network
 git init
 git submodule add git://birchwood-abbey.net/~puck/Institute
 git add Institute
-
+

@@ -6601,7 +6592,7 @@ would be copied, with appropriate changes, into new subdirectories
public/ and private/.

• ~/net/Secret would be a symbolic link to the (auto-mounted?)
location of the administrator's encrypted USB drive, as described in
-section Keys.
+section Keys.

    @@ -6621,9 +6612,9 @@ the example ~/net/Institute/private/.

    -
    cp -r Institute/roles_t Institute/roles
    +
    cp -r Institute/roles_t Institute/roles
     ( cd playbooks; ln -s ../Institute/playbooks/* . )
    -
    +

    @@ -6632,13 +6623,13 @@ super-project's directory.

    -
    ./Institute/inst config -n
    -
    +
    ./Institute/inst config -n
    +
11.6. Maintaining A Working Ansible Configuration

The Ansible roles currently tangle into the roles_t/ directory to
@@ -6657,8 +6648,8 @@ their way back to the code block in this document.

12. The Institute Commands

The institute's administrator uses a convenience script to reliably
@@ -6668,8 +6659,8 @@ Ansible configuration.  The Ansible commands it executes are expected
to get their defaults from ./ansible.cfg.

12.1. Sub-command Blocks

The code blocks in this chapter tangle into the inst script.  Each
@@ -6683,18 +6674,18 @@ The first code block is the header of the ./inst script.

    -inst
    #!/usr/bin/perl -w
    +inst
    #!/usr/bin/perl -w
     #
     # DO NOT EDIT.  This file was tangled from an institute.org file.
     
     use strict;
     use IO::File;
    -
    +
12.2. Sanity Check

The next code block does not implement a sub-command; it implements
@@ -6706,7 +6697,7 @@ permissions.  It probes past the Secret/ mount poin

    -inst
    +inst
    
     sub note_missing_file_p ($);
     sub note_missing_directory_p ($);
     
    @@ -6750,12 +6741,12 @@ permissions.  It probes past the Secret/ mount poin
         return 0;
       }
     }
    -
    +
12.3. Importing Ansible Variables

To ensure that Ansible and ./inst are sympatico vis-a-vi certain
@@ -6767,7 +6758,7 @@ them.

    -inst
    +inst
    
     sub mysystem (@) {
       my $line = join (" ", @_);
       print "$line\n";
    @@ -6777,9 +6768,9 @@ them.
     
     mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null";
     
    -our ($domain_name, $domain_priv, $front_addr, $gate_wifi_addr);
    +our ($domain_name, $domain_priv, $front_addr, $gate_wild_addr);
     do "./private/vars.pl";
    -
    +

    @@ -6787,7 +6778,7 @@ The playbook that updates private/vars.pl:

    -playbooks/check-inst-vars.yml
    - hosts: localhost
    +playbooks/check-inst-vars.yml
    - hosts: localhost
       gather_facts: no
       tasks:
       - include_vars: ../public/vars.yml
    @@ -6797,15 +6788,15 @@ The playbook that updates private/vars.pl:
             $domain_name = "{{ domain_name }}";
             $domain_priv = "{{ domain_priv }}";
             $front_addr = "{{ front_addr }}";
    -        $gate_wifi_addr = "{{ gate_wifi_addr }}";
    +        $gate_wild_addr = "{{ gate_wild_addr }}";
           dest: ../private/vars.pl
           mode: u=rw,g=,o=
    -
    +
12.4. The CA Command

The next code block implements the CA sub-command, which creates a
@@ -6817,8 +6808,8 @@ here.

    -public/vars.yml
    full_name: Small Institute LLC
    -
    +public/vars.yml
    full_name: Small Institute LLC
    +

@@ -6831,7 +6822,6 @@ symbolic link to a volume's automount location.

ln -s /media/sysadm/ADE7-F866/ Secret
-

The Secret/CA/ directory is prepared using Easy RSA's make-cadir
command.  The Secret/CA/vars file thus created is edited to contain
@@ -6844,7 +6834,6 @@ sudo apt install easy-rsa

./inst CA
-

Running ./inst CA creates the new CA and keys.  The command prompts
for the Common Name (or several levels of Organizational names) of the
@@ -6855,7 +6844,7 @@ config.

    -inst
    +inst
    
     if (defined $ARGV[0] && $ARGV[0] eq "CA") {
       die "usage: $0 CA" if @ARGV != 1;
       die "Secret/CA/easyrsa: not an executable\n"
    @@ -6874,8 +6863,8 @@ config.
       mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass";
       mysystem "cd Secret/CA; ./easyrsa build-client-full core nopass";
       umask 077;
    -  mysystem "openvpn --genkey secret Secret/front-ta.key";
    -  mysystem "openvpn --genkey secret Secret/gate-ta.key";
    +  mysystem "openvpn --genkey secret Secret/front-shared.key";
    +  mysystem "openvpn --genkey secret Secret/gate-shared.key";
       mysystem "openssl dhparam -out Secret/front-dh2048.pem 2048";
       mysystem "openssl dhparam -out Secret/gate-dh2048.pem 2048";
     
    @@ -6909,17 +6898,17 @@ config.
       mysystem "ssh-keygen -A -f Secret/ssh_front -C $dom";
       exit;
     }
    -
    +
12.5. The Config Command

The next code block implements the config sub-command, which
provisions network services by running the site.yml playbook
-described in playbooks/site.yml.  It recognizes an optional -n
+described in playbooks/site.yml.  It recognizes an optional -n
flag indicating that the service configurations should just be
checked.  Given an optional host name, it provisions (or checks) just
the named host.
@@ -6935,9 +6924,8 @@ Example command lines:

./inst config -n HOST
-

    -inst
    +inst
    
     if (defined $ARGV[0] && $ARGV[0] eq "config") {
       die "Secret/CA/easyrsa: not executable\n"
         if ! -x "Secret/CA/easyrsa";
    @@ -6961,16 +6949,16 @@ Example command lines:
       mysystem $cmd;
       exit;
     }
    -
    +
12.6. Account Management

For general information about members and their Unix accounts, see
-Accounts.  The account management sub-commands maintain a mapping
+Accounts.  The account management sub-commands maintain a mapping
associating member "usernames" (Unix account names) with their
records.  The mapping is stored among other things in
private/members.yml as the value associated with the key members.
@@ -6995,7 +6983,7 @@ Front and Core.  (The example hashes are truncated versions.)

    -private/members.yml
    ---
    +private/members.yml
    ---
     members:
       dick:
         status: current
    @@ -7018,7 +7006,7 @@ usernames:
     - dick
     revoked:
     - dick-phone
    -
    +

    @@ -7030,11 +7018,11 @@ is used instead.

    -private/members-empty.yml
    ---
    +private/members-empty.yml
    ---
     members:
     usernames: []
     revoked: []
    -
    +

    @@ -7043,10 +7031,10 @@ Both locations go on the membership_rolls variable used by the

    -private/vars.yml
    membership_rolls:
    +private/vars.yml
    membership_rolls:
     - "../private/members.yml"
     - "../private/members-empty.yml"
    -
    +

    @@ -7056,7 +7044,7 @@ read from the file. The dump subroutine is another story (below).

    -inst
    +inst
    
     use YAML::XS qw(LoadFile DumpFile);
     
     sub read_members_yaml () {
    @@ -7115,7 +7103,7 @@ read from the file.  The dump subroutine is another story (below).
       }
       close $O or die "Could not close $pathname: $!\n";
     }
    -
    +

    @@ -7132,7 +7120,7 @@ each record.

    -inst
    +inst
    
     sub print_member ($$) {
       my ($out, $member) = @_;
       print $out "  ", $member->{"username"}, ":\n";
    @@ -7165,12 +7153,12 @@ each record.
         print $out "    $key: ", $member->{$key}, "\n";
       }
     }
    -
    +
12.7. The New Command

The next code block implements the new sub-command.  It adds a new
@@ -7184,7 +7172,7 @@ initial, generated password.

    -inst
    +inst
    
     sub valid_username (@);
     sub shell_escape ($);
     sub strip_vault ($);
    @@ -7243,11 +7231,11 @@ initial, generated password.
       my @lines = split /^ */m, $string;
       return (join "", @lines[1..$#lines]);
     }
    -
    +
    -playbooks/nextcloud-new.yml
    - hosts: core
    +playbooks/nextcloud-new.yml
    - hosts: core
       no_log: yes
       tasks:
       - name: Run occ user:add.
    @@ -7270,12 +7258,12 @@ initial, generated password.
         args:
           chdir: /var/www/nextcloud/
           executable: /usr/bin/expect
    -
    +
12.8. The Pass Command

The institute's passwd command on Core securely emails root with a
@@ -7289,8 +7277,8 @@ Ansible site.yml playbook to update the
message is sent to member@core.

12.8.1. Less Aggressive passwd.

The next code block implements the less aggressive passwd command.
@@ -7304,7 +7292,7 @@ in Secret/.

    -roles_t/core/templates/passwd
    #!/bin/perl -wT
    +roles_t/core/templates/passwd
    #!/bin/perl -wT
     
     use strict;
     
    @@ -7382,12 +7370,12 @@ print "
     Your request was sent to Root.  PLEASE WAIT for email confirmation
     that the change was completed.\n";
     exit;
    -
    +
12.8.2. Less Aggressive Pass Command

The following code block implements the ./inst pass command, used by
@@ -7396,7 +7384,7 @@ the administrator to update private/members.yml before running

    -inst
    +inst
    
     use MIME::Base64;
     
     if (defined $ARGV[0] && $ARGV[0] eq "pass") {
    @@ -7446,7 +7434,7 @@ the administrator to update private/members.yml before running
       close $O or die "pipe to sendmail failed: $!\n";
       exit;
     }
    -
    +

    @@ -7455,7 +7443,7 @@ users:resetpassword command using expect(1).

    -playbooks/nextcloud-pass.yml
    - hosts: core
    +playbooks/nextcloud-pass.yml
    - hosts: core
       no_log: yes
       tasks:
       - name: Run occ user:resetpassword.
    @@ -7480,12 +7468,12 @@ users:resetpassword command using expect(1).
         args:
           chdir: /var/www/nextcloud/
           executable: /usr/bin/expect
    -
    +
12.8.3. Installing the Less Aggressive passwd

The following Ansible tasks install the less aggressive passwd
@@ -7498,7 +7486,7 @@ configuration so that the email to root can be encrypted.

    -roles_t/core/tasks/main.yml
    +roles_t/core/tasks/main.yml
    
     - name: Install institute passwd command.
       become: yes
       template:
    @@ -7540,28 +7528,28 @@ configuration so that the email to root can be encrypted.
         dest: ~/.gnupg-root-pub.pem
         mode: u=r,g=r,o=r
       notify: Import root PGP key.
    -
    +
    -roles_t/core/handlers/main.yml
    +roles_t/core/handlers/main.yml
    
     - name: Import root PGP key.
       become: no
       command: gpg --import ~/.gnupg-root-pub.pem
    -
    +
12.9. The Old Command

    The old command disables a member's accounts and clients.

    -inst
    +inst
    
     if (defined $ARGV[0] && $ARGV[0] eq "old") {
       my $user = valid_username (@ARGV);
       my $yaml = read_members_yaml ();
    @@ -7579,11 +7567,11 @@ The old command disables a member's accounts and clients.
                 "-t accounts playbooks/site.yml");
       exit;
     }
    -
    +
    -playbooks/nextcloud-old.yml
    - hosts: core
    +playbooks/nextcloud-old.yml
    - hosts: core
       tasks:
       - name: Run occ user:disable.
         shell: |
    @@ -7595,12 +7583,12 @@ The old command disables a member's accounts and clients.
         args:
           chdir: /var/www/nextcloud/
           executable: /usr/bin/expect
    -
    +
12.10. The Client Command

The client command creates an OpenVPN configuration (.ovpn) file
@@ -7647,14 +7635,14 @@ connection is restarted.

    -openvpn-up
    script-security 2
    +openvpn-up
    script-security 2
     up /etc/openvpn/update-systemd-resolved
     up-restart
    -
    +
    -inst
    sub write_template ($$$$$$$$$);
    +inst
    sub write_template ($$$$$$$$$);
     sub read_file ($);
     sub add_client ($$$);
     
    @@ -7715,13 +7703,13 @@ up-restart
     <<openvpn-up>>";
     
       if ($type ne "campus") {
    -    my $TA = read_file "Secret/front-ta.key";
    -    write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $front_addr,
    +    my $TC = read_file "Secret/front-shared.key";
    +    write_template ($DEV,$UP,$CA,$CRT,$KEY,$TC, $front_addr,
                         $domain_name, "public.ovpn");
         print "Wrote public VPN configuration to public.ovpn.\n";
       }
    -  my $TA = read_file "Secret/gate-ta.key";
    -  write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $gate_wifi_addr,
    +  my $TC = read_file "Secret/gate-shared.key";
    +  write_template ($DEV,$UP,$CA,$CRT,$KEY,$TC, $gate_wild_addr,
                       "gate.$domain_priv", "campus.ovpn");
       print "Wrote campus VPN configuration to campus.ovpn.\n";
     
    @@ -7729,7 +7717,7 @@ up-restart
     }
     
     sub write_template ($$$$$$$$$) {
    -  my ($DEV,$UP,$CA,$CRT,$KEY,$TA,$ADDR,$NAME,$FILE) = @_;
    +  my ($DEV,$UP,$CA,$CRT,$KEY,$TC,$ADDR,$NAME,$FILE) = @_;
       my $O = new IO::File;
       open ($O, ">$FILE.tmp") or die "Could not open $FILE.tmp: $!\n";
       print $O "client
    @@ -7746,7 +7734,7 @@ up-restart
     <ca>\n$CA</ca>
     <cert>\n$CRT</cert>
     <key>\n$KEY</key>
    -<tls-auth>\n$TA</tls-auth>\n";
    +<tls-crypt>\n$TC</tls-crypt>\n";
       close $O or die "Could not close $FILE.tmp: $!\n";
       rename ("$FILE.tmp", $FILE)
         or die "Could not rename $FILE.tmp: $!\n";
    @@ -7761,12 +7749,12 @@ up-restart
       close $I or die "$path: could not close: $!\n";
       return $c;
     }
    -
    +
12.11. Institute Command Help

This should be the last block tangled into the inst script.  It
@@ -7775,33 +7763,34 @@ above.

    -inst
    +inst
    
     die "usage: $0 [CA|config|new|pass|old|client] ...\n";
    -
    +
13. Testing

The example files in this document, ansible.cfg and hosts as well
as those in public/ and private/, along with the matching EasyRSA
certificate authority and GnuPG key-ring in Secret/ (included in the
distribution), can be used to configure three VirtualBox VMs
-simulating Core, Gate and Front in test networks simulating a campus
-Ethernet, Wi-Fi, ISP, and a commercial cloud.  With the test networks
-up and running, a simulated member's notebook can be created and
-alternately attached to the simulated campus Wi-Fi or the Internet (as
-though abroad).  The administrator's notebook in this simulation is
-the VirtualBox host.
+simulating Core, Gate and Front in test networks simulating a private
+Ethernet, an untrusted Ethernet, the campus ISP, and a commercial
+cloud.  With the test networks up and running, a simulated member's
+notebook can be created and alternately attached to the untrusted
+Ethernet (as though it were on the campus Wi-Fi) or the Internet (as
+though it were abroad).  The administrator's notebook in this
+simulation is the VirtualBox host.

The next two sections list the steps taken to create the simulated
Core, Gate and Front machines, and connect them to their networks.
-The process is similar to that described in The (Actual) Hardware, but
+The process is similar to that described in The (Actual) Hardware, but
is covered in detail here where the VirtualBox hypervisor can be
assumed and exact command lines can be given (and copied during
re-testing).  The remaining sections describe the manual testing
@@ -7817,8 +7806,8 @@ HTML version of the latest revision can be found on the official web
site at https://www.virtualbox.org/manual/UserManual.html.

13.1. The Test Networks

The networks used in the test:
@@ -7835,8 +7824,8 @@ private Ethernet switch.  It has no services, no DHCP, just the host
machine at 192.168.56.10 pretending to be the administrator's
notebook.
-vboxnet1
-Another Host-only network, simulating the tiny
-Ethernet between Gate and the campus Wi-Fi access point.  It has no
+vboxnet1
+Another Host-only network, simulating the untrusted
+Ethernet between Gate and the campus IoT (and Wi-Fi APs).  It has no
services, no DHCP, just the host at 192.168.57.2, simulating the
NATed Wi-Fi network.
    @@ -7856,7 +7845,7 @@ following VBoxManage commands.

    -
    VBoxManage natnetwork add --netname premises \
    +
    VBoxManage natnetwork add --netname premises \
                               --network 192.168.15.0/24 \
                               --enable --dhcp on --ipv6 off
     VBoxManage natnetwork start --netname premises
    @@ -7865,7 +7854,7 @@ VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
     VBoxManage dhcpserver modify --interface=vboxnet0 --disable
     VBoxManage hostonlyif create # vboxnet1
     VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2
    -
    +

    @@ -7882,15 +7871,15 @@ on the private 192.168.15.0/24 network.

13.2. The Test Machines

The virtual machines are created by VBoxManage command lines in the
following sub-sections.  They each start with a recent Debian release
(e.g. debian-12.5.0-amd64-netinst.iso) in their simulated DVD
-drives.  As in The Hardware preparation process being simulated, a few
-additional software packages are installed.  Unlike in The Hardware
+drives.  As in The Hardware preparation process being simulated, a few
+additional software packages are installed.  Unlike in The Hardware
preparation, machines are moved to their final networks and then
remote access is authorized.  (They are not accessible via ssh on the
VirtualBox NAT network where they first boot.)
@@ -7902,8 +7891,8 @@ privileged accounts on the virtual machines, they are prepared for
configuration by Ansible.

13.2.1. A Test Machine

The following shell function contains most of the VBoxManage
@@ -7916,7 +7905,7 @@ taken from the ISO shell variable.

    -
    function create_vm {
    +
    function create_vm {
       VBoxManage createvm --name $NAME --ostype Debian_64 --register
       VBoxManage modifyvm $NAME --memory $RAM
       VBoxManage createhd --size $DISK \
    @@ -7932,7 +7921,7 @@ taken from the ISO shell variable.
           --port 0 --device 0 --type dvddrive --medium $ISO
       VBoxManage modifyvm $NAME --boot1 dvd --boot2 disk
     }
    -
    +
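
    create_vm takes its "parameters" from shell variables, so a forgotten
    NAME or ISO produces confusing VBoxManage errors. A small guard
    function (a sketch; the name check_vm_params is made up) can fail
    fast instead:

```shell
# Hypothetical guard: verify the shell variables create_vm reads are
# all set and non-empty before any VBoxManage command runs.
check_vm_params() {
  for v in NAME RAM DISK ISO; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "create_vm: $v is not set" >&2
      return 1
    fi
  done
}
```

    One would then run check_vm_params && create_vm.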

    @@ -7948,12 +7937,12 @@ CDROM drive.

    -
    NAME=front
    +
    NAME=front
     RAM=512
     DISK=4096
     ISO=~/Downloads/debian-12.5.0-amd64-netinst.iso
     create_vm
    -
    +

    @@ -8034,8 +8023,8 @@ preparation (below).

    -
    -

    13.2.2. The Test Front Machine

    +
    +

    13.2.2. The Test Front Machine

    The front machine is created with 512MiB of RAM, 4GiB of disk, and @@ -8048,13 +8037,13 @@ After Debian is installed (as detailed above) front is shut down an its primary network interface moved to the simulated Internet, the NAT network premises. front also gets a second network interface, on the host-only network vboxnet1, to make it directly accessible to -the administrator's notebook (as described in The Test Networks). +the administrator's notebook (as described in The Test Networks).

    -
    VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 premises
    +
    VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 premises
     VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet1
    -
    +

    @@ -8065,10 +8054,10 @@ address using a drop-in configuration file:

    -eth1
    auto enp0s8
    +eth1
    auto enp0s8
     iface enp0s8 inet static
         address 192.168.57.3/24
    -
    +
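
    The drop-in above can be generated with a heredoc. The sketch below
    writes to a temporary directory so it is harmless to run anywhere;
    the real destination directory (presumably
    /etc/network/interfaces.d/, judging by the eth1 filename) is an
    assumption:

```shell
# Sketch: write the static-address drop-in shown above.  The temporary
# directory stands in for /etc/network/interfaces.d/ (an assumption).
dir=$(mktemp -d)
cat > "$dir/eth1" <<'EOF'
auto enp0s8
iface enp0s8 inet static
    address 192.168.57.3/24
EOF
```

    On front itself the same heredoc would be piped through sudo tee to
    the real drop-in path.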

    @@ -8080,13 +8069,13 @@ Note that there is no pre-provisioning for front, which is never deployed on a frontier, always in the cloud. Additional Debian packages are assumed to be readily available. Thus Ansible installs them as necessary, but first the administrator authorizes remote -access by following the instructions in the final section: Ansible +access by following the instructions in the final section: Ansible Test Authorization.

    -
    -

    13.2.3. The Test Gate Machine

    +
    +

    13.2.3. The Test Gate Machine

    The gate machine is created with the same amount of RAM and disk as @@ -8095,21 +8084,21 @@ not changed, gate can be created with two commands.

    -
    NAME=gate
    +
    NAME=gate
     create_vm
    -
    +

    -After Debian is installed (as detailed in A Test Machine) and the +After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator logs in and installs several additional software packages.

    -
    sudo apt install netplan.io systemd-resolved unattended-upgrades \
    +
    sudo apt install netplan.io systemd-resolved unattended-upgrades \
                      ufw isc-dhcp-server postfix openvpn
    -
    +

    @@ -8125,19 +8114,19 @@ defaults, listed below, are fine.

    gate can then move to the campus. It is shut down before the following VBoxManage commands are executed. The commands disconnect -the primary Ethernet interface from premises and connected it to -vboxnet0. They also create two new interfaces, isp and wifi, +the primary Ethernet interface from premises and connect it to +vboxnet0. They also create two new interfaces, isp and wild, connected to the simulated ISP and campus wireless access point.

    -
    VBoxManage modifyvm gate --mac-address1=080027f31679
    +
    VBoxManage modifyvm gate --mac-address1=080027f31679
     VBoxManage modifyvm gate --nic1 hostonly --hostonlyadapter1 vboxnet0
     VBoxManage modifyvm gate --mac-address2=0800273d42e5
     VBoxManage modifyvm gate --nic2 natnetwork --natnetwork2 premises
     VBoxManage modifyvm gate --mac-address3=0800274aded2
     VBoxManage modifyvm gate --nic3 hostonly --hostonlyadapter3 vboxnet1
    -
    +

    @@ -8183,8 +8172,8 @@ values of the MAC address variables in this table.

    - - + +
    enp0s9  vboxnet1  -campus wireless -gate_wifi_mac  +campus IoT +gate_wild_mac
    @@ -8196,18 +8185,18 @@ Ethernet interface is temporarily configured with an IP address.

    -
    sudo ip address add 192.168.56.2/24 dev enp0s3
    -
    +
    sudo ip address add 192.168.56.2/24 dev enp0s3
    +

    Finally, the administrator authorizes remote access by following the -instructions in the final section: Ansible Test Authorization. +instructions in the final section: Ansible Test Authorization.

    -
    -

    13.2.4. The Test Core Machine

    +
    +

    13.2.4. The Test Core Machine

    The core machine is created with 1GiB of RAM and 6GiB of disk. @@ -8216,21 +8205,21 @@ created with following commands.

    -
    NAME=core
    +
    NAME=core
     RAM=2048
     DISK=6144
     create_vm
    -
    +

    -After Debian is installed (as detailed in A Test Machine) and the +After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator logs in and installs several additional software packages.

    -
    sudo apt install netplan.io systemd-resolved unattended-upgrades \
    +
    sudo apt install netplan.io systemd-resolved unattended-upgrades \
                      ntp isc-dhcp-server bind9 apache2 openvpn \
                      postfix dovecot-imapd fetchmail expect rsync \
                      gnupg
    @@ -8239,7 +8228,7 @@ sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
                      nagios-nrpe-plugin
    -
    +

    @@ -8252,6 +8241,12 @@ defaults, listed below, are fine.

  • System mail name: core.small.private
  • +

    +And domain name resolution may be broken after installing +systemd-resolved. A reboot is often needed after the first apt +install command above. +

    +

    Before shutting down, the name of the primary Ethernet interface should be compared to the example variable setting in @@ -8267,8 +8262,8 @@ Ethernet.

    -
    VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
    -
    +
    VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
    +

    @@ -8278,18 +8273,18 @@ Netplan soon.)

    -
    sudo ip address add 192.168.56.1/24 dev enp0s3
    -
    +
    sudo ip address add 192.168.56.1/24 dev enp0s3
    +

    Finally, the administrator authorizes remote access by following the -instructions in the next section: Ansible Test Authorization. +instructions in the next section: Ansible Test Authorization.

    -
    -

    13.2.5. Ansible Test Authorization

    +
    +

    13.2.5. Ansible Test Authorization

    To authorize Ansible's access to the three test machines, they must @@ -8299,11 +8294,11 @@ key to each test machine.

    -
    SRC=Secret/ssh_admin/id_rsa.pub
    +
    SRC=Secret/ssh_admin/id_rsa.pub
     scp $SRC sysadm@192.168.57.3:admin_key # Front
     scp $SRC sysadm@192.168.56.2:admin_key # Gate
     scp $SRC sysadm@192.168.56.1:admin_key # Core
    -
    +
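
    The three copies differ only in the destination address, so a loop
    covers them. The following dry-run sketch just echoes the commands
    (so it runs without the test machines); dropping the echo performs
    the copies.

```shell
# Dry-run sketch of the three scp commands above; remove `echo` to
# actually copy the administrator's public key.
SRC=Secret/ssh_admin/id_rsa.pub
for addr in 192.168.57.3 192.168.56.2 192.168.56.1; do
  echo scp "$SRC" "sysadm@$addr:admin_key"
done
```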

    @@ -8313,8 +8308,8 @@ each machine).

    -
    ( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
    -
    +
    ( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
    +
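
    The umask 077 matters: sshd refuses keys in a world- or
    group-accessible .ssh/. The sketch below repeats the command in a
    throwaway directory standing in for the home directory, to show the
    resulting modes:

```shell
# Sketch: demonstrate the modes produced by the umask 077 subshell,
# using a temporary directory in place of the sysadm home directory.
home=$(mktemp -d)
touch "$home/admin_key"
( cd "$home"; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```

    The directory comes out mode 700 and the key file mode 600, which
    satisfies sshd's StrictModes check.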

    @@ -8330,8 +8325,8 @@ command.

    -
    scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@192.168.57.3:
    -
    +
    scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@192.168.57.3:
    +

    @@ -8339,10 +8334,10 @@ Then they are installed with these commands.

    -
    chmod 600 ssh_host_*
    +
    chmod 600 ssh_host_*
     chmod 644 ssh_host_*.pub
     sudo cp -b ssh_host_* /etc/ssh/
    -
    +

    @@ -8355,8 +8350,8 @@ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.57.3

    -
    -

    13.3. Configure Test Machines

    +
    +

    13.3. Configure Test Machines

    At this point the three test machines core, gate, and front are @@ -8374,8 +8369,8 @@ not.

    -
    -

    13.4. Test Basics

    +
    +

    13.4. Test Basics

    At this point the test institute is just core, gate and front, @@ -8385,8 +8380,8 @@ with 0 failed units.

    -
    systemctl status
    -
    +
    systemctl status
    +

    @@ -8396,9 +8391,9 @@ forwarding (and NATing). On core (and gate):

    -
    ping -c 1 8.8.4.4      # dns.google
    +
    ping -c 1 8.8.4.4      # dns.google
     ping -c 1 192.168.15.5 # front_addr
    -
    +

    @@ -8408,10 +8403,10 @@ names yet.) On core (and gate):

    -
    host dns.google
    +
    host dns.google
     host core.small.private
     host www
    -
    +

    @@ -8420,10 +8415,10 @@ administrator's account. On core, gate and fron

    -
    /sbin/sendmail root
    +
    /sbin/sendmail root
     Testing email to root.
     .
    -
    +

    @@ -8437,12 +8432,12 @@ instant attention).

    -
    -

    13.5. The Test Nextcloud

    +
    +

    13.5. The Test Nextcloud

    Further tests involve Nextcloud account management. Nextcloud is -installed on core as described in Configure Nextcloud. Once +installed on core as described in Configure Nextcloud. Once /Nextcloud/ is created, ./inst config core will validate or update its configuration files.

    @@ -8464,8 +8459,8 @@ using the ./inst new command and issuing client VPN keys with the

    -
    -

    13.6. Test New Command

    +
    +

    13.6. Test New Command

    A member must be enrolled so that a member's client machine can be @@ -8476,8 +8471,8 @@ named dick, as is his notebook.

    -
    ./inst new dick
    -
    +
    ./inst new dick
    +

    @@ -8485,8 +8480,8 @@ Take note of Dick's initial password.

    -
    -

    13.7. The Test Member Notebook

    +
    +

    13.7. The Test Member Notebook

    A test member's notebook is created next, much like the servers, @@ -8497,13 +8492,13 @@ desktop VPN client and web browser test the OpenVPN configurations on

    -
    NAME=dick
    +
    NAME=dick
     RAM=2048
     DISK=8192
     create_vm
     VBoxManage modifyvm $NAME --macaddress1 080027dc54b5
     VBoxManage modifyvm $NAME --nic1 hostonly --hostonlyadapter1 vboxnet1
    -
    +

    @@ -8514,7 +8509,7 @@ behind) the access point.

    -Debian is installed much as detailed in A Test Machine except that +Debian is installed much as detailed in A Test Machine except that the SSH server option is not needed and the GNOME desktop option is. When the machine reboots, the administrator logs into the desktop and installs a couple additional software packages (which @@ -8522,15 +8517,15 @@ require several more).

    -
    sudo apt install network-manager-openvpn-gnome \
    +
    sudo apt install network-manager-openvpn-gnome \
                      openvpn-systemd-resolved \
                      nextcloud-desktop evolution
    -
    +
    -
    -

    13.8. Test Client Command

    +
    +

    13.8. Test Client Command

    The ./inst client command is used to issue keys for the institute's @@ -8541,23 +8536,23 @@ the test VPNs.

    -
    ./inst client debian dick dick
    -
    +
    ./inst client debian dick dick
    +
    -
    -

    13.9. Test Campus VPN

    +
    +

    13.9. Test Campus VPN

    -The campus.ovpn OpenVPN configuration file (generated in Test Client +The campus.ovpn OpenVPN configuration file (generated in Test Client Command) is transferred to dick, which is at the Wi-Fi access point's wifi_wan_addr.

    -
    scp *.ovpn sysadm@192.168.57.2:
    -
    +
    scp *.ovpn sysadm@192.168.57.2:
    +

    @@ -8575,18 +8570,18 @@ instantly) and does a few basic tests in a terminal.

    -
    systemctl status
    +
    systemctl status
     ping -c 1 8.8.4.4      # dns.google
     ping -c 1 192.168.56.1 # core
     host dns.google
     host core.small.private
     host www
    -
    +
    -
    -

    13.10. Test Web Pages

    +
    +

    13.10. Test Web Pages

    Next, the administrator copies Backup/WWW/ (included in the @@ -8595,11 +8590,11 @@ appropriately.

    -
    sudo chown -R sysadm.staff /WWW/campus
    +
    sudo chown -R sysadm.staff /WWW/campus
     sudo chown -R monkey.staff /WWW/live /WWW/test
     sudo chmod 02775 /WWW/*
     sudo chmod 664 /WWW/*/index.html
    -
    +
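
    The 02775 mode sets the setgid bit on each directory so that files
    created inside inherit the staff group. A harmless sketch of the
    chmod step (temporary directory, no chown, since that needs root and
    the real sysadm/monkey/staff accounts):

```shell
# Sketch: show the effect of mode 02775 using a stand-in for /WWW.
www=$(mktemp -d)
mkdir "$www/campus" "$www/live" "$www/test"
chmod 02775 "$www"/*
stat -c '%a %n' "$www/live"
```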

    @@ -8625,8 +8620,8 @@ will warn but allow the luser to continue.

    -
    -

    13.11. Test Web Update

    +
    +

    13.11. Test Web Update

    Modify /WWW/live/index.html on core and wait 15 minutes for it to @@ -8640,8 +8635,8 @@ Hack /home/www/index.html on front and observe the result at

    -
    -

    13.12. Test Nextcloud

    +
    +

    13.12. Test Nextcloud

    Nextcloud is typically installed and configured after the first @@ -8649,9 +8644,9 @@ Ansible run, when core has Internet access via gate. installation directory /Nextcloud/nextcloud/ appears, the Ansible code skips parts of the Nextcloud configuration. The same installation (or restoration) process used on Core is used on core -to create /Nextcloud/. The process starts with Create -/Nextcloud/, involves Restore Nextcloud or Install Nextcloud, -and runs ./inst config core again 8.23.6. When the ./inst +to create /Nextcloud/. The process starts with Create +/Nextcloud/, involves Restore Nextcloud or Install Nextcloud, +and runs ./inst config core again 8.23.6. When the ./inst config core command is happy with the Nextcloud configuration on core, the administrator uses Dick's notebook to test it, performing the following tests on dick's desktop. @@ -8661,7 +8656,7 @@ the following tests on dick's desktop.

  • Use a web browser to get http://core/nextcloud/. It should be a warning about accessing Nextcloud by an untrusted name.
  • -
  • Get http://core.small.private/nextcloud/. It should be a +
  • Get https://core.small.private/nextcloud/. It should be a login web page.
  • Login as sysadm with password fubar.
  • @@ -8680,7 +8675,7 @@ above).
  • Use the Nextcloud app to sync ~/nextCloud/ with the cloud. In the Nextcloud app's Connection Wizard (the initial dialog), choose to "Log in to your Nextcloud" with the URL -http://core.small.private/nextcloud. The web browser should pop +https://core.small.private/nextcloud. The web browser should pop up with a new tab: "Connect to your account". Press "Log in" and "Grant access". The Nextcloud Connection Wizard then prompts for sync parameters. The defaults are fine. Presumably the Local @@ -8714,7 +8709,7 @@ self-signed and unknown. It must be accepted (permanently).
  • Create a CardDAV account in Evolution. Choose Edit, Accounts, Add, Address Book, Type CardDAV, name Small Institute, and user dick. -The URL starts with http://core.small.private/nextcloud/ and +The URL starts with https://core.small.private/nextcloud/ and ends with remote.php/dav/addressbooks/users/dick/contacts/ (yeah, 88 characters!). Create a contact in the new address book and see it in the Contacts web page. At some point Evolution will need @@ -8729,8 +8724,8 @@ the calendar.
  • -
    -

    13.13. Test Email

    +
    +

    13.13. Test Email

    With Evolution running on the member notebook dick, one second email @@ -8739,12 +8734,12 @@ commands on front

    -
    /sbin/sendmail dick
    +
    /sbin/sendmail dick
     Subject: Hello, Dick.
     
     How are you?
     .
    -
    +
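
    The interactive session above can also be scripted with a heredoc.
    In this sketch cat stands in for /sbin/sendmail so it runs anywhere;
    on front the heredoc would be piped to /sbin/sendmail dick instead
    (the terminating dot is unnecessary when sendmail reads to
    end-of-file):

```shell
# Sketch: compose the test message with a heredoc.  `cat` is a
# stand-in for `/sbin/sendmail dick`, which is only available on front.
cat <<'EOF'
Subject: Hello, Dick.

How are you?
EOF
```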

    @@ -8758,8 +8753,8 @@ Outgoing email is also tested. A message to

    -
    -

    13.14. Test Public VPN

    +
    +

    13.14. Test Public VPN

    At this point, dick can move abroad, from the campus Wi-Fi @@ -8769,8 +8764,8 @@ machine does not need to be shut down.

    -
    VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
    -
    +
    VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
    +

    @@ -8783,12 +8778,12 @@ tested in a terminal.

    -
    ping -c 1 8.8.4.4      # dns.google
    +
    ping -c 1 8.8.4.4      # dns.google
     ping -c 1 192.168.56.1 # core
     host dns.google
     host core.small.private
     host www
    -
    +

    @@ -8812,8 +8807,8 @@ calendar events.

    -
    -

    13.15. Test Pass Command

    +
    +

    13.15. Test Pass Command

    To test the ./inst pass command, the administrator logs in to core @@ -8832,10 +8827,10 @@ On core, logged in as sysadm:

    -
    ( cd ~/Maildir/new/
    +
    ( cd ~/Maildir/new/
       cp `ls -1t | head -1` ~/msg )
     grep Subject: ~/msg
    -
    +
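
    The ls -1t | head -1 pipeline picks the most recently delivered file
    in Maildir/new/. A self-contained sketch with a throwaway Maildir
    (the file names are invented) shows the effect:

```shell
# Sketch: pick the newest message from a Maildir, as above, using a
# throwaway Maildir and invented file names.
md=$(mktemp -d)
mkdir -p "$md/Maildir/new"
printf 'Subject: old message\n' > "$md/Maildir/new/1000.msg"
sleep 1   # ensure distinct modification times for ls -t
printf 'Subject: New password\n' > "$md/Maildir/new/1001.msg"
( cd "$md/Maildir/new"; cp "$(ls -1t | head -1)" "$md/msg" )
grep Subject: "$md/msg"
```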

    @@ -8845,9 +8840,9 @@ password.. Then on the administrator's notebook:

    -
    scp sysadm@192.168.56.1:msg ./
    +
    scp sysadm@192.168.56.1:msg ./
     ./inst pass < msg
    -
    +

    @@ -8860,8 +8855,8 @@ Finally, the administrator verifies that dick can login on co

    -
    -

    13.16. Test Old Command

    +
    +

    13.16. Test Old Command

    One more institute command is left to exercise. The administrator @@ -8869,8 +8864,8 @@ retires dick and his main device dick.

    -
    ./inst old dick
    -
    +
    ./inst old dick
    +

    @@ -8881,16 +8876,16 @@ should fail.

    -
    -

    14. Future Work

    +
    +

    14. Future Work

    The small institute's network, as currently defined in this document, is lacking in a number of respects.

    -
    -

    14.1. Deficiencies

    +
    +

    14.1. Deficiencies

    The current network monitoring is rudimentary. It could use some @@ -8942,30 +8937,18 @@ include the essential verify-x509-name. Use the same name on separate certificates for Gate and Front? Use the same certificate and key on Gate and Front?

    - -

    -Nextcloud should really be found at https://CLOUD.small.private/ -rather than https://core.small.private/nextcloud/, to ease -future expansion (moving services to additional machines). -

    - -

    -HTTPS could be used for Nextcloud transactions even though they are -carried on encrypted VPNs. This would eliminate a big warning on the -Nextcloud Administration Overview page. -

    -
    -

    14.2. More Tests

    +
    +

    14.2. More Tests

    The testing process described in the previous chapter is far from complete. Additional tests are needed.

    -
    -

    14.2.1. Backup

    +
    +

    14.2.1. Backup

    The backup command has not been tested. It needs an encrypted @@ -8974,8 +8957,8 @@ partition with which to sync? And then some way to compare that to

    -
    -

    14.2.2. Restore

    +
    +

    14.2.2. Restore

    The restore process has not been tested. It might just copy Backup/ @@ -8985,8 +8968,8 @@ perhaps permissions too. It could also use an example

    -
    -

    14.2.3. Campus Disconnect

    +
    +

    14.2.3. Campus Disconnect

    Email access (IMAPS) on front is… difficult to test unless @@ -9010,8 +8993,8 @@ could be used.

    -
    -

    15. Appendix: The Bootstrap

    +
    +

    15. Appendix: The Bootstrap

    Creating the private network from whole cloth (machines with recent @@ -9031,11 +9014,11 @@ etc.: quite a bit of temporary, manual localnet configuration just to get to the additional packages.

    -
    -

    15.1. The Current Strategy

    +
    +

    15.1. The Current Strategy

    -The strategy pursued in The Hardware is two phase: prepare the servers +The strategy pursued in The Hardware is two phase: prepare the servers on the Internet where additional packages are accessible, then connect them to the campus facilities (the private Ethernet switch, Wi-Fi AP, ISP), manually configure IP addresses (while the DHCP client silently @@ -9043,8 +9026,8 @@ fails), and avoid names until BIND9 is configured.

    -
    -

    15.2. Starting With Gate

    +
    +

    15.2. Starting With Gate

    The strategy of Starting With Gate concentrates on configuring Gate's @@ -9088,8 +9071,8 @@ ansible-playbook -l core site.yml

    -
    -

    15.3. Pre-provision With Ansible

    +
    +

    15.3. Pre-provision With Ansible

    A refinement of the current strategy might avoid the need to maintain @@ -9142,7 +9125,7 @@ routes on Front and Gate, making the simulation less… similar.

    Author: Matt Birkholz

    -

    Created: 2024-10-29 Tue 21:35

    +

    Created: 2025-05-31 Sat 22:27
