From 0ef7cafe670924c3be8486ce378abefbd1f2ed08 Mon Sep 17 00:00:00 2001 From: Matt Birkholz Date: Thu, 18 Sep 2025 18:00:01 -0600 Subject: [PATCH] Update README.html. --- README.html | 1655 +++++++++++++++++++++++++++++---------------------- 1 file changed, 936 insertions(+), 719 deletions(-) diff --git a/README.html b/README.html index 0ea101b..11f0bd5 100644 --- a/README.html +++ b/README.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + A Small Institute @@ -24,8 +24,8 @@ an expendable public face (easily wiped clean) while maintaining a secure and private campus that can function with or without the Internet.

-
-

1. Overview

+
+

1. Overview

This small institute has a public server on the Internet, Front, that @@ -48,7 +48,7 @@ connects to Front making the institute email, cloud, etc. available to members off campus.

-
+
                 =                                                   
               _|||_                                                 
         =-The-Institute-=                                           
@@ -95,8 +95,8 @@ uses OpenPGP encryption to secure message content.
 

-
-

2. Caveats

+
+

2. Caveats

This small institute prizes its privacy, so there is little or no @@ -144,8 +144,8 @@ month) because of this assumption.

-
-

3. The Services

+
+

3. The Services

The small institute's network is designed to provide a number of @@ -157,8 +157,8 @@ policies. On first reading, those subsections should be skipped; they reference particulars first introduced in the following chapter.

-
-

3.1. The Name Service

+
+

3.1. The Name Service

The institute has a public domain, e.g. small.example.org, and a @@ -172,8 +172,8 @@ names like core.

-
-

3.2. The Email Service

+
+

3.2. The Email Service

Front provides the public SMTP (Simple Mail Transfer Protocol) service @@ -247,8 +247,8 @@ setting for the maximum message size is given in a code block labeled configurations wherever <<postfix-message-size>> appears.

-
-

3.2.1. The Postfix Configurations

+
+

3.2.1. The Postfix Configurations

The institute aims to accommodate encrypted email containing short @@ -263,7 +263,7 @@ handle maxi-messages.

-postfix-message-size
- { p: message_size_limit, v: 104857600 }
+postfix-message-size
- { p: message_size_limit, v: 104857600 }
 
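[Editor's aside, not part of the patch: the 104857600 figure in the hunk above is not arbitrary; it is exactly 100 MiB. A quick shell sanity check of the arithmetic, including the roughly 4/3 inflation Base64 encoding adds to attachments:]

```shell
# message_size_limit of 104857600 bytes is exactly 100 * 1024 * 1024,
# i.e. 100 MiB.  Base64 encoding inflates attachments by about 4/3,
# so the largest binary attachment that fits is roughly 75 MiB.
echo $((100 * 1024 * 1024))          # 104857600
echo $((100 * 1024 * 1024 * 3 / 4))  # 78643200
```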
@@ -278,7 +278,7 @@ re-sending the bounce (or just grabbing the go-bag!).

-postfix-queue-times
- { p: delay_warning_time, v: 1h }
+postfix-queue-times
- { p: delay_warning_time, v: 1h }
 - { p: maximal_queue_lifetime, v: 4h }
 - { p: bounce_queue_lifetime, v: 4h }
 
@@ -292,7 +292,7 @@ disables relaying (other than for the local networks).

-postfix-relaying
- p: smtpd_relay_restrictions
+postfix-relaying
- p: smtpd_relay_restrictions
   v: permit_mynetworks reject_unauth_destination
 
@@ -304,7 +304,7 @@ effect.

-postfix-maildir
- { p: home_mailbox, v: Maildir/ }
+postfix-maildir
- { p: home_mailbox, v: Maildir/ }
 
@@ -315,8 +315,8 @@ in the respective roles below.

-
-

3.2.2. The Dovecot Configurations

+
+

3.2.2. The Dovecot Configurations

The Dovecot settings on both Front and Core disable POP and require @@ -330,7 +330,7 @@ The official documentation for Dovecot once was a Wiki but now is

-dovecot-tls
protocols = imap
+dovecot-tls
protocols = imap
 ssl = required
 
@@ -342,7 +342,7 @@ configuration keeps them from even listening at the IMAP port

-dovecot-ports
service imap-login {
+dovecot-ports
service imap-login {
   inet_listener imap {
     port = 0
   }
@@ -356,7 +356,7 @@ directories.
 

-dovecot-maildir
mail_location = maildir:~/Maildir
+dovecot-maildir
mail_location = maildir:~/Maildir
 
@@ -368,15 +368,15 @@ common settings with host specific settings for ssl_cert and
-
-

3.3. The Web Services

+
+

3.3. The Web Services

Front provides the public HTTP service that serves institute web pages at e.g. https://small.example.org/. The small institute initially runs with a self-signed, "snake oil" server certificate, causing browsers to warn of possible fraud, but this certificate is easily -replaced by one signed by a recognized authority, as discussed in The +replaced by one signed by a recognized authority, as discussed in The Front Role.

@@ -431,15 +431,15 @@ will automatically wipe it within 15 minutes.

-
-

3.4. The Cloud Service

+
+

3.4. The Cloud Service

Core runs Nextcloud to provide a private institute cloud at https://core.small.private/nextcloud/. It is managed manually per The Nextcloud Server Administration Guide. The code and data, including especially database dumps, are stored in /Nextcloud/ which -is included in Core's backup procedure as described in Backups. The +is included in Core's backup procedure as described in Backups. The default Apache2 configuration expects to find the web scripts in /var/www/nextcloud/, so the institute symbolically links this to /Nextcloud/nextcloud/. @@ -453,15 +453,15 @@ private network.

-
-

3.5. Accounts

+
+

3.5. Accounts

A small institute has just a handful of members. For simplicity (and thus security) static configuration files are preferred over complex account management systems, LDAP, Active Directory, and the like. The Ansible scripts configure the same set of user accounts on Core and -Front. The Institute Commands (e.g. ./inst new dick) capture the +Front. The Institute Commands (e.g. ./inst new dick) capture the processes of enrolling, modifying and retiring members of the institute. They update the administrator's membership roll, and run Ansible to create (and disable) accounts on Core, Front, Nextcloud, @@ -476,8 +476,8 @@ accomplished via the campus cloud and the resulting desktop files can all be private (readable and writable only by the owner) by default.

-
-

3.5.1. The Administration Accounts

+
+

3.5.1. The Administration Accounts

The institute avoids the use of the root account (uid 0) because @@ -486,21 +486,21 @@ command is used to consciously (conscientiously!) run specific scripts and programs as root. When installation of a Debian OS leaves the host with no user accounts, just the root account, the next step is to create a system administrator's account named sysadm and to give -it permission to use the sudo command (e.g. as described in The +it permission to use the sudo command (e.g. as described in The Front Machine). When installation prompts for the name of an initial, privileged user account the same name is given (e.g. as -described in The Core Machine). Installation may not prompt and +described in The Core Machine). Installation may not prompt and still create an initial user account with a distribution specific name (e.g. pi). Any name can be used as long as it is provided as the value of ansible_user in hosts. Its password is specified by a vault-encrypted variable in the Secret/become.yml file. (The -hosts and Secret/become.yml files are described in The Ansible +hosts and Secret/become.yml files are described in The Ansible Configuration.)

-
-

3.5.2. The Monkey Accounts

+
+

3.5.2. The Monkey Accounts

The institute's Core uses a special account named monkey to run @@ -511,8 +511,8 @@ account is created on Front as well.

-
-

3.6. Keys

+
+

3.6. Keys

The institute keeps its "master secrets" in an encrypted @@ -597,8 +597,8 @@ the administrator's password keep, to install a new SSH key.

-
-

3.7. Backups

+
+

3.7. Backups

The small institute backs up its data, but not so much so that nothing @@ -634,12 +634,14 @@ files mentioned in the Nextcloud database dump).

-private/backup
#!/bin/bash -e
-#
-# DO NOT EDIT.  Maintained (will be replaced) by Ansible.
-#
-# sudo backup [-n]
-
+private/backup
#!/bin/bash -e
+#
+# DO NOT EDIT.
+#
+# Maintained (will be replaced) by Ansible.
+#
+# sudo backup [-n]
+
 if [ `id -u` != "0" ]
 then
     echo "This script must be run as root."
@@ -736,8 +738,8 @@ finish
 
-
-

4. The Particulars

+
+

4. The Particulars

This chapter introduces Ansible variables intended to simplify @@ -749,13 +751,13 @@ stored in separate files: public/vars.yml a

The example settings in this document configure VirtualBox VMs as -described in the Testing chapter. For more information about how a +described in the Testing chapter. For more information about how a small institute turns the example Ansible code into a working Ansible -configuration, see chapter The Ansible Configuration. +configuration, see chapter The Ansible Configuration.

-
-

4.1. Generic Particulars

+
+

4.1. Generic Particulars

The small institute's domain name is used quite frequently in the @@ -783,7 +785,7 @@ institute. The institute's private domain name should end with one of the top-level domains set aside for this purpose: .intranet, .internal, .private, .corp, .home or .lan.1 It is -hoped that doing so will increase that chances that some abomination +hoped that doing so will increase the chances that some abomination like DNS-over-HTTPS will pass us by.

@@ -794,8 +796,8 @@ domain_priv: small.private
-
-

4.2. Subnets

+
+

4.2. Subnets

The small institute uses a private Ethernet, two VPNs, and a "wild", @@ -829,52 +831,52 @@ notation) in abbreviated form (eliding 69,624 rows). 10.0.0.0/24 -10.0.0.1 – 10.0.0.254 +10.0.0.1 – 10.0.0.254 10.0.1.0/24 -10.0.1.1 – 10.0.1.254 +10.0.1.1 – 10.0.1.254 10.0.2.0/24 -10.0.2.1 – 10.0.2.254 +10.0.2.1 – 10.0.2.254 -… -… +… +… 10.255.255.0/24 -10.255.255.1 – 10.255.255.254 +10.255.255.1 – 10.255.255.254 172.16.0.0/24 -172.16.0.1 – 172.16.0.254 +172.16.0.1 – 172.16.0.254 172.16.1.0/24 -172.16.1.1 – 172.16.1.254 +172.16.1.1 – 172.16.1.254 172.16.2.0/24 -172.16.2.1 – 172.16.2.254 +172.16.2.1 – 172.16.2.254 -… -… +… +… 172.31.255.0/24 -172.31.255.1 – 172.31.255.254 +172.31.255.1 – 172.31.255.254 @@ -895,7 +897,7 @@ example result follows the code.

-
+

=> 10.62.17.0/24

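[Editor's aside, not part of the patch: the picking code itself is elided by this hunk; the example result above (=> 10.62.17.0/24) suggests a random /24 drawn from 10.0.0.0/8. A minimal sketch of the idea — the institute's actual picker also draws from 172.16.0.0/12, and this stand-in uses /dev/urandom for the two middle octets:]

```shell
# Pick a random /24 inside 10.0.0.0/8 -- one of the 65,536 rows
# tabulated for that block above.  od reads one unsigned byte (0-255)
# from /dev/urandom for each of the two free octets.
octet2=$(od -An -N1 -tu1 /dev/urandom | tr -d ' ')
octet3=$(od -An -N1 -tu1 /dev/urandom | tr -d ' ')
echo "10.${octet2}.${octet3}.0/24"
```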
@@ -908,7 +910,7 @@ code block below. The small institute treats these addresses as sensitive information so again the code block below "tangles" into private/vars.yml rather than public/vars.yml. Two of the addresses are in 192.168 subnets because they are part of a test -configuration using mostly-default VirtualBoxes (described here). +configuration using mostly-default VirtualBoxes (described here).

@@ -1004,19 +1006,19 @@ front_wg_addr:
-
-

5. The Hardware

+
+

5. The Hardware

The small institute's network was built by its system administrator using Ansible on a trusted notebook. The Ansible configuration and scripts were generated by "tangling" the Ansible code included here. -(The Ansible Configuration describes how to do this.) The following +(The Ansible Configuration describes how to do this.) The following sections describe how Front, Gate and Core were prepared for Ansible.

-
-

5.1. The Front Machine

+
+

5.1. The Front Machine

Front is the small institute's public facing server, a virtual machine @@ -1029,8 +1031,8 @@ possible to quickly re-provision a new Front machine from a frontier Internet café using just the administrator's notebook.

-
-

5.1.1. A Digital Ocean Droplet

+
+

5.1.1. A Digital Ocean Droplet

The following example prepared a new front on a Digital Ocean droplet. @@ -1054,7 +1056,7 @@ root@ubuntu#

The freshly created Digital Ocean droplet came with just one account, root, but the small institute avoids remote access to the "super -user" account (per the policy in The Administration Accounts), so the +user" account (per the policy in The Administration Accounts), so the administrator created a sysadm account with the ability to request escalated privileges via the sudo command.

@@ -1077,7 +1079,7 @@ notebook$ The password was generated by gpw, saved in the administrator's password keep, and later added to Secret/become.yml as shown below. (Producing a working Ansible configuration with Secret/become.yml -file is described in The Ansible Configuration.) +file is described in The Ansible Configuration.)

@@ -1091,7 +1093,7 @@ notebook_     >>Secret/become.yml
 

After creating the sysadm account on the droplet, the administrator concatenated a personal public ssh key and the key found in -Secret/ssh_admin/ (created by The CA Command) into an admin_keys +Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to the droplet, and installed it as the authorized_keys for sysadm.

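[Editor's aside, not part of the patch: the concatenation step described above can be sketched as follows. The key contents and the temporary directory are illustrative stand-ins; the real second key lives in Secret/ssh_admin/ and the result is copied to the droplet and installed as sysadm's ~/.ssh/authorized_keys (mode 600):]

```shell
# Sketch: combine a personal public key with the institute's
# CA-generated admin key into one admin_keys file, two keys,
# one per line.  Example keys and paths only.
workdir=$(mktemp -d)
echo "ssh-ed25519 AAAAC3...example admin@notebook" > "$workdir/personal.pub"
echo "ssh-rsa AAAAB3...example sysadm@front"       > "$workdir/institute.pub"
cat "$workdir/personal.pub" "$workdir/institute.pub" > "$workdir/admin_keys"
wc -l < "$workdir/admin_keys"   # 2
```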
@@ -1175,8 +1177,8 @@ address.
-
-

5.2. The Core Machine

+
+

5.2. The Core Machine

Core is the small institute's private file, email, cloud and whatnot @@ -1200,7 +1202,7 @@ The following example prepared a new core on a PC with Debian 11 freshly installed. During installation, the machine was named core, no desktop or server software was installed, no root password was set, and a privileged account named sysadm was created (per the policy in -The Administration Accounts). +The Administration Accounts).

@@ -1216,7 +1218,7 @@ Is the information correct? [Y/n]
 The password was generated by gpw, saved in the administrator's
 password keep, and later added to Secret/become.yml as shown below.
 (Producing a working Ansible configuration with Secret/become.yml
-file is described in The Ansible Configuration.)
+file is described in The Ansible Configuration.)
 

@@ -1235,13 +1237,30 @@ modem and installed them as shown below.
 
 
 $ sudo apt install netplan.io systemd-resolved unattended-upgrades \
-_                  ntp isc-dhcp-server bind9 apache2 wireguard \
+_                  chrony isc-dhcp-server bind9 apache2 wireguard \
 _                  postfix dovecot-imapd fetchmail expect rsync \
 _                  gnupg openssh-server
 

-The Nextcloud configuration requires Apache2, MariaDB and a number of +Manual installation of Postfix prompted for configuration type and +mail name. The answers given are listed here. +

+ +
    +
  • General type of mail configuration: Internet Site
  • +
  • System mail name: core.small.private
  • +
+ +

+The host then needed to be rebooted to get its name service working +again after systemd-resolved was installed. (Any help with this +will be welcome!) After rebooting and re-logging in, yet more +software packages were installed. +

+ +

+The Nextcloud configuration required Apache2, MariaDB and a number of PHP modules. Installing them while Core was on a cable modem sped up final configuration "in position" (on a frontier).

@@ -1253,7 +1272,7 @@ _ libapache2-mod-php

-Similarly, the NAGIOS configuration requires a handful of packages +Similarly, the NAGIOS configuration required a handful of packages that were pre-loaded via cable modem (to test a frontier deployment).

@@ -1264,7 +1283,7 @@ _ nagios-nrpe-plugin

Next, the administrator concatenated a personal public ssh key and the -key found in Secret/ssh_admin/ (created by The CA Command) into an +key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to Core, and installed it as the authorized_keys for sysadm.

@@ -1291,7 +1310,7 @@ notebook$

Note that the name core.lan should be known to the cable modem's DNS service. An IP address might be used instead, discovered with an ip -a on Core. +-4 a command on Core.

@@ -1304,7 +1323,7 @@ a new, private IP address and a default route.

In the example command lines below, the address 10.227.248.1 was generated by the random subnet address picking procedure described in -Subnets, and is named core_addr in the Ansible code. The second +Subnets, and is named core_addr in the Ansible code. The second address, 10.227.248.2, is the corresponding address for Gate's Ethernet interface, and is named gate_addr in the Ansible code. @@ -1320,8 +1339,8 @@ At this point Core was ready for provisioning with Ansible.

-
-

5.3. The Gate Machine

+
+

5.3. The Gate Machine

Gate is the small institute's route to the Internet, and the campus @@ -1337,12 +1356,11 @@ untrusted network of campus IoT appliances and Wi-Fi access point(s).

  • isp is its third network interface, connected to the campus ISP. This could be an Ethernet device connected to a cable -modem. It could be a USB port tethered to a phone, a -USB-Ethernet adapter, or a wireless adapter connected to a -campground Wi-Fi access point, etc.
  • +modem, a USB port tethered to a phone, a wireless adapter +connected to a campground Wi-Fi access point, etc. -
    +
     =============== | ==================================================
                     |                                           Premises
               (Campus ISP)                                              
    @@ -1355,8 +1373,8 @@ campground Wi-Fi access point, etc.
                     +----Ethernet switch                                
     
    -
    -

    5.3.1. Alternate Gate Topology

    +
    +

    5.3.1. Alternate Gate Topology

    While Gate and Core really need to be separate machines for security @@ -1365,7 +1383,7 @@ This avoids the need for a second Wi-Fi access point and leads to the following topology.

    -
    +
     =============== | ==================================================
                     |                                           Premises
                (House ISP)                                              
    @@ -1389,12 +1407,12 @@ its Ethernet and Wi-Fi clients are allowed to communicate).
     

    -
    -

    5.3.2. Original Gate Topology

    +
    +

    5.3.2. Original Gate Topology

    The Ansible code in this document is somewhat dependent on the -physical network shown in the Overview wherein Gate has three network +physical network shown in the Overview wherein Gate has three network interfaces.

    @@ -1403,7 +1421,7 @@ The following example prepared a new gate on a PC with Debian 11 freshly installed. During installation, the machine was named gate, no desktop or server software was installed, no root password was set, and a privileged account named sysadm was created (per the policy in -The Administration Accounts). +The Administration Accounts).

    @@ -1419,7 +1437,7 @@ Is the information correct? [Y/n]
     The password was generated by gpw, saved in the administrator's
     password keep, and later added to Secret/become.yml as shown below.
     (Producing a working Ansible configuration with Secret/become.yml
    -file is described in The Ansible Configuration.)
    +file is described in The Ansible Configuration.)
     

    @@ -1442,9 +1460,16 @@ _                  ufw isc-dhcp-server postfix wireguard \
     _                  openssh-server
     
    +

    +The host then needed to be rebooted to get its name service working +again after systemd-resolved was installed. (Any help with this will +be welcome!) After rebooting and re-logging in, the administrator was +ready to proceed. +

    +

    Next, the administrator concatenated a personal public ssh key and the -key found in Secret/ssh_admin/ (created by The CA Command) into an +key found in Secret/ssh_admin/ (created by The CA Command) into an admin_keys file, copied it to Gate, and installed it as the authorized_keys for sysadm.

    @@ -1484,7 +1509,7 @@ a new, private IP address.

    In the example command lines below, the address 10.227.248.2 was generated by the random subnet address picking procedure described in -Subnets, and is named gate_addr in the Ansible code. +Subnets, and is named gate_addr in the Ansible code.

    @@ -1493,10 +1518,11 @@ $ sudo ip address add 10.227.248.2 dev eth0
     
     

Gate was also connected to the USB Ethernet dongles cabled to the -campus Wi-Fi access point and the campus ISP. The three network -adapters are known by their MAC addresses, the values of the variables -gate_lan_mac, gate_wild_mac, and gate_isp_mac. (For more -information, see the Gate role's Configure Netplan task.) +campus Wi-Fi access point and the campus ISP. The values of three +variables (gate_lan_mac, gate_wild_mac, and gate_isp_mac in +private/vars.yml) match the actual hardware MAC addresses of the +dongles. (For more information, see the Gate role's Configure Netplan +task.)

    @@ -1506,22 +1532,22 @@ At this point Gate was ready for provisioning with Ansible.

    -
    -

    6. The All Role

    +
    +

    6. The All Role

    The all role contains tasks that are executed on all of the institute's servers. At the moment there is just the one.

    -
    -

    6.1. Include Particulars

    +
    +

    6.1. Include Particulars

    The all role's task contains a reference to a common institute particular, the institute's domain_name, a variable found in the public/vars.yml file. Thus the first task of the all role is to -include the variables defined in this file (described in The +include the variables defined in this file (described in The Particulars). The code block below is the first to tangle into roles/all/tasks/main.yml.

    @@ -1535,8 +1561,8 @@ Particulars). The code block below is the first to tangle into
    -
    -

    6.2. Enable Systemd Resolved

    +
    +

    6.2. Enable Systemd Resolved

    The systemd-networkd and systemd-resolved service units are not @@ -1565,19 +1591,31 @@ follows these recommendations (and not the suggestion to enable - ansible_distribution == 'Debian' - 11 < ansible_distribution_major_version|int -- name: Enable/Start systemd-networkd. +- name: Start systemd-networkd. + become: yes + systemd: + service: systemd-networkd + state: started + tags: actualizer + +- name: Enable systemd-networkd. become: yes systemd: service: systemd-networkd enabled: yes + +- name: Start systemd-resolved. + become: yes + systemd: + service: systemd-resolved state: started + tags: actualizer -- name: Enable/Start systemd-resolved. +- name: Enable systemd-resolved. become: yes systemd: service: systemd-resolved enabled: yes - state: started - name: Link /etc/resolv.conf. become: yes @@ -1593,14 +1631,14 @@ follows these recommendations (and not the suggestion to enable

    -
    -

    6.3. Trust Institute Certificate Authority

    +
    +

    6.3. Trust Institute Certificate Authority

    All servers should recognize the institute's Certificate Authority as trustworthy, so its certificate is added to the set of trusted CAs on each host. More information about how the small institute manages its -X.509 certificates is available in Keys. +X.509 certificates is available in Keys.

    @@ -1618,7 +1656,7 @@ X.509 certificates is available in Keys.
    -roles_t/all/handlers/main.yml
    
    +roles_t/all/handlers/main.yml
    ---
     - name: Update CAs.
       become: yes
       command: update-ca-certificates
    @@ -1627,15 +1665,15 @@ X.509 certificates is available in Keys.
     
    -
    -

    7. The Front Role

    +
    +

    7. The Front Role

    The front role installs and configures the services expected on the institute's publicly accessible "front door": email, web, VPN. The virtual machine is prepared with an Ubuntu Server install and remote access to a privileged, administrator's account. (For details, see -The Front Machine.) +The Front Machine.)

    @@ -1650,11 +1688,11 @@ perhaps with symbolic links to, for example, /etc/letsencrypt/live/small.example.org/fullchain.pem.

    -
    -

    7.1. Include Particulars

    +
    +

    7.1. Include Particulars

-The first task, as in The All Role, is to include the institute +The first task, as in The All Role, is to include the institute particulars. The front role refers to private variables and the membership roll, so these are included as well.

    @@ -1676,8 +1714,8 @@ membership roll, so these are included was well.
    -
    -

    7.2. Configure Hostname

    +
    +

    7.2. Configure Hostname

    This task ensures that Front's /etc/hostname and /etc/mailname are @@ -1694,21 +1732,18 @@ delivery. loop: - /etc/hostname - /etc/mailname - notify: Update hostname. - -

    -
    -roles_t/front/handlers/main.yml
    ---
     - name: Update hostname.
       become: yes
       command: hostname -F /etc/hostname
    +  when: domain_name != ansible_hostname
    +  tags: actualizer
     
    -
    -

    7.3. Add Administrator to System Groups

    +
    +

    7.3. Add Administrator to System Groups

    The administrator often needs to read (directories of) log files owned @@ -1728,8 +1763,8 @@ these groups speeds up debugging.

    -
    -

    7.4. Configure SSH

    +
    +

    7.4. Configure SSH

    The SSH service on Front needs to be known to Monkey. The following @@ -1757,18 +1792,19 @@ those stored in Secret/ssh_front/etc/ssh/

    -roles_t/front/handlers/main.yml
    
    +roles_t/front/handlers/main.yml
    ---
     - name: Reload SSH server.
       become: yes
       systemd:
         service: ssh
         state: reloaded
    +  tags: actualizer
     
    -
    -

    7.5. Configure Monkey

    +
    +

    7.5. Configure Monkey

    The small institute runs cron jobs and web scripts that generate @@ -1776,7 +1812,7 @@ reports and perform checks. The un-privileged jobs are run by a system account named monkey. One of Monkey's more important jobs on Core is to run rsync to update the public web site on Front. Monkey on Core will login as monkey on Front to synchronize the files (as -described in *Configure Apache2). To do that without needing a +described in *Configure Apache2). To do that without needing a password, the monkey account on Front should authorize Monkey's SSH key on Core.

    @@ -1787,7 +1823,7 @@ key on Core. become: yes user: name: monkey - system: yes + password: "!" - name: Authorize monkey@core. become: yes @@ -1808,8 +1844,8 @@ key on Core.
    -
    -

    7.6. Install Rsync

    +
    +

    7.6. Install Rsync

    Monkey uses Rsync to keep the institute's public web site up-to-date. @@ -1824,8 +1860,8 @@ Monkey uses Rsync to keep the institute's public web site up-to-date.

    -
    -

    7.7. Install Unattended Upgrades

    +
    +

    7.7. Install Unattended Upgrades

    The institute prefers to install security updates as soon as possible. @@ -1840,13 +1876,13 @@ The institute prefers to install security updates as soon as possible.

    -
    -

    7.8. Configure User Accounts

    +
    +

    7.8. Configure User Accounts

    User accounts are created immediately so that Postfix and Dovecot can start delivering email immediately, without returning "no such -recipient" replies. The Account Management chapter describes the +recipient" replies. The Account Management chapter describes the members and usernames variables used below.

    @@ -1884,8 +1920,8 @@ recipient" replies. The Account Management chapter de
    -
    -

    7.9. Install Server Certificate

    +
    +

    7.9. Install Server Certificate

    The servers on Front use the same certificate (and key) to @@ -1915,8 +1951,8 @@ readable by root.

    -
    -

    7.10. Configure Postfix on Front

    +
    +

    7.10. Configure Postfix on Front

    Front uses Postfix to provide the institute's public SMTP service, and @@ -1933,7 +1969,7 @@ The appropriate answers are listed here but will be checked

    -As discussed in The Email Service above, Front's Postfix configuration +As discussed in The Email Service above, Front's Postfix configuration includes site-wide support for larger message sizes, shorter queue times, the relaying configuration, and the common path to incoming emails. These and a few Front-specific Postfix configurations @@ -1946,7 +1982,7 @@ via which Core relays messages from the campus.

    -postfix-front-networks
    - p: mynetworks
    +postfix-front-networks
    - p: mynetworks
       v: >-
          {{ public_wg_net_cidr }}
          127.0.0.0/8
    @@ -1962,7 +1998,7 @@ difficult for internal hosts, who do not have (public) domain names.
     

    -postfix-front-restrictions
    - p: smtpd_recipient_restrictions
    +postfix-front-restrictions
    - p: smtpd_recipient_restrictions
       v: >-
          permit_mynetworks
          reject_unauth_pipelining
    @@ -1983,13 +2019,13 @@ messages; incoming messages are delivered locally, without
     

    -postfix-header-checks
    - p: smtp_header_checks
    +postfix-header-checks
    - p: smtp_header_checks
       v: regexp:/etc/postfix/header_checks.cf
     
    -postfix-header-checks-content
    /^Received:/    IGNORE
    +postfix-header-checks-content
    /^Received:/    IGNORE
     /^User-Agent:/  IGNORE
     
    @@ -2001,7 +2037,7 @@ Debian default for inet_interfaces.

    -postfix-front
    - { p: smtpd_tls_cert_file, v: /etc/server.crt }
    +postfix-front
    - { p: smtpd_tls_cert_file, v: /etc/server.crt }
     - { p: smtpd_tls_key_file, v: /etc/server.key }
     <<postfix-front-networks>>
     <<postfix-front-restrictions>>
    @@ -2043,12 +2079,18 @@ start and enable the service.
         dest: /etc/postfix/header_checks.cf
       notify: Postmap header checks.
     
    -- name: Enable/Start Postfix.
    +- name: Start Postfix.
       become: yes
       systemd:
         service: postfix
    -    enabled: yes
         state: started
    +  tags: actualizer
    +
    +- name: Enable Postfix.
    +  become: yes
    +  systemd:
    +    service: postfix
    +    enabled: yes
     
    @@ -2059,6 +2101,7 @@ start and enable the service. systemd: service: postfix state: restarted + tags: actualizer - name: Postmap header checks. become: yes @@ -2070,8 +2113,8 @@ start and enable the service.
    -
    -

    7.11. Configure Public Email Aliases

    +
    +

    7.11. Configure Public Email Aliases

    The institute's Front needs to deliver email addressed to a number of @@ -2108,12 +2151,13 @@ created by a more specialized role. - name: New aliases. become: yes command: newaliases + tags: actualizer

    -
    -

    7.12. Configure Dovecot IMAPd

    +
    +

    7.12. Configure Dovecot IMAPd

    Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to @@ -2122,7 +2166,7 @@ default with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core accesses Front via VPN, but helps to ensure privacy even when members must, in extremis, access recent email directly from their accounts on Front. For more information -about Front's role in the institute's email services, see The Email +about Front's role in the institute's email services, see The Email Service.

    @@ -2158,12 +2202,18 @@ and enables it to start at every reboot. dest: /etc/dovecot/local.conf notify: Restart Dovecot. -- name: Enable/Start Dovecot. +- name: Start Dovecot. become: yes systemd: service: dovecot - enabled: yes state: started + tags: actualizer + +- name: Enable Dovecot. + become: yes + systemd: + service: dovecot + enabled: yes
    @@ -2174,12 +2224,13 @@ and enables it to start at every reboot. systemd: service: dovecot state: restarted + tags: actualizer
    -
    -

    7.13. Configure Apache2

    +
    +

    7.13. Configure Apache2

    This is the small institute's public web site. It is simple, static, @@ -2215,7 +2266,7 @@ taken from https://www

    -apache-ciphers
    SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
    +apache-ciphers
    SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
     SSLHonorCipherOrder on
     SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
                         'ECDHE-ECDSA-AES256-GCM-SHA384',
    @@ -2270,7 +2321,7 @@ used on all of the institute's web sites.
     

    -apache-userdir-front
    UserDir /home/www-users
    +apache-userdir-front
    UserDir /home/www-users
     <Directory /home/www-users/>
             Require all granted
             AllowOverride None
    @@ -2285,7 +2336,7 @@ HTTPS URLs.
     

    -apache-redirect-front
    <VirtualHost *:80>
    +apache-redirect-front
    <VirtualHost *:80>
             Redirect permanent / https://{{ domain_name }}/
     </VirtualHost>
     
    @@ -2310,7 +2361,7 @@ the inside of a VirtualHost block. They should apply globally.

    -apache-front
    ServerName {{ domain_name }}
    +apache-front
    ServerName {{ domain_name }}
     ServerAdmin webmaster@{{ domain_name }}
     
     DocumentRoot /home/www
    @@ -2380,12 +2431,18 @@ e.g. /etc/apache2/sites-available/small.example.org.conf and runs
         creates: /etc/apache2/sites-enabled/{{ domain_name }}.conf
       notify: Restart Apache2.
     
    -- name: Enable/Start Apache2.
    +- name: Start Apache2.
       become: yes
       systemd:
         service: apache2
    -    enabled: yes
         state: started
    +  tags: actualizer
    +
    +- name: Enable Apache2.
    +  become: yes
    +  systemd:
    +    service: apache2
    +    enabled: yes
     
    @@ -2396,6 +2453,7 @@ e.g. /etc/apache2/sites-available/small.example.org.conf and runs systemd: service: apache2 state: restarted + tags: actualizer
    @@ -2469,8 +2527,8 @@ the users' ~/Public/HTML/ directories.
    -
    -

    7.14. Configure Public WireGuard™ Subnet

    +
    +

    7.14. Configure Public WireGuard™ Subnet

    Front uses WireGuard™ to provide a public (Internet accessible) VPN @@ -2479,35 +2537,35 @@ packets between it and the institute's other private networks.

    -The following example private/front-wg0.conf configuration recognizes +The following example private/front-wg0.conf configuration recognizes Core by its public key and routes the institute's private networks to it. It also recognizes Dick's notebook and his (replacement) phone, assigning them host numbers 4 and 6 on the VPN.

    -private/front-wg0.conf
    [Interface]
    +private/front-wg0.conf
    [Interface]
     Address = 10.177.87.1/24
     ListenPort = 39608
     PostUp = wg set %i private-key /etc/wireguard/private-key
     PostUp = resolvectl dns %i 192.168.56.1
     PostUp = resolvectl domain %i small.private
     
    -# Core
    -[Peer]
    +# Core
    +[Peer]
     PublicKey = lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
     AllowedIPs = 10.177.87.2
     AllowedIPs = 192.168.56.0/24
     AllowedIPs = 192.168.57.0/24
     AllowedIPs = 10.84.139.0/24
     
    -# dick
    -[Peer]
    +# dick
    +[Peer]
     PublicKey = 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
     AllowedIPs = 10.177.87.4
     
    -# dicks-razr
    -[Peer]
    +# dicks-razr
    +[Peer]
     PublicKey = zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
     AllowedIPs = 10.177.87.6
     
    @@ -2538,11 +2596,18 @@ WireGuard™ tunnel on Dick's notebook, used abroad
    
     The following tasks install WireGuard™, configure it with
    -private/front-wg0.conf, and enable the service.
    +private/front-wg0.conf, and enable the service.
     

    roles_t/front/tasks/main.yml
    
    +- name: Enable IP forwarding.
    +  become: yes
    +  sysctl:
    +    name: net.ipv4.ip_forward
    +    value: "1"
    +    state: present
    +
     - name: Install WireGuard™.
       become: yes
       apt: pkg=wireguard
    @@ -2557,12 +2622,18 @@ The following tasks install WireGuard™, configure it with
         group: root
       notify: Restart WireGuard™.
     
    -- name: Enable/Start WireGuard™ on boot.
    +- name: Start WireGuard™.
       become: yes
       systemd:
         service: wg-quick@wg0
    -    enabled: yes
         state: started
    +  tags: actualizer
    +
    +- name: Enable WireGuard™.
    +  become: yes
    +  systemd:
    +    service: wg-quick@wg0
    +    enabled: yes
     
    @@ -2573,12 +2644,13 @@ The following tasks install WireGuard™, configure it with systemd: service: wg-quick@wg0 state: restarted + tags: actualizer
    -
    -

    7.15. Configure Kamailio

    +
    +

    7.15. Configure Kamailio

    Front uses Kamailio to provide a SIP service on the public VPN so that @@ -2600,7 +2672,7 @@ specifies the actual IP, known here as front_wg_addr.

    -kamailio
    listen=udp:{{ front_wg_addr }}:5060
    +kamailio
    listen=udp:{{ front_wg_addr }}:5060
     
    @@ -2656,6 +2728,7 @@ not be started before the wg0 device has appeared. become: yes systemd: daemon-reload: yes + tags: actualizer
    @@ -2673,12 +2746,18 @@ Finally, Kamailio can be configured and started. dest: /etc/kamailio/kamailio-local.cfg notify: Restart Kamailio. -- name: Enable/Start Kamailio. +- name: Start Kamailio. become: yes systemd: service: kamailio - enabled: yes state: started + tags: actualizer + +- name: Enable Kamailio. + become: yes + systemd: + service: kamailio + enabled: yes
    @@ -2689,27 +2768,28 @@ Finally, Kamailio can be configured and started. systemd: service: kamailio state: restarted + tags: actualizer
    -
    -

    8. The Core Role

    +
    +

    8. The Core Role

    The core role configures many essential campus network services as well as the institute's private cloud, so the core machine has horsepower (CPUs and RAM) and large disks and is prepared with a Debian install and remote access to a privileged, administrator's -account. (For details, see The Core Machine.) +account. (For details, see The Core Machine.)

    -
    -

    8.1. Include Particulars

    +
    +

    8.1. Include Particulars

    -The first task, as in The Front Role, is to include the institute +The first task, as in The Front Role, is to include the institute particulars and membership roll.

    @@ -2728,8 +2808,8 @@ particulars and membership roll.
    -
    -

    8.2. Configure Hostname

    +
    +

    8.2. Configure Hostname

    This task ensures that Core's /etc/hostname and /etc/mailname are @@ -2749,21 +2829,18 @@ proper email delivery. loop: - { name: "core.{{ domain_priv }}", file: /etc/mailname } - { name: "{{ inventory_hostname }}", file: /etc/hostname } - notify: Update hostname. - -

    -
    -roles_t/core/handlers/main.yml
    ---
     - name: Update hostname.
       become: yes
       command: hostname -F /etc/hostname
    +  when: inventory_hostname != ansible_hostname
    +  tags: actualizer
     
    -
    -

    8.3. Configure Systemd Resolved

    +
    +

    8.3. Configure Systemd Resolved

    Core runs the campus name server, so Resolved is configured to use it @@ -2792,23 +2869,25 @@ list, and to disable its cache and stub listener.

    -roles_t/core/handlers/main.yml
    
    +roles_t/core/handlers/main.yml
    ---
     - name: Reload Systemd.
       become: yes
       systemd:
         daemon-reload: yes
    +  tags: actualizer
     
     - name: Restart Systemd resolved.
       become: yes
       systemd:
         service: systemd-resolved
         state: restarted
    +  tags: actualizer
     
    -
    -

    8.4. Configure Netplan

    +
    +

    8.4. Configure Netplan

    Core's network interface is statically configured using Netplan and an @@ -2849,7 +2928,9 @@ fact was an empty hash at first boot on a simulated campus Ethernet.) nameservers: search: [ {{ domain_priv }} ] addresses: [ {{ core_addr }} ] - gateway4: {{ gate_addr }} + routes: + - to: default + via: {{ gate_addr }} dest: /etc/netplan/60-core.yaml mode: u=rw,g=r,o= notify: Apply netplan. @@ -2861,12 +2942,13 @@ fact was an empty hash at first boot on a simulated campus Ethernet.) - name: Apply netplan. become: yes command: netplan apply + tags: actualizer
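The `gateway4` key is deprecated in recent Netplan releases in favor of a `routes` list, which is why the task above migrates it. As a sketch (not the tangled template itself), the rendered `/etc/netplan/60-core.yaml` would look roughly like this, assuming the example addresses from the zone files (Core at 192.168.56.1, Gate at 192.168.56.2); `enp0s3` stands in for Core's actual Ethernet interface name:

```yaml
# Hypothetical rendering of /etc/netplan/60-core.yaml with the
# example campus values; the interface name is an assumption.
network:
  version: 2
  ethernets:
    enp0s3:
      addresses: [ 192.168.56.1/24 ]
      nameservers:
        search: [ small.private ]
        addresses: [ 192.168.56.1 ]
      routes:                  # replaces the deprecated gateway4 key
        - to: default
          via: 192.168.56.2
```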

    -
    -

    8.5. Configure DHCP For the Private Ethernet

    +
    +

    8.5. Configure DHCP For the Private Ethernet

    Core speaks DHCP (Dynamic Host Configuration Protocol) using the @@ -2945,12 +3027,18 @@ the real private/core-dhcpd.conf (< dest: /etc/dhcp/dhcpd.conf notify: Restart DHCP server. -- name: Enable/Start DHCP server. +- name: Start DHCP server. become: yes systemd: service: isc-dhcp-server - enabled: yes state: started + tags: actualizer + +- name: Enable DHCP server. + become: yes + systemd: + service: isc-dhcp-server + enabled: yes

    @@ -2961,16 +3049,17 @@ the real private/core-dhcpd.conf (< systemd: service: isc-dhcp-server state: restarted + tags: actualizer
    -
    -

    8.6. Configure BIND9

    +
    +

    8.6. Configure BIND9

    Core uses BIND9 to provide name service for the institute as described -in The Name Service. The configuration supports reverse name lookups, +in The Name Service. The configuration supports reverse name lookups, resolving many private network addresses to private domain names.

    @@ -3008,12 +3097,18 @@ The following tasks install and configure BIND9 on Core. loop: [ domain, private, public_vpn, campus_vpn ] notify: Reload BIND9. -- name: Enable/Start BIND9. +- name: Start BIND9. become: yes systemd: service: bind9 - enabled: yes state: started + tags: actualizer + +- name: Enable BIND9. + become: yes + systemd: + service: bind9 + enabled: yes
    @@ -3024,6 +3119,7 @@ The following tasks install and configure BIND9 on Core. systemd: service: bind9 state: reloaded + tags: actualizer
    @@ -3035,7 +3131,7 @@ probably be used as forwarders rather than Google.

    -bind-options
    acl "trusted" {
    +bind-options
    acl "trusted" {
             {{ private_net_cidr }};
             {{ wild_net_cidr }};
             {{ public_wg_net_cidr }};
    @@ -3064,27 +3160,27 @@ probably be used as forwarders rather than Google.
     
    -bind-local
    include "/etc/bind/zones.rfc1918";
    +bind-local
    include "/etc/bind/zones.rfc1918";
     
     zone "{{ domain_priv }}." {
             type master;
             file "/etc/bind/db.domain";
     };
     
    -zone "{{ private_net_cidr | ansible.utils.ipaddr('revdns')
    -         | regex_replace('^0\.','') }}" {
    +zone "{{ private_net_cidr | ansible.utils.ipaddr('revdns')
    +         | regex_replace('^0\.','') }}" {
             type master;
             file "/etc/bind/db.private";
     };
     
    -zone "{{ public_wg_net_cidr | ansible.utils.ipaddr('revdns')
    -         | regex_replace('^0\.','') }}" {
    +zone "{{ public_wg_net_cidr | ansible.utils.ipaddr('revdns')
    +         | regex_replace('^0\.','') }}" {
             type master;
             file "/etc/bind/db.public_vpn";
     };
     
    -zone "{{ campus_wg_net_cidr | ansible.utils.ipaddr('revdns')
    -         | regex_replace('^0\.','') }}" {
    +zone "{{ campus_wg_net_cidr | ansible.utils.ipaddr('revdns')
    +         | regex_replace('^0\.','') }}" {
             type master;
             file "/etc/bind/db.campus_vpn";
     };
    @@ -3093,91 +3189,91 @@ probably be used as forwarders rather than Google.
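The `ansible.utils.ipaddr('revdns')` filter turns a network CIDR into a reverse-DNS name, and the `regex_replace('^0\.','')` trims the leading host octet to leave the zone name. A minimal plain-shell sketch of the same computation (not the Jinja filters themselves) for the example private Ethernet:

```shell
# Compute the reverse zone name for a /24, mimicking
#   ansible.utils.ipaddr('revdns') | regex_replace('^0\.','')
# applied to 192.168.56.0/24.
cidr="192.168.56.0/24"
net="${cidr%/*}"                  # drop the prefix length -> 192.168.56.0
IFS=. read -r a b c d <<EOF
$net
EOF
zone="$d.$c.$b.$a.in-addr.arpa"   # 0.56.168.192.in-addr.arpa
zone="${zone#0.}"                 # strip the leading host octet
echo "$zone"
```

The result, `56.168.192.in-addr.arpa`, matches the zone stanza generated for `private_net_cidr` above.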
     
     
    private/db.domain
    ;
    -; BIND data file for a small institute's PRIVATE domain names.
    -;
    -$TTL    604800
    -@       IN      SOA     small.private. root.small.private. (
    -                              1         ; Serial
    -                         604800         ; Refresh
    -                          86400         ; Retry
    -                        2419200         ; Expire
    -                         604800 )       ; Negative Cache TTL
    -;
    -@       IN      NS      core.small.private.
    -$TTL    7200
    -mail    IN      CNAME   core.small.private.
    -smtp    IN      CNAME   core.small.private.
    -ns      IN      CNAME   core.small.private.
    -www     IN      CNAME   core.small.private.
    -test    IN      CNAME   core.small.private.
    -live    IN      CNAME   core.small.private.
    -ntp     IN      CNAME   core.small.private.
    -sip     IN      A       10.177.87.1
    -;
    -core    IN      A       192.168.56.1
    -gate    IN      A       192.168.56.2
    +; BIND data file for a small institute's PRIVATE domain names.
    +;
    +$TTL    604800
    +@       IN      SOA     small.private. root.small.private. (
    +                              1         ; Serial
    +                         604800         ; Refresh
    +                          86400         ; Retry
    +                        2419200         ; Expire
    +                         604800 )       ; Negative Cache TTL
    +;
    +@       IN      NS      core.small.private.
    +$TTL    7200
    +mail    IN      CNAME   core.small.private.
    +smtp    IN      CNAME   core.small.private.
    +ns      IN      CNAME   core.small.private.
    +www     IN      CNAME   core.small.private.
    +test    IN      CNAME   core.small.private.
    +live    IN      CNAME   core.small.private.
    +ntp     IN      CNAME   core.small.private.
    +sip     IN      A       10.177.87.1
    +;
    +core    IN      A       192.168.56.1
    +gate    IN      A       192.168.56.2
     
    private/db.private
    ;
    -; BIND reverse data file for a small institute's private Ethernet.
    -;
    -$TTL    604800
    -@       IN      SOA     small.private. root.small.private. (
    -                              1         ; Serial
    -                         604800         ; Refresh
    -                          86400         ; Retry
    -                        2419200         ; Expire
    -                         604800 )       ; Negative Cache TTL
    -;
    -@       IN      NS      core.small.private.
    -$TTL    7200
    -1       IN      PTR     core.small.private.
    -2       IN      PTR     gate.small.private.
    +; BIND reverse data file for a small institute's private Ethernet.
    +;
    +$TTL    604800
    +@       IN      SOA     small.private. root.small.private. (
    +                              1         ; Serial
    +                         604800         ; Refresh
    +                          86400         ; Retry
    +                        2419200         ; Expire
    +                         604800 )       ; Negative Cache TTL
    +;
    +@       IN      NS      core.small.private.
    +$TTL    7200
    +1       IN      PTR     core.small.private.
    +2       IN      PTR     gate.small.private.
     
    private/db.public_vpn
    ;
    -; BIND reverse data file for a small institute's public VPN.
    -;
    -$TTL    604800
    -@       IN      SOA     small.private. root.small.private. (
    -                              1         ; Serial
    -                         604800         ; Refresh
    -                          86400         ; Retry
    -                        2419200         ; Expire
    -                         604800 )       ; Negative Cache TTL
    -;
    -@       IN      NS      core.small.private.
    -$TTL    7200
    -1       IN      PTR     front-p.small.private.
    -2       IN      PTR     core-p.small.private.
    +; BIND reverse data file for a small institute's public VPN.
    +;
    +$TTL    604800
    +@       IN      SOA     small.private. root.small.private. (
    +                              1         ; Serial
    +                         604800         ; Refresh
    +                          86400         ; Retry
    +                        2419200         ; Expire
    +                         604800 )       ; Negative Cache TTL
    +;
    +@       IN      NS      core.small.private.
    +$TTL    7200
    +1       IN      PTR     front-p.small.private.
    +2       IN      PTR     core-p.small.private.
     
    private/db.campus_vpn
    ;
    -; BIND reverse data file for a small institute's campus VPN.
    -;
    -$TTL    604800
    -@       IN      SOA     small.private. root.small.private. (
    -                              1         ; Serial
    -                         604800         ; Refresh
    -                          86400         ; Retry
    -                        2419200         ; Expire
    -                         604800 )       ; Negative Cache TTL
    -;
    -@       IN      NS      core.small.private.
    -$TTL    7200
    -1       IN      PTR     gate-c.small.private.
    +; BIND reverse data file for a small institute's campus VPN.
    +;
    +$TTL    604800
    +@       IN      SOA     small.private. root.small.private. (
    +                              1         ; Serial
    +                         604800         ; Refresh
    +                          86400         ; Retry
    +                        2419200         ; Expire
    +                         604800 )       ; Negative Cache TTL
    +;
    +@       IN      NS      core.small.private.
    +$TTL    7200
    +1       IN      PTR     gate-c.small.private.
     
    -
    -

    8.7. Add Administrator to System Groups

    +
    +

    8.7. Add Administrator to System Groups

    The administrator often needs to read (directories of) log files owned @@ -3197,15 +3293,15 @@ these groups speeds up debugging.

    -
    -

    8.8. Configure Monkey

    +
    +

    8.8. Configure Monkey

The small institute runs cron jobs and web scripts that generate reports and perform checks. The un-privileged jobs are run by a system account named monkey. One of Monkey's more important jobs on Core is to run rsync to update the public web site on Front (as -described in Configure Apache2). +described in Configure Apache2).

    @@ -3214,7 +3310,7 @@ described in *Configure Apache2). become: yes user: name: monkey - system: yes + password: "!" append: yes groups: staff @@ -3265,8 +3361,8 @@ described in *Configure Apache2).
    -
    -

    8.9. Install Unattended Upgrades

    +
    +

    8.9. Install Unattended Upgrades

    The institute prefers to install security updates as soon as possible. @@ -3281,11 +3377,11 @@ The institute prefers to install security updates as soon as possible.

    -
    -

    8.10. Install Expect

    +
    +

    8.10. Install Expect

    -The expect program is used by The Institute Commands to interact +The expect program is used by The Institute Commands to interact with Nextcloud on the command line.

    @@ -3298,12 +3394,12 @@ with Nextcloud on the command line.
    -
    -

    8.11. Configure User Accounts

    +
    +

    8.11. Configure User Accounts

    User accounts are created immediately so that backups can begin -restoring as soon as possible. The Account Management chapter +restoring as soon as possible. The Account Management chapter describes the members and usernames variables.

    @@ -3341,8 +3437,8 @@ describes the members and usernames variables.
    -
    -

    8.12. Install Server Certificate

    +
    +

    8.12. Install Server Certificate

    The servers on Core use the same certificate (and key) to authenticate @@ -3370,25 +3466,44 @@ themselves to institute clients. They share the /etc/server.crt and

    -
    -

    8.13. Install NTP

    +
    +

    8.13. Install Chrony

-Core uses NTP to provide a time synchronization service to the campus. +Core uses Chrony to provide a time synchronization service to the campus. The daemon's default configuration needs only a few allow directives.

    roles_t/core/tasks/main.yml
    
    -- name: Install NTP.
    +- name: Install Chrony.
    +  become: yes
    +  apt: pkg=chrony
    +
    +- name: Configure NTP service.
       become: yes
    -  apt: pkg=ntp
    +  copy:
    +    content: |
    +      allow {{ private_net_cidr }}
    +      allow {{ public_wg_net_cidr }}
    +      allow {{ campus_wg_net_cidr }}
    +    dest: /etc/chrony/conf.d/institute.conf
    +  notify: Restart Chrony.
     
    + +
    +roles_t/core/handlers/main.yml
    
+- name: Restart Chrony.
+  become: yes
+  systemd:
+    service: chrony
+    state: restarted
+  tags: actualizer
    +
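With the `allow` directives above, any machine on the institute's networks can sync against Core. A campus client needs only to point at Core's NTP alias, for example with a drop-in like this (the `conf.d` path and file name are assumptions; `ntp.small.private` is the CNAME for Core defined in the zone file):

```
# /etc/chrony/conf.d/campus.conf on a campus client (hypothetical):
server ntp.small.private iburst
```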
    -
    -

    8.14. Configure Postfix on Core

    +
    +
    +

    8.14. Configure Postfix on Core

    Core uses Postfix to provide SMTP service to the campus. The default @@ -3404,7 +3519,7 @@ The appropriate answers are listed here but will be checked

    -As discussed in The Email Service above, Core delivers email addressed +As discussed in The Email Service above, Core delivers email addressed to any internal domain name locally, and uses its smarthost Front to relay the rest. Core is reachable only on institute networks, so there is little benefit in enabling TLS, but it does need to handle @@ -3417,7 +3532,7 @@ Core relays messages from any institute network.

    -postfix-core-networks
    - p: mynetworks
    +postfix-core-networks
    - p: mynetworks
       v: >-
          {{ private_net_cidr }}
          {{ public_wg_net_cidr }}
    @@ -3433,7 +3548,7 @@ Core uses Front to relay messages to the Internet.
     

    -postfix-core-relayhost
    - { p: relayhost, v: "[{{ front_wg_addr }}]" }
    +postfix-core-relayhost
    - { p: relayhost, v: "[{{ front_wg_addr }}]" }
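The square brackets in the `relayhost` value tell Postfix to connect to that address directly rather than performing an MX lookup. With Front's VPN address substituted for `front_wg_addr`, the rendered `main.cf` line is simply:

```
# main.cf as rendered with front_wg_addr = 10.177.87.1:
relayhost = [10.177.87.1]
```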
     
    @@ -3445,7 +3560,7 @@ file.

    -postfix-transport
    .{{ domain_name }}      local:$myhostname
    +postfix-transport
    .{{ domain_name }}      local:$myhostname
     .{{ domain_priv }}      local:$myhostname
     
    @@ -3456,7 +3571,7 @@ The complete list of Core's Postfix settings for

    -postfix-core
    <<postfix-relaying>>
    +postfix-core
    <<postfix-relaying>>
     - { p: smtpd_tls_security_level, v: none }
     - { p: smtp_tls_security_level, v: none }
     <<postfix-message-size>>
    @@ -3500,12 +3615,18 @@ enable the service.  Whenever /etc/postfix/transport is changed, the
         dest: /etc/postfix/transport
       notify: Postmap transport.
     
    -- name: Enable/Start Postfix.
    +- name: Start Postfix.
       become: yes
       systemd:
         service: postfix
    -    enabled: yes
         state: started
    +  tags: actualizer
    +
    +- name: Enable Postfix.
    +  become: yes
    +  systemd:
    +    service: postfix
    +    enabled: yes
     
    @@ -3516,6 +3637,7 @@ enable the service. Whenever /etc/postfix/transport is changed, the systemd: service: postfix state: restarted + tags: actualizer - name: Postmap transport. become: yes @@ -3527,8 +3649,8 @@ enable the service. Whenever /etc/postfix/transport is changed, the
    -
    -

    8.15. Configure Private Email Aliases

    +
    +

    8.15. Configure Private Email Aliases

    The institute's Core needs to deliver email addressed to institute @@ -3545,11 +3667,9 @@ installed by more specialized roles. become: yes blockinfile: block: | - webmaster: root admin: root www-data: root monkey: root - root: {{ ansible_user }} path: /etc/aliases marker: "# {mark} INSTITUTE MANAGED BLOCK" notify: New aliases. @@ -3561,12 +3681,13 @@ installed by more specialized roles. - name: New aliases. become: yes command: newaliases + tags: actualizer

    -
    -

    8.16. Configure Dovecot IMAPd

    +
    +

    8.16. Configure Dovecot IMAPd

    Core uses Dovecot's IMAPd to store and serve member emails. As on @@ -3576,7 +3697,7 @@ top" given that Core is only accessed from private (encrypted) networks, but helps to ensure privacy even when members accidentally attempt connections from outside the private networks. For more information about Core's role in the institute's email services, see -The Email Service. +The Email Service.

    @@ -3584,7 +3705,7 @@ The institute follows the recommendation in the package README.Debian (in /usr/share/dovecot-core/) but replaces the default "snake oil" certificate with another, signed by the institute. (For more information about the institute's X.509 certificates, see -Keys.) +Keys.)

    @@ -3610,12 +3731,18 @@ and enables it to start at every reboot. dest: /etc/dovecot/local.conf notify: Restart Dovecot. -- name: Enable/Start Dovecot. +- name: Start Dovecot. become: yes systemd: service: dovecot - enabled: yes state: started + tags: actualizer + +- name: Enable Dovecot. + become: yes + systemd: + service: dovecot + enabled: yes

    @@ -3626,12 +3753,13 @@ and enables it to start at every reboot. systemd: service: dovecot state: restarted + tags: actualizer
    -
    -

    8.17. Configure Fetchmail

    +
    +

    8.17. Configure Fetchmail

    Core runs a fetchmail for each member of the institute. Individual @@ -3648,13 +3776,13 @@ the username. The template is only used when the record has a

    -fetchmail-config
    # Permissions on this file may be no greater than 0600.
    -
    +fetchmail-config
    # Permissions on this file may be no greater than 0600.
    +
     set no bouncemail
     set no spambounce
     set no syslog
    -#set logfile /home/{{ item }}/.fetchmail.log
    -
    +#set logfile /home/{{ item }}/.fetchmail.log
    +
     poll {{ front_wg_addr }} protocol imap timeout 15
         username {{ item }}
         password "{{ members[item].password_fetchmail }}" fetchall
    @@ -3667,7 +3795,7 @@ The Systemd service description.
     

    -fetchmail-service
    [Unit]
    +fetchmail-service
    [Unit]
     Description=Fetchmail --idle task for {{ item }}.
     AssertPathExists=/home/{{ item }}/.fetchmailrc
     After=wg-quick@wg0.service
    @@ -3737,7 +3865,7 @@ provided the Core service.
       when:
       - members[item].status == 'current'
       - members[item].password_fetchmail is defined
    -  tags: accounts
    +  tags: accounts, actualizer
     
    @@ -3784,12 +3912,12 @@ Otherwise the following task might be appropriate.
    -
    -

    8.18. Configure Apache2

    +
    +

    8.18. Configure Apache2

    This is the small institute's campus web server. It hosts several web -sites as described in The Web Services. +sites as described in The Web Services.

    @@ -3860,7 +3988,7 @@ naming a sub-directory in the member's home directory on Core. The

    -apache-userdir-core
    UserDir Public/HTML
    +apache-userdir-core
    UserDir Public/HTML
     <Directory /home/*/Public/HTML/>
             Require all granted
             AllowOverride None
    @@ -3875,7 +4003,7 @@ redirect, the encryption ciphers and certificates.
     

    -apache-live
    <VirtualHost *:80>
    +apache-live
    <VirtualHost *:80>
             ServerName live
             ServerAlias live.{{ domain_priv }}
             ServerAdmin webmaster@core.{{ domain_priv }}
    @@ -3902,7 +4030,7 @@ familiar.
     

    -apache-test
    <VirtualHost *:80>
    +apache-test
    <VirtualHost *:80>
             ServerName test
             ServerAlias test.{{ domain_priv }}
             ServerAdmin webmaster@core.{{ domain_priv }}
    @@ -3931,7 +4059,7 @@ trained staffers, monitored by a revision control system, etc.
     

    -apache-campus
    <VirtualHost *:80>
    +apache-campus
    <VirtualHost *:80>
             ServerName www
             ServerAlias www.{{ domain_priv }}
             ServerAdmin webmaster@core.{{ domain_priv }}
    @@ -4028,12 +4156,18 @@ The a2ensite command enables them.
       loop: [ live, test, www, default-ssl ]
       notify: Restart Apache2.
     
    -- name: Enable/Start Apache2.
    +- name: Start Apache2.
       become: yes
       systemd:
         service: apache2
    -    enabled: yes
         state: started
    +  tags: actualizer
    +
    +- name: Enable Apache2.
    +  become: yes
    +  systemd:
    +    service: apache2
    +    enabled: yes
     
    @@ -4044,12 +4178,13 @@ The a2ensite command enables them. systemd: service: apache2 state: restarted + tags: actualizer
    -
    -

    8.19. Configure Website Updates

    +
    +

    8.19. Configure Website Updates

    Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a @@ -4058,10 +4193,12 @@ Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a

    -private/webupdate
    #!/bin/bash -e
    -#
    -# DO NOT EDIT.  This file was tangled from institute.org.
    -
    +private/webupdate
    #!/bin/bash -e
    +#
    +# DO NOT EDIT.
    +#
    +# This file was tangled from a small institute's README.org.
    +
     cd /WWW/live/
     
     rsync -avz --delete --chmod=g-w         \
    @@ -4074,7 +4211,7 @@ rsync -avz --delete --chmod=g-w         \
     The following tasks install the webupdate script from private/,
     and create Monkey's cron job.  An example webupdate script is
    -provided here.
    +provided here.
     

    @@ -4099,12 +4236,12 @@ provided here.
    -
    -

    8.20. Configure Core WireGuard™ Interface

    +
    +

    8.20. Configure Core WireGuard™ Interface

    Core connects to Front's WireGuard™ service to provide members abroad -with a route to the campus networks. As described in Configure Public +with a route to the campus networks. As described in Configure Public WireGuard™ Subnet for Front, Core is expected to forward packets from/to the private networks.

    @@ -4120,8 +4257,8 @@ public IP address and a special port. Address = 10.177.87.2 PostUp = wg set %i private-key /etc/wireguard/private-key -# Front -[Peer] +# Front +[Peer] EndPoint = 192.168.15.5:39608 PublicKey = S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4= AllowedIPs = 10.177.87.1 @@ -4136,6 +4273,13 @@ The following tasks install WireGuard™, configure it with
    roles_t/core/tasks/main.yml
    
    +- name: Enable IP forwarding.
    +  become: yes
    +  sysctl:
    +    name: net.ipv4.ip_forward
    +    value: "1"
    +    state: present
    +
     - name: Install WireGuard™.
       become: yes
       apt: pkg=wireguard
    @@ -4150,12 +4294,18 @@ The following tasks install WireGuard™, configure it with
         group: root
       notify: Restart WireGuard™.
     
    -- name: Enable/Start WireGuard™ on boot.
    +- name: Start WireGuard™.
       become: yes
       systemd:
         service: wg-quick@wg0
    -    enabled: yes
         state: started
    +  tags: actualizer
    +
    +- name: Enable WireGuard™.
    +  become: yes
    +  systemd:
    +    service: wg-quick@wg0
    +    enabled: yes
     
    @@ -4166,12 +4316,13 @@ The following tasks install WireGuard™, configure it with systemd: service: wg-quick@wg0 state: restarted + tags: actualizer
    -
    -

    8.21. Configure NAGIOS

    +
    +

    8.21. Configure NAGIOS

    Core runs a nagios4 server to monitor "services" on institute hosts. @@ -4234,12 +4385,18 @@ Core and Campus (and thus Gate) machines. dest: /etc/nagios4/conf.d/institute.cfg notify: Reload NAGIOS4. -- name: Enable/Start NAGIOS4. +- name: Start NAGIOS4. become: yes systemd: service: nagios4 - enabled: yes state: started + tags: actualizer + +- name: Enable NAGIOS4. + become: yes + systemd: + service: nagios4 + enabled: yes

    @@ -4250,11 +4407,12 @@ Core and Campus (and thus Gate) machines. systemd: service: nagios4 state: reloaded + tags: actualizer
    -
    -

    8.21.1. Configure NAGIOS Monitors for Core

    +
    +

    8.21.1. Configure NAGIOS Monitors for Core

    The first block in nagios.cfg specifies monitors for services on @@ -4329,8 +4487,8 @@ used here may specify plugin arguments.

    -
    -

    8.21.2. Custom NAGIOS Monitor inst_sensors

    +
    +

    8.21.2. Custom NAGIOS Monitor inst_sensors

    The check_sensors plugin is included in the package @@ -4340,8 +4498,8 @@ small institute substitutes a slightly modified version,

    -roles_t/core/files/inst_sensors
    #!/bin/sh
    -
    +roles_t/core/files/inst_sensors
    #!/bin/sh
    +
     PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
     export PATH
     PROGNAME=`basename $0`
    @@ -4366,9 +4524,9 @@ small institute substitutes a slightly modified version,
     }
     
     brief_data() {
    -    echo "$1" | sed -n -E -e '
    -  /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H }
    -  $ { x; s/\n//g; p }'
    +    echo "$1" | sed -n -E -e '
    +  /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H }
    +  $ { x; s/\n//g; p }'
     }
     
     case "$1" in
    @@ -4442,8 +4600,8 @@ Core.
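The sed pipeline in `brief_data` can be exercised on canned `sensors` output to see how it collapses the per-core readings onto one line (the temperatures below are invented for the demonstration):

```shell
# Feed brief_data's sed pipeline some sample "sensors" output
# (the readings here are made up for the demonstration).
sample='coretemp-isa-0000
Core 0:       +47.0 C  (high = +82.0 C)
Core 1:       +45.0 C  (high = +82.0 C)'
out=$(echo "$sample" | sed -n -E -e '
  /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H }
  $ { x; s/\n//g; p }')
echo "$out"    # one line containing just the core temperatures
```

Each `Core N:` line is reduced to its temperature and appended to the hold space; at end of input the accumulated readings are printed as a single line, ` +47.0 +45.0`.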
     
    -
    -

    8.21.3. Configure NAGIOS Monitors for Remote Hosts

    +
    +

    8.21.3. Configure NAGIOS Monitors for Remote Hosts

    The following sections contain code blocks specifying monitors for @@ -4460,12 +4618,12 @@ plugin with pre-defined arguments appropriate for the institute. The commands are defined in code blocks interleaved with the blocks that monitor them. The command blocks are appended to nrpe.cfg and the monitoring blocks to nagios.cfg. The nrpe.cfg file is installed -on each campus host by the campus role's Configure NRPE tasks. +on each campus host by the campus role's Configure NRPE tasks.

    -
    -

    8.21.4. Configure NAGIOS Monitors for Gate

    +
    +

    8.21.4. Configure NAGIOS Monitors for Gate

    Define the monitored host, gate. Monitor its response to network @@ -4596,12 +4754,12 @@ Monitor inst_sensors on Gate.

    -
    -

    8.22. Configure Backups

    +
    +

    8.22. Configure Backups

The following task installs the backup script from private/. An -example script is provided in here. +example script is provided here.

    @@ -4616,20 +4774,20 @@ example script is provided in here.
8.23. Configure Nextcloud

Core runs Nextcloud to provide a private institute cloud, as described
in The Cloud Service. Installing, restoring (from backup), and
upgrading Nextcloud are manual processes documented in The Nextcloud
Admin Manual, Maintenance. However Ansible can help prepare Core
before an install or restore, and perform basic security checks
afterwards.

8.23.1. Prepare Core For Nextcloud

The Ansible code contained herein prepares Core to run Nextcloud by @@ -4716,8 +4874,8 @@ virtual host's document root. <Directory /var/www/html/> <IfModule mod_rewrite.c> RewriteEngine on # LogLevel alert rewrite:trace3 RewriteRule ^\.well-known/carddav \ /nextcloud/remote.php/dav [R=301,L] RewriteRule ^\.well-known/caldav \ /nextcloud/remote.php/dav [R=301,L] @@ -4819,14 +4977,6 @@ such a user, the nextcloud database and nextclouduser created manually.

    
     - name: Create Nextcloud DB user.
    @@ -4840,6 +4990,24 @@ maintenance:install can run.
     
The task above would work (mysql_user supports
check_implicit_admin) but the nextcloud database was not created
first. Thus both database and user are created manually, with the
following SQL, before occ maintenance:install can run.

create database nextcloud
    character set utf8mb4
    collate utf8mb4_general_ci;
grant all on nextcloud.*
    to 'nextclouduser'@'localhost'
    identified by 'ippAgmaygyobwyt5';
flush privileges;

    Finally, a symbolic link positions /Nextcloud/nextcloud/ at /var/www/nextcloud/ as expected by the Apache2 configuration above. @@ -4861,8 +5029,8 @@ its document root.

8.23.2. Configure PHP

The following tasks set a number of PHP parameters for better performance, as recommended by Nextcloud.

8.23.3. Create /Nextcloud/

    The Ansible tasks up to this point have completed Core's LAMP stack @@ -4964,8 +5132,8 @@ sudo mount /Nextcloud

8.23.4. Restore Nextcloud

Restoring Nextcloud in the newly created /Nextcloud/ presumably @@ -4997,12 +5165,20 @@ make it so. The database is restored with the following commands, which assume the last dump was made February 20th 2022 and thus was saved in /Nextcloud/20220220.bak. The database will need to be created first as when installing Nextcloud.

cd /Nextcloud/
sudo mysql
create database nextcloud
    character set utf8mb4
    collate utf8mb4_general_ci;
grant all on nextcloud.*
    to 'nextclouduser'@'localhost'
    identified by 'ippAgmaygyobwyt5';
flush privileges;
exit;
sudo mysql --defaults-file=dbbackup.cnf nextcloud < 20220220.bak
cd nextcloud/
sudo -u www-data php occ maintenance:data-fingerprint
    @@ -5016,8 +5192,8 @@ Overview web page.
     

8.23.5. Install Nextcloud

    Installing Nextcloud in the newly created /Nextcloud/ starts with @@ -5087,8 +5263,8 @@ Administration > Overview page.

8.23.6. Afterwards

    Whether Nextcloud was restored or installed, there are a few things @@ -5270,14 +5446,14 @@ run before the next backup.

9. The Gate Role

The gate role configures the services expected at the campus gate:
access to the private Ethernet from the untrusted Ethernet (e.g. a
campus Wi-Fi AP) via VPN, and access to the Internet via NAT. The
gate machine uses three network interfaces (see The Gate Machine)
configured with persistent names used in its firewall rules.

    @@ -5299,8 +5475,8 @@ applied first, by which Gate gets a campus machine's DNS and Postfix configurations, etc.

9.1. Include Particulars

    The following should be familiar boilerplate by now. @@ -5321,8 +5497,8 @@ The following should be familiar boilerplate by now.

9.2. Configure Netplan

    Gate's network interfaces are configured using Netplan and two files. @@ -5408,18 +5584,19 @@ new network plan. - name: Apply netplan. become: yes command: netplan apply + tags: actualizer

Note that the 60-isp.yaml file is only updated (created) if it does
not already exist, so that it can be easily modified to debug a new
campus ISP without interference from Ansible.

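For reference, a minimal 60-isp.yaml might look like the following
sketch, which assumes the ISP-facing interface has been given the
persistent name isp and takes a DHCP lease from the ISP (an
illustration only; the actual file depends on the campus ISP):

```yaml
# Hypothetical starting point for debugging a new campus ISP.
network:
  version: 2
  ethernets:
    isp:
      dhcp4: true
```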
9.3. UFW Rules

    Gate uses the Uncomplicated FireWall (UFW) to install its packet @@ -5443,7 +5620,7 @@ should not be routing their Internet traffic through their VPN.

ufw-nat
-A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE
     -A POSTROUTING -s {{    wild_net_cidr }} -o isp -j MASQUERADE
     
    @@ -5459,7 +5636,7 @@ connection tracking).

ufw-forward-nat
-A ufw-user-forward -i lan  -o isp -j ACCEPT
     -A ufw-user-forward -i wild -o isp -j ACCEPT
     
    @@ -5479,7 +5656,7 @@ public and campus VPNs is also allowed.

ufw-forward-private
-A ufw-user-forward -i lan  -o wg0 -j ACCEPT
     -A ufw-user-forward -i wg0  -o lan -j ACCEPT
     -A ufw-user-forward -i wg0  -o wg0 -j ACCEPT
     
    @@ -5498,15 +5675,15 @@ the wild device to the lan device, just the wg0<

9.4. Configure UFW

The following tasks install the Uncomplicated Firewall (UFW), set
its policy in /etc/default/ufw, install the NAT rules in
/etc/ufw/before.rules, and the Forward rules in
/etc/ufw/user.rules (where the ufw-user-forward chain
is… mentioned?).

    @@ -5574,8 +5751,8 @@ sudo ufw enable

9.5. Configure DHCP For The Wild Ethernet

To accommodate commodity Wi-Fi access points, as well as wired IoT @@ -5660,12 +5837,18 @@ addresses (or perhaps finding no wild interface at all?). dest: /etc/systemd/system/isc-dhcp-server.service.d/depend.conf notify: Reload Systemd.

- name: Start DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    state: started
  tags: actualizer

- name: Enable DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    enabled: yes

    @@ -5676,11 +5859,13 @@ addresses (or perhaps finding no wild interface at all?). systemd: service: isc-dhcp-server state: restarted + tags: actualizer - name: Reload Systemd. become: yes systemd: daemon-reload: yes + tags: actualizer
    @@ -5701,8 +5886,8 @@ command would not be necessary.
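The tags: actualizer markers added to the start/restart tasks suggest
(an assumption; their purpose is not stated in this hunk) that the
playbook can be run in a configure-only mode that avoids starting or
restarting services, e.g.:

```sh
ansible-playbook playbooks/site.yml --skip-tags actualizer
```

The split of each former Enable/Start task into separate Enable and
Start tasks supports this: only the Start half carries the tag.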

9.6. Configure Campus WireGuard™ Subnet

    Gate uses WireGuard™ to provide a campus VPN service. Gate's routes @@ -5714,29 +5899,29 @@ additional route Gate needs is to the public VPN via Core. The rest

The following example private/gate-wg0.conf configuration recognizes
a wired IoT appliance, Dick's notebook and his replacement phone,
assigning them the host numbers 3, 4 and 6 respectively.

private/gate-wg0.conf
[Interface]
     Address = 10.84.139.1/24
     ListenPort = 51820
     PostUp = wg set %i private-key /etc/wireguard/private-key
     
# thing
[Peer]
     PublicKey = LdsCsgfjKCfd5+VKS+Q/dQhWO8NRNygByDO2VxbXlSQ=
     AllowedIPs = 10.84.139.3
     
# dick
[Peer]
     PublicKey = 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
     AllowedIPs = 10.84.139.4
     
# dicks-razr
[Peer]
     PublicKey = zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
     AllowedIPs = 10.84.139.6
     
    @@ -5754,8 +5939,8 @@ WireGuard™ tunnel on an IoT appliance
    [DNS = 192.168.56.1
     Domain = small.private
     
# Gate
[Peer]
     EndPoint = 192.168.57.1:51820
     PublicKey = y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
     AllowedIPs = 10.84.139.1
    @@ -5777,8 +5962,8 @@ WireGuard™ tunnel on Dick's notebook, used on campus
     PostUp = resolvectl dns wg0 192.168.56.1
     PostUp = resolvectl domain wg0 small.private
     
# Gate
[Peer]
     EndPoint = 192.168.57.1:51820
     PublicKey = y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
     AllowedIPs = 10.84.139.1
    @@ -5790,11 +5975,18 @@ WireGuard™ tunnel on Dick's notebook, used on campus
     
     

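Each PublicKey above is derived from a peer's private key. A key pair
can be generated with the standard WireGuard™ tools (shown for
reference; the private-key file name matches the PostUp lines above):

```sh
(umask 077; wg genkey > private-key)   # create the private key, mode 600
wg pubkey < private-key                # print the matching public key
```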
The following tasks install WireGuard™, configure it with
private/gate-wg0.conf, and enable the service.

    roles_t/gate/tasks/main.yml
    
- name: Enable IP forwarding.
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

     - name: Install WireGuard™.
       become: yes
       apt: pkg=wireguard
    @@ -5809,12 +6001,18 @@ The following tasks install WireGuard™, configure it with
         group: root
       notify: Restart WireGuard™.
     
- name: Start WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: started
  tags: actualizer

- name: Enable WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    enabled: yes
     
    @@ -5825,13 +6023,14 @@ The following tasks install WireGuard™, configure it with systemd: service: wg-quick@wg0 state: restarted + tags: actualizer
10. The Campus Role

    The campus role configures generic campus server machines: network @@ -5847,8 +6046,8 @@ Wireless campus devices register their public keys using the ./inst client command which updates the WireGuard™ configuration on Gate.

10.1. Include Particulars

    The following should be familiar boilerplate by now. @@ -5864,8 +6063,8 @@ The following should be familiar boilerplate by now.

10.2. Configure Hostname

    Clients should be using the expected host name. @@ -5888,12 +6087,13 @@ Clients should be using the expected host name. become: yes command: hostname -F /etc/hostname when: inventory_hostname != ansible_hostname + tags: actualizer

10.3. Configure Systemd Timesyncd

    The institute uses a common time reference throughout the campus. @@ -5919,12 +6119,13 @@ and file timestamps. systemd: service: systemd-timesyncd state: restarted + tags: actualizer

10.4. Add Administrator to System Groups

    The administrator often needs to read (directories of) log files owned @@ -5944,8 +6145,8 @@ these groups speeds up debugging.

10.5. Install Unattended Upgrades

    The institute prefers to install security updates as soon as possible. @@ -5960,8 +6161,8 @@ The institute prefers to install security updates as soon as possible.

10.6. Configure Postfix on Campus

The Postfix settings used by the campus include message size, queue @@ -6002,12 +6203,18 @@ tasks below. - { p: inet_interfaces, v: loopback-only } notify: Restart Postfix.

- name: Start Postfix.
  become: yes
  systemd:
    service: postfix
    state: started
  tags: actualizer

- name: Enable Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes

    @@ -6018,12 +6225,13 @@ tasks below. systemd: service: postfix state: restarted + tags: actualizer
10.7. Set Domain Name

    The host's fully qualified (private) domain name (FQDN) is set by an @@ -6046,13 +6254,13 @@ manpage.)

10.8. Configure NRPE

Each campus host runs an NRPE (a NAGIOS Remote Plugin Executor)
server so that the NAGIOS4 server on Core can collect statistics. The
NAGIOS service is discussed in the Configure NRPE section of The Core
Role.

    @@ -6085,12 +6293,18 @@ Role. dest: /etc/nagios/nrpe.d/institute.cfg notify: Reload NRPE server. -- name: Enable/Start NRPE server. +- name: Start NRPE server. become: yes systemd: service: nagios-nrpe-server - enabled: yes state: started + tags: actualizer + +- name: Enable NRPE server. + become: yes + systemd: + service: nagios-nrpe-server + enabled: yes
    @@ -6101,13 +6315,14 @@ Role. systemd: service: nagios-nrpe-server state: reloaded + tags: actualizer
11. The Ansible Configuration

The small institute uses Ansible to maintain the configuration of its @@ -6116,7 +6331,7 @@ runs playbook site.yml to apply the appropriate role(s) to each host. Examples of these files are included here, and are used to test the roles. The example configuration applies the institutional roles to VirtualBox machines prepared according to chapter Testing.

    @@ -6129,13 +6344,13 @@ while changes to the institute's particulars are committed to a separate revision history.

11.1. ansible.cfg

The Ansible configuration file ansible.cfg contains just a handful
of settings, some included just to create a test jig as described in
Testing.

      @@ -6144,7 +6359,7 @@ of settings, some included just to create a test jig as described in that Python 3 can be expected on all institute hosts.
• vault_password_file is set to suppress prompts for the vault password. The institute keeps its vault password in Secret/ (as described in Keys) and thus sets this parameter to Secret/vault-password.
    • inventory is set to avoid specifying it on the command line.
    • roles_path is set to the recently tangled roles files in @@ -6161,8 +6376,8 @@ described in Keys) and thus sets this parameter to
11.2. hosts

    The Ansible inventory file hosts describes all of the institute's @@ -6174,7 +6389,7 @@ describes three test servers named front, core and

hosts
all:
       vars:
         ansible_user: sysadm
         ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
    @@ -6239,8 +6454,8 @@ the Secret/vault-password file.
     

11.3. playbooks/site.yml

    The example playbooks/site.yml playbook (below) applies the @@ -6273,8 +6488,8 @@ the example inventory: hosts.

11.4. Secret/vault-password

    As already mentioned, the small institute keeps its Ansible vault @@ -6286,17 +6501,17 @@ example password matches the example encryptions above.

Secret/vault-password
alitysortstagess
     
11.5. Creating A Working Ansible Configuration

A working Ansible configuration can be "tangled" from this document to produce the test configuration described in the Testing chapter. The tangling is done by Emacs's org-babel-tangle function and has already been performed with the resulting tangle included in the distribution with this document. @@ -6339,7 +6554,7 @@ would be copied, with appropriate changes, into new subdirectories public/ and private/.

• ~/net/Secret would be a symbolic link to the (auto-mounted?) location of the administrator's encrypted USB drive, as described in section Keys.

    @@ -6375,8 +6590,8 @@ super-project's directory.

11.6. Maintaining A Working Ansible Configuration

    The Ansible roles currently tangle into the roles_t/ directory to @@ -6395,8 +6610,8 @@ their way back to the code block in this document.

12. The Institute Commands

    The institute's administrator uses a convenience script to reliably @@ -6406,8 +6621,8 @@ Ansible configuration. The Ansible commands it executes are expected to get their defaults from ./ansible.cfg.

12.1. Sub-command Blocks

    The code blocks in this chapter tangle into the inst script. Each @@ -6421,18 +6636,20 @@ The first code block is the header of the ./inst script.

inst
#!/usr/bin/perl -w
#
# DO NOT EDIT.
#
# This file was tangled from a small institute's README.org.

     use strict;
     use IO::File;
     
12.2. Sanity Check

    The next code block does not implement a sub-command; it implements @@ -6492,8 +6709,8 @@ permissions. It probes past the Secret/ mount poin

12.3. Importing Ansible Variables

    To ensure that Ansible and ./inst are sympatico vis-a-vi certain @@ -6640,8 +6857,8 @@ the test client to give it different personae.

12.4. The CA Command

The next code block implements the CA sub-command, which creates a @@ -6699,8 +6916,8 @@ config. umask 077; mysystem "cd Secret/CA; ./easyrsa init-pki"; mysystem "cd Secret/CA; ./easyrsa build-ca nopass";
    # Common Name: small.example.org

    my $dom = $domain_name; my $pvt = $domain_priv; mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass"; @@ -6741,8 +6958,8 @@ config.

12.5. The Config Command

    The next code block implements the config sub-command, which @@ -6792,12 +7009,12 @@ Example command lines:

12.6. Account Management

For general information about members and their Unix accounts, see Accounts. The account management sub-commands maintain a mapping associating member "usernames" (Unix account names) with their records. The mapping is stored among other things in private/members.yml as the value associated with the key members. @@ -6902,8 +7119,8 @@ read from the file. The dump subroutine is another story (below). my $old_umask = umask 077; my $path = "private/members.yml"; print "$path: "; STDOUT->flush;
    eval { #DumpFile ("$path.tmp", $yaml);
        dump_members_yaml ("$path.tmp", $yaml);
        rename ("$path.tmp", $path) or die "Could not rename $path.tmp: $!\n"; }; my $err = $@; @@ -6997,8 +7214,8 @@ each record.

12.7. The New Command

    The next code block implements the new sub-command. It adds a new @@ -7099,8 +7316,8 @@ initial, generated password.

12.8. The Pass Command

    The institute's passwd command on Core securely emails root with a @@ -7114,8 +7331,8 @@ Ansible site.yml playbook to update the message is sent to member@core.

12.8.1. Less Aggressive passwd.

    The next code block implements the less aggressive passwd command. @@ -7129,8 +7346,8 @@ in Secret/.

roles_t/core/templates/passwd
#!/bin/perl -wT

     use strict;
     
     $ENV{PATH} = "/usr/sbin:/usr/bin:/bin";
    @@ -7184,35 +7401,35 @@ close $TMP;
     open $O, ("| gpg --encrypt --armor"
               ." --trust-model always --recipient root\@core"
               ." > $tmp") or die "Error running gpg > $tmp: $!\n";
print $O <<EOD;
username: $username
password: $epass
EOD
close $O or die "Error closing pipe to gpg: $!\n";
     
     use File::Copy;
     open ($O, "| sendmail root");
print $O <<EOD;
From: root
To: root
Subject: New password.

EOD
$O->flush;
     copy $tmp, $O;
#print $O `cat $tmp`;
close $O or die "Error closing pipe to sendmail: $!\n";
     
print "
Your request was sent to Root.  PLEASE WAIT for email confirmation
that the change was completed.\n";
     exit;
     
12.8.2. Less Aggressive Pass Command

The following code block implements the ./inst pass command, used by @@ -7261,13 +7478,13 @@ the administrator to update private/members.yml before running my $O = new IO::File; open ($O, "| sendmail $user\@$domain_priv") or die "Could not pipe to sendmail: $!\n";
    print $O "From: <root>
To: <$user>
Subject: Password change.

Your new password has been distributed to the servers.

As always: please email root with any questions or concerns.\n";
    close $O or die "pipe to sendmail failed: $!\n"; exit; } @@ -7309,8 +7526,8 @@ users:resetpassword command using expect(1).

12.8.3. Installing the Less Aggressive passwd

    The following Ansible tasks install the less aggressive passwd @@ -7378,8 +7595,8 @@ configuration so that the email to root can be encrypted.

12.9. The Old Command

    The old command disables a member's account (and thus their clients). @@ -7422,8 +7639,8 @@ The old command disables a member's account (and thus their clients

12.10. The Client Command

The client command registers the public key of a client wishing to @@ -7508,8 +7725,8 @@ better support in NetworkManager soon.) die "$user: does not exist\n" if !defined $member && $type ne "campus";
    my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
        = map { [ (split / /), "" ] } @{$yaml->{"clients"}};
    my @member_peers = (); for my $u (sort keys %$members) { @@ -7548,17 +7765,17 @@ better support in NetworkManager soon.) } my $core_wg_addr = hostnum_to_ipaddr (2, $public_wg_net_cidr);
    my $extra_front_config = "
PostUp = resolvectl dns %i $core_addr
PostUp = resolvectl domain %i $domain_priv

# Core
[Peer]
PublicKey = $core_wg_pubkey
AllowedIPs = $core_wg_addr
AllowedIPs = $private_net_cidr
AllowedIPs = $wild_net_cidr
AllowedIPs = $campus_wg_net_cidr\n";
    write_wg_server ("private/front-wg0.conf", \@member_peers, hostnum_to_ipaddr_cidr (1, $public_wg_net_cidr), @@ -7590,19 +7807,19 @@ better support in NetworkManager soon.)
my ($file, $peers, $addr_cidr, $port, $extra) = @_; my $O = new IO::File; open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
    print $O "[Interface]
Address = $addr_cidr
ListenPort = $port
PostUp = wg set %i private-key /etc/wireguard/private-key$extra";
    for my $p (@$peers) { my ($n, $h, $t, $k, $u) = @$p; next if $k =~ /^-/; my $ip = hostnum_to_ipaddr ($h, $addr_cidr);
        print $O "
# $n
[Peer]
PublicKey = $k
AllowedIPs = $ip\n";
    } close $O or die "Could not close $file.tmp: $!\n"; rename ("$file.tmp", $file) @@ -7616,29 +7833,29 @@ better support in NetworkManager soon.) open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
    my $DNS = ($type eq "android"
               ? "
DNS = $core_addr
Domain = $domain_priv"
               : "
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i $core_addr
PostUp = resolvectl domain %i $domain_priv");
    my $WILD = ($file eq "public.conf"
                ? "
AllowedIPs = $wild_net_cidr"
                : "");
    print $O "[Interface]
Address = $addr$DNS

[Peer]
PublicKey = $pubkey
EndPoint = $endpt
AllowedIPs = $server_addr
AllowedIPs = $private_net_cidr$WILD
AllowedIPs = $public_wg_net_cidr
AllowedIPs = $campus_wg_net_cidr\n";
    close $O or die "Could not close $file.tmp: $!\n"; rename ("$file.tmp", $file) or die "Could not rename $file.tmp: $!\n"; @@ -7648,31 +7865,31 @@ better support in NetworkManager soon.) { my ($hostnum, $net_cidr) = @_;

    # Assume 24bit subnet, 8bit hostnum.
    # Find a Perl library for more generality?
    die "$hostnum: hostnum too large\n" if $hostnum > 255;
    my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
    die if !$prefix;
    return "$prefix.$hostnum";
}

sub hostnum_to_ipaddr_cidr ($$)
{
    my ($hostnum, $net_cidr) = @_;

    # Assume 24bit subnet, 8bit hostnum.
    # Find a Perl library for more generality?
    die "$hostnum: hostnum too large\n" if $hostnum > 255;
    my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
    die if !$prefix;
    return "$prefix.$hostnum/24";
}

12.11. Institute Command Help

    This should be the last block tangled into the inst script. It @@ -7688,8 +7905,8 @@ above.

13. Testing

    The example files in this document, ansible.cfg and hosts as well @@ -7708,7 +7925,7 @@ simulation is the VirtualBox host.

The next two sections list the steps taken to create the simulated Core, Gate and Front machines, and connect them to their networks. The process is similar to that described in The (Actual) Hardware, but is covered in detail here where the VirtualBox hypervisor can be assumed and exact command lines can be given (and copied during re-testing). The remaining sections describe the manual testing @@ -7724,8 +7941,8 @@ HTML version of the latest revision can be found on the official web site at https://www.virtualbox.org/manual/UserManual.html.

13.1. The Test Networks

The networks used in the test: @@ -7767,11 +7984,11 @@ following VBoxManage commands. --network 192.168.15.0/24 \ --enable --dhcp on --ipv6 off VBoxManage natnetwork start --netname premises
VBoxManage hostonlyif create                                # vboxnet0
VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
VBoxManage dhcpserver modify --interface=vboxnet0 --disable
VBoxManage hostonlyif create                                # vboxnet1
VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2

    @@ -7789,15 +8006,15 @@ on the private 192.168.15.0/24 network.

13.2. The Test Machines

The virtual machines are created by VBoxManage command lines in the following sub-sections. They each start with a recent Debian release (e.g. debian-12.5.0-amd64-netinst.iso) in their simulated DVD drives. As in The Hardware preparation process being simulated, a few additional software packages are installed. Unlike in The Hardware preparation, machines are moved to their final networks and then remote access is authorized. (They are not accessible via ssh on the VirtualBox NAT network where they first boot.) @@ -7809,8 +8026,8 @@ privileged accounts on the virtual machines, they are prepared for configuration by Ansible.

13.2.1. A Test Machine

    The following shell function contains most of the VBoxManage @@ -7904,7 +8121,7 @@ appropriate responses to the prompts are given in the list below.

  • Partition disks
    • Partitioning method: Guided - use entire disk
• Select disk to partition: SCSI3 (0,0,0) (sda) - …
    • Partitioning scheme: All files in one partition
    • Finish partitioning and write changes to disk: Continue
    • Write the changes to disks? Yes
    • @@ -7929,7 +8146,7 @@ appropriate responses to the prompts are given in the list below.
    • Install the GRUB boot loader
      • Install the GRUB boot loader to your primary drive? Yes
      • -
      • Device for boot loader installation: /dev/sda (ata-VBOX…
      • +
      • Device for boot loader installation: /dev/sda (ata-VBOX…
    @@ -7941,8 +8158,8 @@ preparation (below).

  • -
    -

    13.2.2. The Test Front Machine

    +
    +

    13.2.2. The Test Front Machine

    The front machine is created with 512MiB of RAM, 4GiB of disk, and @@ -7955,7 +8172,7 @@ After Debian is installed (as detailed above) front is shut down an its primary network interface moved to the simulated Internet, the NAT network premises. front also gets a second network interface, on the host-only network vboxnet1, to make it directly accessible to -the administrator's notebook (as described in The Test Networks). +the administrator's notebook (as described in The Test Networks).

    @@ -7987,13 +8204,13 @@ Note that there is no pre-provisioning for front, which is never deployed on a frontier, always in the cloud. Additional Debian packages are assumed to be readily available. Thus Ansible installs them as necessary, but first the administrator authorizes remote -access by following the instructions in the final section: Ansible +access by following the instructions in the final section: Ansible Test Authorization.
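The interface shuffle described above can be sketched with VBoxManage. This is a hypothetical sketch, not quoted from this document: the adapter numbers and flag spellings are assumptions, though the network names (`premises`, `vboxnet1`) and the machine name `front` are from the text.

```shell
# Sketch (assumed flags): with front shut down, reattach its first
# NIC to the NAT network "premises" (the simulated Internet) and add
# a second NIC on the host-only network vboxnet1.
VBoxManage modifyvm front --nic1 natnetwork --nat-network1 premises
VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet1
VBoxManage startvm front --type headless
```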

    -
    -

    13.2.3. The Test Gate Machine

    +
    +

    13.2.3. The Test Gate Machine

    The gate machine is created with the same amount of RAM and disk as @@ -8008,7 +8225,7 @@ create_vm

    -After Debian is installed (as detailed in A Test Machine) and the +After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator logs in and installs several additional software packages.

    @@ -8109,12 +8326,12 @@ Ethernet interface is temporarily configured with an IP address.

    Finally, the administrator authorizes remote access by following the -instructions in the final section: Ansible Test Authorization. +instructions in the final section: Ansible Test Authorization.

    -
    -

    13.2.4. The Test Core Machine

    +
    +

    13.2.4. The Test Core Machine

    The core machine is created with 1GiB of RAM and 6GiB of disk. @@ -8131,7 +8348,7 @@ create_vm

    -After Debian is installed (as detailed in A Test Machine) and the +After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator logs in and installs several additional software packages.

    @@ -8197,12 +8414,12 @@ Netplan soon.)

    Finally, the administrator authorizes remote access by following the -instructions in the next section: Ansible Test Authorization. +instructions in the next section: Ansible Test Authorization.

    -
    -

    13.2.5. Ansible Test Authorization

    +
    +

    13.2.5. Ansible Test Authorization

    To authorize Ansible's access to the three test machines, they must @@ -8213,9 +8430,9 @@ key to each test machine.

    SRC=Secret/ssh_admin/id_rsa.pub
    -scp $SRC sysadm@192.168.57.3:admin_key # Front
    -scp $SRC sysadm@192.168.56.2:admin_key # Gate
    -scp $SRC sysadm@192.168.56.1:admin_key # Core
    +scp $SRC sysadm@192.168.57.3:admin_key # Front
    +scp $SRC sysadm@192.168.56.2:admin_key # Gate
    +scp $SRC sysadm@192.168.56.1:admin_key # Core
     
    @@ -8268,8 +8485,8 @@ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.57.3
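Once the key file has been copied over, it still has to land in `sysadm`'s `authorized_keys` on each test machine before key-based logins are accepted. A minimal sketch of that step, run as `sysadm` on each machine (the exact commands are an assumption, not quoted from this document; the file layout and permissions are the usual OpenSSH ones):

```shell
# Sketch (assumed follow-up on each test machine, run as sysadm):
# install the scp'd admin key so Ansible's ssh logins are accepted.
KEY="${KEY:-$HOME/admin_key}"        # the file scp'd over above
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
if [ -f "$KEY" ]; then cat "$KEY" >> "$HOME/.ssh/authorized_keys"; fi
chmod 600 "$HOME/.ssh/authorized_keys"
```

OpenSSH refuses keys in a world-readable `authorized_keys` or a group-writable `~/.ssh`, hence the explicit `chmod` lines.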
    -
    -

    13.3. Configure Test Machines

    +
    +

    13.3. Configure Test Machines

    At this point the three test machines core, gate, and front are @@ -8287,8 +8504,8 @@ not.

    -
    -

    13.4. Test Basics

    +
    +

    13.4. Test Basics

    At this point the test institute is just core, gate and front, @@ -8309,8 +8526,8 @@ forwarding (and NATing). On core (and gate):

    -
    ping -c 1 8.8.4.4      # dns.google
    -ping -c 1 192.168.15.5 # front_addr
    +
    ping -c 1 8.8.4.4      # dns.google
    +ping -c 1 192.168.15.5 # front_addr
     
    @@ -8350,12 +8567,12 @@ instant attention).

    -
    -

    13.5. The Test Nextcloud

    +
    +

    13.5. The Test Nextcloud

    Further tests involve Nextcloud account management. Nextcloud is -installed on core as described in Configure Nextcloud. Once +installed on core as described in Configure Nextcloud. Once /Nextcloud/ is created, ./inst config core will validate or update its configuration files.

    @@ -8377,8 +8594,8 @@ with the ./inst client command.

    -
    -

    13.6. Test New Command

    +
    +

    13.6. Test New Command

    A member must be enrolled so that a member's client machine can be @@ -8398,8 +8615,8 @@ Take note of Dick's initial password.

    -
    -

    13.7. The Test Member Notebook

    +
    +

    13.7. The Test Member Notebook

    A test member's notebook is created next, much like the servers, @@ -8427,7 +8644,7 @@ behind) the access point.

    -Debian is installed much as detailed in A Test Machine except that +Debian is installed much as detailed in A Test Machine except that the SSH server option is not needed and the GNOME desktop option is. When the machine reboots, the administrator logs into the desktop and installs a couple additional software packages (which @@ -8440,8 +8657,8 @@ require several more).

    -
    -

    13.8. Test Client Command

    +
    +

    13.8. Test Client Command

    The ./inst client command is used to register the public key of a @@ -8469,11 +8686,11 @@ command, generating campus.conf and public.conf files.

    -
    -

    13.9. Test Campus WireGuard™ Subnet

    +
    +

    13.9. Test Campus WireGuard™ Subnet

    -The campus.conf WireGuard™ configuration file (generated in Test +The campus.conf WireGuard™ configuration file (generated in Test Client Command) is transferred to dick, which is at the Wi-Fi access point's IP address, host 2 on the wild Ethernet.

    @@ -8505,17 +8722,17 @@ A few basic tests are then performed in a terminal.
    systemctl status
    -ping -c 1 8.8.8.8      # dns.google
    -ping -c 1 192.168.56.1 # core
    -host dns.google
    +ping -c 1 8.8.8.8      # dns.google
    +ping -c 1 192.168.56.1 # core
    +host dns.google
     host core.small.private
     host www
     
    -
    -

    13.10. Test Web Pages

    +
    +

    13.10. Test Web Pages

    Next, the administrator copies Backup/WWW/ (included in the @@ -8554,8 +8771,8 @@ will warn but allow the luser to continue.

    -
    -

    13.11. Test Web Update

    +
    +

    13.11. Test Web Update

    Modify /WWW/live/index.html on core and wait 15 minutes for it to @@ -8569,8 +8786,8 @@ Hack /home/www/index.html on front and observe the result at
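The first check above is easy to script; a small sketch (the timestamp comment and helper function are illustrative assumptions — any visible edit to the page works):

```shell
# Sketch: append a visible timestamp comment to a page so the copy
# that appears after the 15 minute update cycle is easy to spot.
stamp_page() {
    printf '<!-- test update %s -->\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$1"
    tail -n 1 "$1"    # echo what was appended
}
# On core, per the text (run when testing for real):
# stamp_page /WWW/live/index.html
```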

    -
    -

    13.12. Test Nextcloud

    +
    +

    13.12. Test Nextcloud

    Nextcloud is typically installed and configured after the first @@ -8578,9 +8795,9 @@ Ansible run, when core has Internet access via gate. installation directory /Nextcloud/nextcloud/ appears, the Ansible code skips parts of the Nextcloud configuration. The same installation (or restoration) process used on Core is used on core -to create /Nextcloud/. The process starts with Create -/Nextcloud/, involves Restore Nextcloud or Install Nextcloud, -and runs ./inst config core again 8.23.6. When the ./inst +to create /Nextcloud/. The process starts with Create +/Nextcloud/, involves Restore Nextcloud or Install Nextcloud, +and runs ./inst config core again 8.23.6. When the ./inst config core command is happy with the Nextcloud configuration on core, the administrator uses Dick's notebook to test it, performing the following tests on dick's desktop. @@ -8658,8 +8875,8 @@ the calendar.

    -
    -

    13.13. Test Email

    +
    +

    13.13. Test Email

    With Evolution running on the member notebook dick, one second email @@ -8687,8 +8904,8 @@ Outgoing email is also tested. A message to

    -
    -

    13.14. Test Public VPN

    +
    +

    13.14. Test Public VPN

    At this point, dick can move abroad, from the campus Wi-Fi @@ -8717,9 +8934,9 @@ Again, some basics are tested in a terminal.

    -
    ping -c 1 8.8.4.4      # dns.google
    -ping -c 1 192.168.56.1 # core
    -host dns.google
    +
    ping -c 1 8.8.4.4      # dns.google
    +ping -c 1 192.168.56.1 # core
    +host dns.google
     host core.small.private
     host www
     
    @@ -8746,8 +8963,8 @@ calendar events.

    -
    -

    13.15. Test Pass Command

    +
    +

    13.15. Test Pass Command

    To test the ./inst pass command, the administrator logs in to core @@ -8794,8 +9011,8 @@ Finally, the administrator verifies that dick can login on co

    -
    -

    13.16. Test Old Command

    +
    +

    13.16. Test Old Command

    One more institute command is left to exercise. The administrator @@ -8815,16 +9032,16 @@ fail.

    -
    -

    14. Future Work

    +
    +

    14. Future Work

The small institute's network, as currently defined in this
document, is lacking in a number of respects.

    -
    -

    14.1. Deficiencies

    +
    +

    14.1. Deficiencies

    The current network monitoring is rudimentary. It could use some @@ -8850,16 +9067,16 @@ not available on Front, yet.

    -
    -

    14.2. More Tests

    +
    +

    14.2. More Tests

    The testing process described in the previous chapter is far from complete. Additional tests are needed.

    -
    -

    14.2.1. Backup

    +
    +

    14.2.1. Backup

    The backup command has not been tested. It needs an encrypted @@ -8868,8 +9085,8 @@ partition with which to sync? And then some way to compare that to

    -
    -

    14.2.2. Restore

    +
    +

    14.2.2. Restore

    The restore process has not been tested. It might just copy Backup/ @@ -8879,11 +9096,11 @@ perhaps permissions too. It could also use an example

    -
    -

    14.2.3. Campus Disconnect

    +
    +

    14.2.3. Campus Disconnect

    -Email access (IMAPS) on front is… difficult to test unless +Email access (IMAPS) on front is… difficult to test unless core's fetchmails are disconnected, i.e. the whole campus is disconnected, so that new email stays on front long enough to be seen. @@ -8904,8 +9121,8 @@ could be used.

    -
    -

    15. Appendix: The Bootstrap

    +
    +

    15. Appendix: The Bootstrap

    Creating the private network from whole cloth (machines with recent @@ -8925,11 +9142,11 @@ etc.: quite a bit of temporary, manual localnet configuration just to get to the additional packages.

    -
    -

    15.1. The Current Strategy

    +
    +

    15.1. The Current Strategy

    -The strategy pursued in The Hardware is two phase: prepare the servers +The strategy pursued in The Hardware is two phase: prepare the servers on the Internet where additional packages are accessible, then connect them to the campus facilities (the private Ethernet switch, Wi-Fi AP, ISP), manually configure IP addresses (while the DHCP client silently @@ -8937,8 +9154,8 @@ fails), and avoid names until BIND9 is configured.

    -
    -

    15.2. Starting With Gate

    +
    +

    15.2. Starting With Gate

    The strategy of Starting With Gate concentrates on configuring Gate's @@ -8982,8 +9199,8 @@ ansible-playbook -l core site.yml
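Under this strategy the plays would presumably be run host by host as each machine comes online. A hedged sketch (the playbook name `site.yml` and the `-l core` limit come from the text; the `gate` and `front` limits and their ordering are assumptions):

```shell
# Sketch: configure each machine as it becomes reachable.
ansible-playbook -l gate  site.yml   # Gate first, to get the campus online
ansible-playbook -l core  site.yml   # then Core, per the text
ansible-playbook -l front site.yml   # Front last (assumed)
```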

    -
    -

    15.3. Pre-provision With Ansible

    +
    +

    15.3. Pre-provision With Ansible

    A refinement of the current strategy might avoid the need to maintain @@ -9028,7 +9245,7 @@ done, and is left as a manual exercise.

    4

    Front is accessible via Gate but routing from the host address on vboxnet0 through Gate requires extensive interference with the -routes on Front and Gate, making the simulation less… similar. +routes on Front and Gate, making the simulation less… similar.

    @@ -9036,7 +9253,7 @@ routes on Front and Gate, making the simulation less… similar.

    Author: Matt Birkholz

    -

    Created: 2025-06-28 Sat 10:50

    +

    Created: 2025-09-18 Thu 17:59

    Validate

    -- 2.25.1