From: Matt Birkholz
This small institute has a public server on the Internet, Front.  A
VPN connects to Front, making the institute email, cloud,
etc. available to members off campus.
        =
      _|||_
=-The-Institute-=
The institute uses OpenPGP encryption to secure message content.

This small institute prizes its privacy, so there is little or no
data retained longer than necessary (e.g. for more than a month)
because of this assumption.

The small institute's network is designed to provide a number of
services, each with its own settings and policies, described in the
subsections below.  On first reading, those subsections should be
skipped; they reference particulars first introduced in the following
chapter.
The institute has a public domain, e.g. small.example.org, and a
private domain, e.g. small.private, in which hosts are known by short
names like core.
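For concreteness, a sketch of how these two particulars might be
declared, using the example values that appear throughout this
document (the file placement is an assumption consistent with later
sections):

public/vars.yml
domain_name: small.example.org
domain_priv: small.private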
Front provides the public SMTP (Simple Mail Transfer Protocol)
service for the institute.  The common setting for the maximum
message size is given in a code block labeled postfix-message-size
and is included in the Postfix configurations wherever
<<postfix-message-size>> appears.

The institute aims to accommodate encrypted email containing short
videos, so both Front's and Core's Postfix configurations must
handle maxi-messages.
postfix-message-size
- { p: message_size_limit, v: 104857600 }

postfix-queue-times
- { p: delay_warning_time, v: 1h }
- { p: maximal_queue_lifetime, v: 4h }
- { p: bounce_queue_lifetime, v: 4h }
The institute's Postfix configurations disable relaying (other than
for the local networks).

postfix-relaying
- p: smtpd_relay_restrictions
  v: permit_mynetworks reject_unauth_destination

Both Front and Core deliver email to Maildir/ in home directories;
the home_mailbox setting has this effect.

postfix-maildir
- { p: home_mailbox, v: Maildir/ }

These common settings are included in the respective roles below.
The Dovecot settings on both Front and Core disable POP and require
TLS-protected IMAP.

The official documentation for Dovecot once was a Wiki but now is
found at doc.dovecot.org.

dovecot-tls
protocols = imap
ssl = required

dovecot-ports
service imap-login {
  inet_listener imap {
    port = 0
  }
}
Both Front and Core store member email in users' Maildir
directories.

dovecot-maildir
mail_location = maildir:~/Maildir

Each host's Dovecot configuration (below) combines these common
settings with host specific settings for ssl_cert and ssl_key.
Front provides the public HTTP service that serves institute web pages
at e.g. https://small.example.org/. The small institute initially
runs with a self-signed, "snake oil" server certificate, causing
browsers to warn of possible fraud, but this certificate is easily
replaced by one signed by a recognized authority, as discussed in The
Front Role.
Core runs Nextcloud to provide a private institute cloud at
https://core.small.private/nextcloud/.  It is managed manually per
The Nextcloud Server Administration Guide.  The code and data,
including especially database dumps, are stored in /Nextcloud/ which
is included in Core's backup procedure as described in Backups.  The
default Apache2 configuration expects to find the web scripts in
/var/www/nextcloud/, so the institute symbolically links this to
/Nextcloud/nextcloud/.
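The link itself can be created with a single command, sketched here
under the assumption that /Nextcloud/ is already mounted:

sudo ln -s /Nextcloud/nextcloud /var/www/nextcloud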
A small institute has just a handful of members. For simplicity (and
thus security) static configuration files are preferred over complex
account management systems, LDAP, Active Directory, and the like. The
Ansible scripts configure the same set of user accounts on Core and
Front.  The Institute Commands (e.g. ./inst new dick) capture the
processes of enrolling, modifying and retiring members of the
institute. They update the administrator's membership roll, and run
Ansible to create (and disable) accounts on Core, Front, Nextcloud,
File sharing is accomplished via the campus cloud, and the resulting
desktop files can all be private (readable and writable only by the
owner) by default.
The institute avoids the use of the root account (uid 0) because it
is all too easy to misuse.  Instead the sudo
command is used to consciously (conscientiously!) run specific scripts
and programs as root. When installation of a Debian OS leaves the
host with no user accounts, just the root account, the next step is
to create a system administrator's account named sysadm and to give
it permission to use the sudo command (e.g. as described in The
Front Machine). When installation prompts for the name of an
initial, privileged user account the same name is given (e.g. as
described in The Core Machine).  Installation may not prompt and
still create an initial user account with a distribution specific name
(e.g. pi). Any name can be used as long as it is provided as the
value of ansible_user in hosts.  Its password is specified by a
vault-encrypted variable in the Secret/become.yml file.  (The hosts
and Secret/become.yml files are described in The Ansible
Configuration.)
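For illustration only, an inventory entry consistent with this
description might look like the following sketch (the group name and
address are assumptions):

hosts
[front]
192.168.15.4 ansible_user=sysadm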
The institute's Core uses a special account named monkey to run cron
jobs and web scripts; a corresponding account is created on Front as
well.

The institute keeps its "master secrets" in an encrypted volume on
the administrator's notebook.  These are used together with
the administrator's password keep, to install a new SSH key, for
example.

The small institute backs up its data, but not so much so that
nothing is ever lost: member home directories and /Nextcloud/ are
included (and thus the files mentioned in the Nextcloud database
dump).
private/backup
#!/bin/bash -e
#
# DO NOT EDIT.
#
This chapter introduces Ansible variables intended to simplify a
site's particular configuration.  The public and private particulars
are stored in separate files: public/vars.yml and private/vars.yml.

The example settings in this document configure VirtualBox VMs as
described in the Testing chapter.  For more information about how a
small institute turns the example Ansible code into a working Ansible
configuration, see chapter The Ansible Configuration.

The small institute's domain name is used quite frequently in the
Ansible code, so it is defined in public/vars.yml as domain_name,
together with the private domain:

domain_priv: small.private

The small institute uses a private Ethernet, two VPNs, and a "wild",
untrusted Ethernet.  Each network is assigned a random subnet; an
example result follows the code.
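The picking code itself is not reproduced above; as a rough, assumed
sketch (not the author's actual procedure), a random RFC 1918 /24
could be chosen like this:

printf '=> 10.%d.%d.0/24\n' $((RANDOM % 256)) $((RANDOM % 256))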
=> 10.62.17.0/24
The subnet addresses are needed in the code blocks below.  The small
institute treats these addresses as sensitive information, so again
the code block below "tangles" into private/vars.yml rather than
public/vars.yml.  Two of the addresses are in 192.168 subnets because
they are part of a test configuration using mostly-default
VirtualBoxes (described here).

Some of the settings below want a network address and mask as
separate values, hence the variables suffixed _net_and_mask rather
than _net_cidr.
network-vars
private_net:
  "{{ private_net_cidr | ansible.utils.ipaddr('network') }}"
private_net_mask:
  "{{ private_net_cidr | ansible.utils.ipaddr('netmask') }}"
private_net_and_mask:
  "{{ private_net }} {{ private_net_mask }}"
campus_wg_net:
  "{{ campus_wg_net_cidr | ansible.utils.ipaddr('network') }}"
campus_wg_net_mask:
  "{{ campus_wg_net_cidr | ansible.utils.ipaddr('netmask') }}"
campus_wg_net_and_mask:
  "{{ campus_wg_net }} {{ campus_wg_net_mask }}"

This is obvious, site-independent, non-private boilerplate and so goes
in a defaults/main.yml file in each role.  The variables can then be
overridden by adding them to the site-specific private/vars.yml.
The block is referenced with <<network-vars>> and tangled into each
role's defaults/main.yml file.
The institute prefers to configure its services with IP addresses
rather than domain names, and one of the most important, for secure
and reliable operation, is Front's public IP address, which is more
dependable than the institute's Internet domain name.
public/vars.yml
front_addr: 192.168.15.4
The example address is a private network address because the example
configuration is intended to run in a test jig made up of VirtualBox
virtual machines and networks.
Finally, five host addresses are needed frequently in the Ansible
code.  Each is made available in both CIDR and IPv4 address formats.
Again this is site-independent, non-private boilerplate referenced
with address-vars in the defaults/main.yml files.
address-vars
core_addr_cidr: "{{ private_net_cidr | ansible.utils.ipaddr('1') }}"
gate_addr_cidr: "{{ private_net_cidr | ansible.utils.ipaddr('2') }}"
gate_wild_addr_cidr:
  "{{ wild_net_cidr | ansible.utils.ipaddr('1') }}"
front_wg_addr_cidr:
  "{{ public_wg_net_cidr | ansible.utils.ipaddr('1') }}"
core_wg_addr_cidr:
  "{{ public_wg_net_cidr | ansible.utils.ipaddr('2') }}"
core_addr: "{{ core_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_addr: "{{ gate_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_wild_addr:
  "{{ gate_wild_addr_cidr | ansible.utils.ipaddr('address') }}"
front_wg_addr:
  "{{ front_wg_addr_cidr | ansible.utils.ipaddr('address') }}"
core_wg_addr:
  "{{ core_wg_addr_cidr | ansible.utils.ipaddr('address') }}"

The small institute's network was built by its system administrator
using Ansible on a trusted notebook.  The Ansible configuration and
scripts were generated by "tangling" the Ansible code included here.
(The Ansible Configuration describes how to do this.)  The following
sections describe how Front, Gate and Core were prepared for Ansible.

Front is the small institute's public facing server, a virtual
machine on the Internet.  Keeping Front simple makes it possible to
quickly re-provision a new Front machine from a frontier Internet
café using just the administrator's notebook.

The following example prepared a new front on a Digital Ocean
droplet, starting from a root shell on the new machine.

root@ubuntu#
The freshly created Digital Ocean droplet came with just one account,
root, but the small institute avoids remote access to the "super
-user" account (per the policy in The Administration Accounts), so the
+user" account (per the policy in The Administration Accounts), so the
administrator created a sysadm account with the ability to request
escalated privileges via the sudo command.
The password was generated by gpw, saved in the administrator's
password keep, and later added to Secret/become.yml as shown below.
(Producing a working Ansible configuration with the Secret/become.yml
file is described in The Ansible Configuration.)
notebook_ >>Secret/become.yml

After creating the sysadm account on the droplet, the administrator
concatenated a personal public ssh key and the key found in
Secret/ssh_admin/ (created by The CA Command) into an admin_keys
file, copied it to the droplet, and installed it as the
authorized_keys for sysadm.

notebook$ ssh-keygen -f ~/.ssh/known_hosts -R 159.65.75.60
The last command removed the old host key from the administrator's
known_hosts file.  The next few commands served to test
password-less login as well as the privilege escalation command
sudo.
The Droplet needed a couple of additional software packages
immediately.  The wireguard package was needed to generate the
Droplet's private key.  The systemd-resolved package was installed so
that the subsequent reboot gets ResolveD configured properly (else
resolvectl hangs, causing wg-quick@wg0 to hang…).  The rest are
included just to speed up (re)testing of "prepared" test machines,
e.g. prepared as described in The Test Front Machine.
notebook$ ssh sysadm@159.65.75.60
sysadm@ubuntu$ sudo apt install wireguard systemd-resolved \
sysadm@ubuntu_ unattended-upgrades postfix dovecot-imapd \
sysadm@ubuntu_ rsync apache2 kamailio

With WireGuard™ installed, the following commands generated a new
private key and displayed its public key.

sysadm@ubuntu$ umask 077
sysadm@ubuntu$ wg genkey \
sysadm@ubuntu_ | sudo tee /etc/wireguard/private-key \
sysadm@ubuntu_ | wg pubkey
S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=

The public key is copied and pasted into private/vars.yml as the
value of front_wg_pubkey (as in the example here).
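For example, assuming the key generated above (an actual site pastes
its own):

private/vars.yml
front_wg_pubkey: S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=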
After collecting Front's public key, the administrator disabled root
logins on the droplet.  The last command below tested that root
logins were indeed denied at Front's address.
Core is the small institute's private file, email, cloud and whatnot
server.

The following example prepared a new core on a PC with Debian 11
freshly installed. During installation, the machine was named core,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm was created (per the policy in
The Administration Accounts).

Is the information correct? [Y/n]

The password was generated by gpw, saved in the administrator's
password keep, and later added to Secret/become.yml as shown below.
(Producing a working Ansible configuration with the Secret/become.yml
file is described in The Ansible Configuration.)

The administrator then downloaded the software packages Core would
need (e.g. via a cable modem) and installed them as shown below, so
that Core could receive its final configuration "in position" (on a
frontier).

$ sudo apt install wireguard systemd-resolved unattended-upgrades \
_                  chrony isc-dhcp-server bind9 apache2 postfix \
_                  dovecot-imapd fetchmail rsync gnupg

$ sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp} \
_                  php-{json,mysql,mbstring,intl,imagick,xml,zip} \
_                  imagemagick libapache2-mod-php

$ sudo apt install nagios4 monitoring-plugins-basic \
_                  nagios-nrpe-plugin

Next, the administrator concatenated a personal public ssh key and
the key found in Secret/ssh_admin/ (created by The CA Command) into
an admin_keys file, copied it to Core, and installed it as the
authorized_keys for sysadm.

Core was then given a new, private IP address and a default route.
In the example command lines below, the address 10.227.248.1 was
generated by the random subnet address picking procedure described in
Subnets, and is named core_addr in the Ansible code.  The second
address, 10.227.248.2, is the corresponding address for Gate's
Ethernet interface, and is named gate_addr in the Ansible code.
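The command lines themselves might have looked like the following
sketch, in which the interface name enp0s3 is a stand-in assumption:

$ sudo ip address add 10.227.248.1/24 dev enp0s3
$ sudo ip route add default via 10.227.248.2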
At this point Core was ready for provisioning with Ansible.
Gate is the small institute's route to the Internet, and the campus
firewall.  The campus ISP connection might be a cable modem, a USB
port tethered to a phone, a wireless adapter connected to a
campground Wi-Fi access point, etc.

=============== | ==================================================
                | Premises
  (Campus ISP)
        +----Ethernet switch

While Gate and Core really need to be separate machines for security,
the premises' existing Wi-Fi may serve as the campus wireless network.
This avoids the need for a second Wi-Fi access point and leads to the
following topology.

=============== | ==================================================
                | Premises
  (House ISP)

(Gate's firewall rules determine whether its Ethernet and Wi-Fi
clients are allowed to communicate.)

The Ansible code in this document is somewhat dependent on the
physical network shown in the Overview wherein Gate has three network
interfaces.
The following example prepared a new gate on a PC with Debian 11
freshly installed.  During installation, the machine was named gate,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm was created (per the policy in
The Administration Accounts).

Is the information correct? [Y/n]

The password was generated by gpw, saved in the administrator's
password keep, and later added to Secret/become.yml as shown below.
(Producing a working Ansible configuration with the Secret/become.yml
file is described in The Ansible Configuration.)

The administrator then downloaded the software packages Gate would
need (e.g. via a cable modem) and installed them as shown below.

$ sudo apt install systemd-resolved unattended-upgrades \
_                  ufw postfix wireguard lm-sensors \
_                  nagios-nrpe-server

With the packages installed, Gate was ready to proceed.

Next, the administrator concatenated a personal public ssh key and
the key found in Secret/ssh_admin/ (created by The CA Command) into
an admin_keys file, copied it to Gate, and installed it as the
authorized_keys for sysadm.

Gate was then given a new, private IP address.  In the example
command lines below, the address 10.227.248.2 was generated by the
random subnet address picking procedure described in Subnets, and is
named gate_addr in the Ansible code.

Gate was also connected to the USB Ethernet dongles cabled to the
campus Wi-Fi access point and the campus ISP, and the values of three
variables (gate_lan_mac, gate_wild_mac, and gate_isp_mac in
private/vars.yml) match the actual hardware MAC addresses of the
dongles.  (For more information, see the tasks in section 9.3.)

At this point Gate was ready for provisioning with Ansible.
The all role contains tasks that are executed on all of the
institute's servers. At the moment there is just the one.
The all role's task contains a reference to a common institute
particular, the institute's domain_name, a variable found in the
public/vars.yml file.  Thus the first task of the all role is to
include the variables defined in this file (described in The
Particulars).  The code block below is the first to tangle into
roles_t/all/tasks/main.yml.
roles_t/all/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml

The systemd-networkd and systemd-resolved service units are not
enabled by default in Debian, though the upstream documentation
recommends using them.  The institute follows these recommendations
(and not the suggestion to enable every optional feature).

All servers should recognize the institute's Certificate Authority as
trustworthy, so its certificate is added to the set of trusted CAs on
each host.  More information about how the small institute manages its
X.509 certificates is available in Keys.
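Such a task might look like the following sketch; the certificate
path Secret/CA/pki/ca.crt is an assumption, not necessarily the path
created by The CA Command, and the handler simply runs Debian's
update-ca-certificates.

- name: Trust the institute CA.
  become: yes
  copy:
    src: ../Secret/CA/pki/ca.crt
    dest: /usr/local/share/ca-certificates/small.crt
    mode: u=r,g=r,o=r
  notify: Update CAs.

- name: Update CAs.
  become: yes
  command: update-ca-certificates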
The front role installs and configures the services expected on the
institute's publicly accessible "front door": email, web, VPN. The
virtual machine is prepared with an Ubuntu Server install and remote
access to a privileged, administrator's account. (For details, see
The Front Machine.)

The institute's server certificate and key are installed as
/etc/server.crt and /etc/server.key, perhaps with symbolic links to,
for example, /etc/letsencrypt/live/small.example.org/fullchain.pem.
The front role sets a number of variables to default values in its
defaults/main.yml file.

roles_t/front/defaults/main.yml
---
<<network-vars>>
<<address-vars>>
<<membership-rolls>>

The membership-rolls reference defines membership_rolls, which is
used to select an empty membership roll if one has not been written
yet.  (See section 12.7.)
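A sketch of the membership-rolls block, consistent with the
first_found lookup in the next task (the file names are assumptions):

membership-rolls
membership_rolls:
  - ../private/members.yml
  - ../private/members-empty.yml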
+
+The first task, as in The All Role, is to include the institute
particulars. The front role refers to private variables and the
membership roll, so these are included was well.
roles_t/front/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml

- name: Include private variables.
  include_vars: ../private/vars.yml

- name: Include members.
  include_vars: "{{ lookup('first_found', membership_rolls) }}"
  tags: accounts
This task ensures that Front's /etc/hostname and /etc/mailname are
correct.  The correct /etc/mailname is essential to proper email
delivery.

roles_t/front/tasks/main.yml
- name: Configure hostname.
become: yes
copy:
content: "{{ domain_name }}\n"
  dest: "{{ item }}"
  loop: [ /etc/hostname, /etc/mailname ]
- name: Update hostname.
become: yes
command: hostname -F /etc/hostname
  when: domain_name != ansible_fqdn
tags: actualizer
The administrator often needs to read (directories of) log files owned
by groups root and adm.  Adding the administrator's account to
these groups speeds up debugging.
The SSH service on Front needs to be known to Monkey.  The following
tasks ensure this by replacing the automatically generated keys with
those stored in Secret/ssh_front/etc/ssh/.
The small institute runs cron jobs and web scripts that generate
reports and perform checks. The un-privileged jobs are run by a
system account named monkey. One of Monkey's more important jobs on
Core is to run rsync to update the public web site on Front. Monkey
on Core will login as monkey on Front to synchronize the files (as
described in *Configure Apache2).  To do that without needing a
password, the monkey account on Front should authorize Monkey's SSH
key on Core.
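A task to that effect might look like the following sketch, assuming
Monkey's public key is kept in Secret/ssh_monkey/ (an assumed path):

- name: Authorize Monkey@Core.
  become: yes
  authorized_key:
    user: monkey
    key: "{{ lookup('file', '../Secret/ssh_monkey/id_rsa.pub') }}"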
Monkey uses Rsync to keep the institute's public web site up-to-date.
The institute prefers to install security updates as soon as possible.
User accounts are created immediately so that Postfix and Dovecot can
start delivering email right away, without returning "no such
recipient" replies.  The Account Management chapter describes the
members and usernames variables used below.

The servers on Front use the same certificate (and key) to
authenticate themselves to institute clients.  They share the
/etc/server.crt and /etc/server.key files, the latter readable only
by root.

Front uses Postfix to provide the institute's public SMTP service,
and uses the institute's domain name for its host name.  The default
configuration prompts during installation; the appropriate answers
are listed here but will be checked (and corrected) by Ansible tasks
below.

As discussed in The Email Service above, Front's Postfix configuration
includes site-wide support for larger message sizes, shorter queue
times, the relaying configuration, and the common path to incoming
emails.  These and a few Front-specific Postfix configurations are
described below, starting with the mynetworks setting,
via which Core relays messages from the campus.
postfix-front-networks
- p: mynetworks
  v: >-
    {{ public_wg_net_cidr }}
    127.0.0.0/8

The restrictions below do not require resolvable client host names,
which would be difficult for internal hosts, who do not have (public)
domain names.
postfix-front-restrictions
- p: smtpd_recipient_restrictions
  v: >-
    permit_mynetworks
    reject_unauth_pipelining

Front also applies header checks to strip identifying headers from
outgoing messages; incoming messages are delivered locally, without
such filtering.
postfix-header-checks
- p: smtp_header_checks
  v: regexp:/etc/postfix/header_checks.cf

postfix-header-checks-content
/^Received:/ IGNORE
/^User-Agent:/ IGNORE

The rest of Front's Postfix settings follow; Front keeps the
Debian default for inet_interfaces.
postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt }
- { p: smtpd_tls_key_file, v: /etc/server.key }
<<postfix-front-networks>>
<<postfix-front-restrictions>>

The following tasks install Postfix and its configuration, then
start and enable the service.

The institute's Front needs to deliver email addressed to a number of
common aliases as well as those advertised on the web site.  System
account aliases are also included, though some are
created by a more specialized role.

Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to
pick up messages.  Front's Dovecot configuration is largely the Debian
default with POP and IMAP (without TLS) support disabled.  This is a
bit "over the top" given that Core accesses Front via VPN, but helps
to ensure privacy even when members must, in extremis, access recent
email directly from their accounts on Front.  For more information
about Front's role in the institute's email services, see The Email
Service.

The following task starts Dovecot and enables it to start at every
reboot.

This is the small institute's public web site.  It is simple, static,
and thus (hopefully) difficult to subvert.  There are no server-side
scripts.  The HTTPS parameters below were taken from https://www
apache-ciphers
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder on
SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
'ECDHE-ECDSA-AES256-GCM-SHA384',
This cipher suite is used on all of the institute's web sites.

apache-userdir-front
UserDir /home/www-users
<Directory /home/www-users/>
Require all granted
AllowOverride None
Requests on port 80 are redirected to the corresponding HTTPS URLs.

apache-redirect-front
<VirtualHost *:80>
Redirect permanent / https://{{ domain_name }}/
</VirtualHost>
The global Apache2 directives below do not belong on the inside of a
VirtualHost block.  They should apply globally.

apache-front
ServerName {{ domain_name }}
ServerAdmin webmaster@{{ domain_name }}
DocumentRoot /home/www
The following task maintains the symbolic links in /home/www-users/,
pointing to the users' ~/Public/HTML/ directories.
src: /home/{{ item }}/Public/HTML
state: link
force: yes
    follow: false
loop: "{{ usernames }}"
when: members[item].status == 'current'
tags: accounts
7.15. Configure Public WireGuard™ Subnet
Front uses WireGuard™ to provide a public (Internet accessible) VPN
service.  Core has an interface on this VPN and is expected to forward
packets between it and the institute's other private networks.

The following tasks install WireGuard™, configure it with
private/front-wg0.conf (or private/front-wg0-empty.conf if it does
not exist), and enable the service.
roles_t/front/tasks/main.yml
- name: Enable IP forwarding.
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

- name: Install WireGuard™.
  become: yes
  apt: pkg=wireguard

- name: Configure WireGuard™.
  become: yes
  vars:
    srcs:
      - ../private/front-wg0.conf
      - ../private/front-wg0-empty.conf
  copy:
    src: "{{ lookup('first_found', srcs) }}"
    dest: /etc/wireguard/wg0.conf
    mode: u=r,g=,o=
    owner: root
    group: root
  notify: Restart WireGuard™.
  tags: accounts

- name: Start WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: started
  tags: actualizer

- name: Enable WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    enabled: yes

roles_t/front/handlers/main.yml
- name: Restart WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: restarted
  tags: actualizer
+The "empty" WireGuard⢠configuration file (below) is used until the
+./inst client command adds the first client, and generates an actual
+private/front-wg0.conf
.
+
+
+
+private/front-wg0-empty.conf
[Interface]
+Address = 10.177.87.1/24
+ListenPort = 39608
+PostUp = wg set %i private-key /etc/wireguard/private-key
+PostUp = resolvectl dns %i 192.168.56.1
+PostUp = resolvectl domain %i small.private
+
7.15.1. Example private/front-wg0.conf

The example private/front-wg0.conf below recognizes Core by its
public key and routes the institute's private networks to it.  It also
recognizes Dick's notebook and his (replacement) phone, assigning them
host numbers 4 and 6 on the VPN.

This is just an example.  The actual file is edited by the ./inst
client command and so is not tangled from the following block.
Example private/front-wg0.conf
[Interface]
Address = 10.177.87.1/24
ListenPort = 39608
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i 192.168.56.1
PostUp = resolvectl domain %i small.private

Below is the configuration of the WireGuard™ tunnel on Dick's
notebook, used abroad.  It recognizes Front by its public key and
routes the institute's networks through the tunnel.

[Interface]
PostUp = resolvectl domain %i small.private

[Peer]
EndPoint = 192.168.15.4:39608
PublicKey = S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
AllowedIPs = 10.177.87.1
AllowedIPs = 192.168.56.0/24
AllowedIPs = 10.84.139.0/24
7.16. Configure Kamailio
Front uses Kamailio to provide a SIP service on the public VPN so that
members abroad can chat privately.  This is a connection-less UDP
service, so the listen setting below specifies the actual IP, known
here as front_wg_addr.

kamailio
listen=udp:{{ front_wg_addr }}:5060

The Kamailio service must not be started before the wg0 device has
appeared.
  copy:
    content: |
      [Unit]
      Requires=sys-devices-virtual-net-wg0.device
      After=wg-quick@wg0.service
    dest: /etc/systemd/system/kamailio.service.d/depend.conf
  notify: Reload Systemd.

Finally, Kamailio can be configured and started.
8. The Core Role
The core role configures many essential campus network services as
well as the institute's private cloud, so the core machine has
horsepower (CPUs and RAM) and large disks and is prepared with a
Debian install and remote access to a privileged, administrator's
account.  (For details, see The Core Machine.)
8.1. Role Defaults

As in The Front Role, the core role sets a number of variables to
default values in its defaults/main.yml file.

roles_t/core/defaults/main.yml
---
<<network-vars>>
<<address-vars>>
<<membership-rolls>>

8.2. Include Particulars

The first task, as in The Front Role, is to include the institute
particulars and membership roll.
roles_t/core/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml
  tags: accounts

- name: Include private variables.
  include_vars: ../private/vars.yml
  tags: accounts

- name: Include members.
  include_vars: "{{ lookup('first_found', membership_rolls) }}"
  tags: accounts
8.3. Configure Hostname

This task ensures that Core's /etc/hostname and /etc/mailname are
correct.  Core accepts email addressed to the institute's public or
private domains; the correct /etc/mailname is essential to proper
email delivery.
8.4. Configure Systemd Resolved

Core runs the campus name server, so Resolved is configured to use it
(or dns.google), to include the institute's domain in its search
list, and to disable its cache and stub listener.
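The resulting configuration might look like the following sketch,
assuming the example campus addresses (Core's name server at
192.168.56.1, dns.google at 8.8.8.8, and the small.private search
domain):

[Resolve]
DNS=192.168.56.1
FallbackDNS=8.8.8.8
Domains=small.private
Cache=no
DNSStubListener=no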
8.5. Configure Core NetworkD

Core's network interface is statically configured using the
systemd-networkd configuration files 10-lan.link and 10-lan.network
installed in /etc/systemd/network/.  Those files statically assign
Core's IP address (as well as the campus name server and search
domain), and its default route through Gate.  A second route, through
Core itself to Front, is advertised to other hosts, and is routed
through a WireGuard™ interface connected to Front's public WireGuard™
VPN.
Note that the [Match] sections of the .network files should specify
only a MACAddress.  Getting systemd-udevd to rename interfaces has
thus far been futile (short of a reboot), so specifying a Name means
the interface does not match, leaving it un-configured (until the
next reboot).

The configuration needs the MAC address of the primary (only) NIC, an
example of which is given here.  (A clever way to extract that address
from ansible_facts would be appreciated.  The ansible_default_ipv4
fact was an empty hash at first boot on a simulated campus Ethernet.)
private/vars.yml
core_lan_mac: 08:00:27:b3:e5:5f
roles_t/core/tasks/main.yml
- name: Install 10-lan.link.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ core_lan_mac }}

      [Link]
      Name=lan
    dest: /etc/systemd/network/10-lan.link

- name: Install 10-lan.network.
  become: yes
  copy:
    content: |
      [Match]
      MACAddress={{ core_lan_mac }}

      [Network]
      Address={{ core_addr_cidr }}
      Gateway={{ gate_addr }}
      DNS={{ core_addr }}
      Domains={{ domain_priv }}
    dest: /etc/systemd/network/10-lan.network
  notify: Reload networkd.
roles_t/core/handlers/main.yml
- name: Reload networkd.
  become: yes
  command: networkctl reload
  tags: actualizer
8.6. Configure DHCP For the Private Ethernet

Core speaks DHCP (Dynamic Host Configuration Protocol) using the
Internet Software Consortium's DHCP server.  The server assigns
unique network addresses to hosts on the private Ethernet.

The example configuration file, private/core-dhcpd.conf, uses
RFC3442's extension to encode a second (non-default) static route.
The default route is through the campus ISP at Gate. A second route
directs campus traffic to the Front VPN through Core. This is just an
example file, with MAC addresses chosen to match VirtualBox test
machines.  In actual use private/core-dhcpd.conf refers to a
replacement file.
log-facility daemon;

    0, 192,168,56,2;
}

host dick {
  hardware ethernet 08:00:27:dc:54:b5; fixed-address 192.168.56.4; }
The following tasks install isc-dhcp-server and configure it to serve
the real private/core-dhcpd.conf (the above being just an example).

- name: Configure DHCP interface.
  become: yes
  lineinfile:
    path: /etc/default/isc-dhcp-server
    regexp: "^INTERFACESv4="
    line: "INTERFACESv4=\"lan\""
  notify: Restart DHCP server.

- name: Configure DHCP subnet.
8.7. Configure BIND9
Core uses BIND9 to provide name service for the institute as described
in The Name Service.  The configuration supports reverse name lookups,
resolving many private network addresses to private domain names.
The campus ISP's name servers would probably be used as forwarders
rather than Google.
bind-options
acl "trusted" {
{{ private_net_cidr }};
{{ wild_net_cidr }};
{{ public_wg_net_cidr }};
    allow-recursion { trusted; };
    allow-query-cache { trusted; };
    dnssec-validation yes;

    listen-on {
        {{ core_addr }};
        localhost;
bind-local
include "/etc/bind/zones.rfc1918";
zone "{{ domain_priv }}." {
type master;
$TTL 7200
8.8. Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned
by groups root and adm.  Adding the administrator's account to
these groups speeds up debugging.
8.9. Configure Monkey
The small institute runs cron jobs and web scripts that generate
reports and perform checks. The un-privileged jobs are run by a
system account named monkey. One of Monkey's more important jobs on
Core is to run rsync to update the public web site on Front (as
described in *Configure Apache2).
8.10. Install Unattended Upgrades
The institute prefers to install security updates as soon as possible.
8.11. Configure User Accounts
User accounts are created immediately so that backups can begin
restoring as soon as possible.  The Account Management chapter
describes the members and usernames variables.
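For orientation, a minimal membership roll consistent with those
variables might look like this sketch (all field names other than
status are assumptions):

private/members.yml
---
members:
  dick:
    status: current
usernames: [ dick ]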
8.12. Install Server Certificate

The servers on Core use the same certificate (and key) to authenticate
themselves to institute clients.  They share the /etc/server.crt and
/etc/server.key files, the latter readable only by root.
8.13. Install Chrony

Core uses Chrony to provide a time synchronization service to the
campus.  The daemon's default configuration is fine.
roles_t/core/handlers/main.yml
- name: Restart Chrony.
  become: yes
  systemd:
    service: chrony
    state: restarted
8.14. Configure Postfix on Core

Core uses Postfix to provide SMTP service to the campus.  The default
configuration prompts during installation; the appropriate answers
are listed here but will be checked (and corrected) by Ansible tasks
below.

As discussed in The Email Service above, Core delivers email addressed
to any internal domain name locally, and uses its smarthost Front to
relay the rest. Core is reachable only on institute networks, so
there is little benefit in enabling TLS, but it does need to handle
maxi-messages.  Core relays messages from any institute network.

postfix-core-networks
- p: mynetworks
v: >-
{{ private_net_cidr }}
{{ public_wg_net_cidr }}
Core uses Front to relay messages to the Internet.

postfix-core-relayhost
- { p: relayhost, v: "[{{ front_wg_addr }}]" }

Core's Postfix delivers email addressed to the institute's domains
locally, per the following additions to the transport file.

postfix-transport
.{{ domain_name }} local:$myhostname
.{{ domain_priv }} local:$myhostname
The complete list of Core's Postfix settings follows.

postfix-core
<<postfix-relaying>>
- { p: smtpd_tls_security_level, v: none }
- { p: smtp_tls_security_level, v: none }
<<postfix-message-size>>
The tasks below install Postfix and its configuration, then start and
enable the service.  Whenever /etc/postfix/transport is changed, the
postmap command must be run on it.

8.15. Configure Private Email Aliases

The institute's Core needs to deliver email addressed to institute
aliases, some of which are listed below; others are installed by more
specialized roles.
      admin: root
      www-data: root
      monkey: root
      root: {{ ansible_user }}
    path: /etc/aliases
    marker: "# {mark} INSTITUTE MANAGED BLOCK"
  notify: New aliases.
8.16. Configure Dovecot IMAPd

Core uses Dovecot's IMAPd to store and serve member emails.  As on
Front, Core's Dovecot configuration is largely the Debian default
with POP and IMAP (without TLS) support disabled.  This is a bit
"over the top" given that Core is only accessed from private (encrypted)
networks, but helps to ensure privacy even when members accidentally
attempt connections from outside the private networks. For more
information about Core's role in the institute's email services, see
The Email Service.
The institute follows the recommendation in the package
README.Debian (in /usr/share/dovecot-core/) but replaces the
default "snake oil" certificate with another, signed by the institute.
(For more information about the institute's X.509 certificates, see
Keys.)

The following task starts Dovecot and enables it to start at every
reboot.
8.17. Configure Fetchmail

Core runs a fetchmail for each member of the institute.  Individual
fetchmail jobs are configured from a template, parameterized by
the username.  The template is only used when the member's record
includes a password.

fetchmail-config
# Permissions on this file may be no greater than 0600.
set no bouncemail
set no spambounce
The Systemd service description follows.

fetchmail-service
[Unit]
Description=Fetchmail --idle task for {{ item }}.
AssertPathExists=/home/{{ item }}/.fetchmailrc
After=wg-quick@wg0.service
Otherwise the following task might be appropriate.

8.18. Configure Apache2
This is the small institute's campus web server. It hosts several web
sites as described in The Web Services.

Member web sites are served per the UserDir directive below,
naming a sub-directory in the member's home directory on Core.

apache-userdir-core
UserDir Public/HTML
<Directory /home/*/Public/HTML/>
Require all granted
AllowOverride None
Each web site's configuration includes the usual HTTPS
redirect, the encryption ciphers and certificates.

apache-live
<VirtualHost *:80>
ServerName live
ServerAlias live.{{ domain_priv }}
ServerAdmin webmaster@core.{{ domain_priv }}
The test web site's configuration should look familiar.

apache-test
<VirtualHost *:80>
ServerName test
ServerAlias test.{{ domain_priv }}
ServerAdmin webmaster@core.{{ domain_priv }}
The campus web site is expected to be edited by trained staffers,
monitored by a revision control system, etc.

apache-campus
<VirtualHost *:80>
ServerName www
ServerAlias www.{{ domain_priv }}
ServerAdmin webmaster@core.{{ domain_priv }}
The a2ensite command enables them.

8.19. Configure Website Updates
Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a
cron job.

private/webupdate
#!/bin/bash -e
#
# DO NOT EDIT.
#
rsync -avz --delete --chmod=g-w \
--filter='exclude *~' \
--filter='exclude .git*' \
    ./ 192.168.15.4:/home/www/
The following tasks install the webupdate script from private/,
and create Monkey's cron job.  An example webupdate script is
provided here.
8.20. Configure Core WireGuard™ Interface

Core connects to Front's WireGuard™ service to provide members abroad
with a route to the campus networks.  As described in Configure Public
WireGuard™ Subnet for Front, Core is expected to forward packets
from/to the private networks.
The following tasks install WireGuard™, configure it, and enable the
service.
- name: Configure WireGuard™.
  become: yes
  copy:
    content: |
      [Interface]
      Address = {{ core_wg_addr }}
      PostUp = wg set %i private-key /etc/wireguard/private-key

      # Front
      [Peer]
      EndPoint = {{ front_addr }}:{{ public_wg_port }}
      PublicKey = {{ front_wg_pubkey }}
      AllowedIPs = {{ front_wg_addr }}
      AllowedIPs = {{ public_wg_net_cidr }}
    dest: /etc/wireguard/wg0.conf
    mode: u=r,g=,o=
    owner: root
8.21. Configure NAGIOS

Core runs a nagios4 server to monitor "services" on institute hosts.
The following tasks install the necessary packages and configure the
server via edits to /etc/nagios4/nagios.cfg.  The monitors are
installed in /etc/nagios4/conf.d/institute.cfg which is tangled from
code blocks described in the following subsections.
The NAGIOS configuration monitors the Core and Campus (and thus Gate)
machines.

    line: "{{ item.line }}"
    backrefs: yes
  loop:
    - regexp: "^( *cfg_file *=.*/localhost.cfg)"
      line: "#\\1"
    - regexp: "^( *admin_email *= *)"
      line: "\\1{{ ansible_user }}@localhost"
  notify: Reload NAGIOS4.

- name: Configure NAGIOS4 contacts.
8.21.1. Configure NAGIOS Monitors for Core

The first block in nagios.cfg specifies monitors for services on
Core.  The commands used here may specify plugin arguments.
8.21.2. Custom NAGIOS Monitor inst_sensors

The check_sensors plugin is included in the package
monitoring-plugins-basic, but the institute substitutes its own
inst_sensors command, which reports specific temperatures, on both
Gate and Core.
8.21.3. Configure NAGIOS Monitors for Remote Hosts

The following sections contain code blocks specifying monitors for
services on the other institute hosts.  Each monitor runs an NRPE
plugin with pre-defined arguments appropriate for the institute.  The
commands are defined in code blocks interleaved with the blocks that
monitor them.  The command blocks are appended to nrpe.cfg and the
monitoring blocks to nagios.cfg.  The nrpe.cfg file is installed
on each campus host by the campus role's Configure NRPE tasks.
8.21.4. Configure NAGIOS Monitors for Gate

Define the monitored host, gate.  Monitor its response to network
pings.

Monitor inst_sensors on Gate.
8.22. Configure Backups

8.23. Configure Nextcloud

Core runs Nextcloud to provide a private institute cloud, as described
in The Cloud Service.  Installing, restoring (from backup), and
upgrading Nextcloud are manual processes documented in The Nextcloud
Admin Manual, Maintenance.  However Ansible can help prepare Core
before an install or restore, and perform basic security checks
afterwards.
8.23.1. Prepare Core For Nextcloud

The Ansible code contained herein prepares Core to run Nextcloud by
installing the required software packages, configuring the web
server, and installing a cron job.
pkg: [ apache2, mariadb-server, php, php-apcu, php-bcmath,
php-curl, php-gd, php-gmp, php-json, php-mysql,
php-mbstring, php-intl, php-imagick, php-xml, php-zip,
           imagemagick, libapache2-mod-php ]
The Apache2 configuration is then extended with the following
/etc/apache2/sites-available/nextcloud.conf file, which is installed
and enabled with a2ensite.  The same configuration lines are given
in the "Installation on Linux" section of the Nextcloud Server
Administration Guide (sub-section Apache Web server configuration).

Nextcloud's database user gets a password generated by
the apg -n 1 -x 12 -m 12 command.

private/vars.yml
nextcloud_dbpass: ippAgmaygyobwyt5

The institute symbolically links /var/www/nextcloud/ to
/Nextcloud/nextcloud/, making the latter effectively
its document root.
8.23.2. Configure PHP

The following tasks set a number of PHP parameters for better
performance, as recommended by Nextcloud.
  become: yes
  lineinfile:
    path: /etc/php/8.2/apache2/php.ini
    regexp: "memory_limit *="
    line: "memory_limit = 768M"
- name: Include PHP parameters for Nextcloud.
become: yes
8.23.3. Create /Nextcloud/

The Ansible tasks up to this point have completed Core's LAMP stack;
/Nextcloud/ is created (and mounted) manually, e.g.:

sudo mount /Nextcloud
8.23.4. Restore Nextcloud

Restoring Nextcloud in the newly created /Nextcloud/ presumably
starts with the extraction of a backup copy, and a chown command to
make it so.

sudo chown -R www-data:www-data /Nextcloud/nextcloud/

The restoration can then be checked on the Settings > Administration >
Overview web page.
8.23.5. Install Nextcloud

Installing Nextcloud in the newly created /Nextcloud/ starts with
downloading and verifying a recent release tarball.  The following
example command lines unpacked Nextcloud 31 in nextcloud/ in
/Nextcloud/ and set the ownerships and permissions of the new
directories and files.
cd /Nextcloud/
tar xjf ~/Downloads/nextcloud-31.0.2.tar.bz2
sudo chown -R www-data:www-data nextcloud
sudo find nextcloud -type d -exec chmod 750 {} \;
sudo find nextcloud -type f -exec chmod 640 {} \;
According to the latest installation instructions in the Admin Manual
for version 31 (section "Installation and server configuration",
subsection "Installing from command line", here), after unpacking and
setting file permissions, the following occ command takes care of
everything.  This command currently expects Nextcloud's database and
user to exist.  The following SQL commands create the database and
user (entered at the SQL prompt of the sudo mysql command).  The
shell command then runs occ.
flush privileges;

cd /var/www/nextcloud/
sudo -u www-data php occ maintenance:install \
--database='mysql' --database-name='nextcloud' \
--database-user='nextclouduser' --database-pass='ippAgmaygyobwyt5' \
--admin-user='sysadm' --admin-pass='fubar'

The installation can then be checked on the Settings >
Administration > Overview page.
8.23.6. Afterwards

Whether Nextcloud was restored or installed, there are a few things
Ansible can do to harden the installation.  A cron job is also
installed, and the following task enables it.
The institute implements Pretty URLs as described in the Pretty URLs
subsection of the "Installation on Linux" section of the "Installation
and server configuration" chapter in the Nextcloud 22 Server
Administration Guide. Two settings are updated: overwrite.cli.url
Without them, there is a complaint on the Settings > Administration >
Overview web page.

The next task sets Nextcloud's "maintenance window" to start at
02:00MST (09:00UTC).  The interval is 4 hours, so it ends at
06:00MST.  The documentation for the setting was found here.

It also configures Nextcloud to send email with /usr/sbin/sendmail,
From: webmaster@core.small.private.  The documentation for the
settings was found here, though just two parameters are set, not the
nine suggested in sub-sub-subsection "Sendmail" of sub-subsection
"Setting mail server parameters in config.php", which seemed to be a
simple, unedited copy of the SMTP parameters, not used by Sendmail
nor Qmail.
+
roles_t/core/tasks/main.yml
-- name: Configure Nextcloud phone region.
+- name: Configure Nextcloud settings.
become: yes
lineinfile:
path: /var/www/nextcloud/config/config.php
- regexp: "^ *'default_phone_region' *=> *'.*', *$"
- line: " 'default_phone_region' => '{{ nextcloud_region }}',"
+ regexp: "{{ item.regexp }}"
+ line: "{{ item.line }}"
insertbefore: "^[)];"
firstmatch: yes
+ loop:
+ - regexp: "^ *'default_phone_region' *=> *'.*', *$"
+ line: " 'default_phone_region' => '{{ nextcloud_region }}',"
+
+ - regexp: "^ *'maintenance_window_start' *=> "
+ line: " 'maintenance_window_start' => 9,"
+
+ - regexp: "^ *'mail_smtpmode' *=>"
+ line: " 'mail_smtpmode' => 'sendmail',"
+ - regexp: "^ *'mail_sendmailmode' *=>"
+ line: " 'mail_sendmailmode' => 'pipe',"
+ - regexp: "^ *'mail_from_address' *=>"
+ line: " 'mail_from_address' => 'webmaster',"
+ - regexp: "^ *'mail_domain' *=>"
+ line: " 'mail_domain' => 'core.small.private',"
when: nextcloud.stat.exists
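+
+Applied to the example configuration, these tasks leave lines like
+the following in config.php (assuming a nextcloud_region of US):
+
+  'default_phone_region' => 'US',
+  'maintenance_window_start' => 9,
+  'mail_smtpmode' => 'sendmail',
+  'mail_sendmailmode' => 'pipe',
+  'mail_from_address' => 'webmaster',
+  'mail_domain' => 'core.small.private',
+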
@@ -5446,14 +5562,14 @@ run before the next backup.
-
-9. The Gate Role
+
+9. The Gate Role
The gate role configures the services expected at the campus gate:
access to the private Ethernet from the untrusted Ethernet (e.g. a
campus Wi-Fi AP) via VPN, and access to the Internet via NAT. The
-gate machine uses three network interfaces (see The Gate Machine)
+gate machine uses three network interfaces (see The Gate Machine)
configured with persistent names used in its firewall rules.
@@ -5475,42 +5591,48 @@ applied first, by which Gate gets a campus machine's DNS and Postfix
configurations, etc.
-
-9.1. Include Particulars
+
+9.1. Role Defaults
-The following should be familiar boilerplate by now.
+As in The Core Role, the gate role sets a number of variables to
+default values in its defaults/main.yml
file.
-roles_t/gate/tasks/main.yml
---
-- name: Include public variables.
- include_vars: ../public/vars.yml
- tags: accounts
-- name: Include private variables.
- include_vars: ../private/vars.yml
- tags: accounts
-- name: Include members.
- include_vars: "{{ lookup('first_found', membership_rolls) }}"
- tags: accounts
+roles_t/gate/defaults/main.yml
---
+<<network-vars>>
+<<address-vars>>
-
-9.2. Configure Netplan
+
+9.2. Include Particulars
-Gate's network interfaces are configured using Netplan and two files.
-/etc/netplan/60-gate.yaml
describes the static interfaces, to the
-campus Ethernet and Wi-Fi. /etc/netplan/60-isp.yaml
is expected to
-be revised more frequently as the campus ISP changes.
+The following should be familiar boilerplate by now.
+
+roles_t/gate/tasks/main.yml
---
+- name: Include public variables.
+ include_vars: ../public/vars.yml
+
+- name: Include private variables.
+ include_vars: ../private/vars.yml
+
+
+
+
+
+9.3. Configure Gate NetworkD
+
-Netplan is configured to identify the interfaces by their MAC
-addresses, which must be provided in private/vars.yml
, as in the
-example code here.
+Gate's network interfaces are configured using SystemD NetworkD
+configuration files that specify their MAC addresses. (One or more
+might be plug-and-play USB dongles.) These addresses are provided by
+the private/vars.yml
file as in the example code here.
@@ -5521,83 +5643,255 @@ gate_isp_mac: 08:00:27:3d:42:e5
-The following tasks install the two configuration files and apply the
-new network plan.
+The tasks in the following sections install the necessary
+configuration files.
+
+
+
+9.3.1. Gate's lan Interface
+
+
+The campus Ethernet interface is named lan and configured by
+10-lan.link
and 10-lan.network
files in /etc/systemd/network/
.
roles_t/gate/tasks/main.yml
-- name: Install netplan (gate).
- become: yes
- apt: pkg=netplan.io
-
-- name: Configure netplan (gate).
+- name: Install 10-lan.link.
become: yes
copy:
content: |
- network:
- ethernets:
- lan:
- match:
- macaddress: {{ gate_lan_mac }}
- addresses: [ {{ gate_addr_cidr }} ]
- set-name: lan
- dhcp4: false
- nameservers:
- addresses: [ {{ core_addr }} ]
- search: [ {{ domain_priv }} ]
- routes:
- - to: {{ public_wg_net_cidr }}
- via: {{ core_addr }}
- wild:
- match:
- macaddress: {{ gate_wild_mac }}
- addresses: [ {{ gate_wild_addr_cidr }} ]
- set-name: wild
- dhcp4: false
- dest: /etc/netplan/60-gate.yaml
- mode: u=rw,g=r,o=
- notify: Apply netplan.
-
-- name: Install netplan (ISP).
+ [Match]
+ MACAddress={{ gate_lan_mac }}
+
+ [Link]
+ Name=lan
+ dest: /etc/systemd/network/10-lan.link
+ notify: Reload networkd.
+
+- name: Install 10-lan.network.
become: yes
copy:
content: |
- network:
- ethernets:
- isp:
- match:
- macaddress: {{ gate_isp_mac }}
- set-name: isp
- dhcp4: true
- dhcp4-overrides:
- use-dns: false
- dest: /etc/netplan/60-isp.yaml
- mode: u=rw,g=r,o=
- force: no
- notify: Apply netplan.
+ [Match]
+ MACAddress={{ gate_lan_mac }}
+
+ [Network]
+ Address={{ gate_addr_cidr }}
+ DNS={{ core_addr }}
+ Domains={{ domain_priv }}
+
+ [Route]
+ Destination={{ public_wg_net_cidr }}
+ Gateway={{ core_addr }}
+ dest: /etc/systemd/network/10-lan.network
+ notify: Reload networkd.
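+
+Rendered with this document's example values, the resulting
+10-lan.network would read like the following sketch (the MAC address
+is hypothetical, and Gate's campus address is assumed to be
+192.168.56.2 here):
+
+[Match]
+MACAddress=08:00:27:e4:11:c2
+
+[Network]
+Address=192.168.56.2/24
+DNS=192.168.56.1
+Domains=small.private
+
+[Route]
+Destination=10.177.87.0/24
+Gateway=192.168.56.1
+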
roles_t/gate/handlers/main.yml
---
-- name: Apply netplan.
+- name: Reload networkd.
become: yes
- command: netplan apply
+ command: networkctl reload
tags: actualizer
+
+
+
+9.3.2. Gate's wild Interface
+
+
+The institute keeps the wild ones off the campus Ethernet. Its wild
+subnet is connected to Gate via a separate physical interface. To
+accommodate the wild ones without re-configuring them, the institute
+attempts to look like an up-link, e.g. a cable modem. A wild one is
+expected to chirp for DHCP service and use the private subnet address
+in its lease. Thus Gate's wild interface configuration enables the
+built-in DHCP server and lists the authorized lessees.
+
+
+
+The wild ones are not expected to number in the dozens, so they are
+simply a list of hashes in private/vars.yml
, as in the example code
+here. Note that host number 1 is Gate. Wild ones are assigned unique
+host numbers greater than 1.
+
+
+
+private/vars.yml
wild_ones:
+- { MAC: "08:00:27:dc:54:b5", num: 2, name: wifi-ap }
+- { MAC: "94:83:c4:19:7d:58", num: 3, name: appliance }
+
+
+
+
+As with the lan interface, this interface is named wild and
+configured by 10-wild.link
and 10-wild.network
files in
+/etc/systemd/network/
. The latter is generated from the hashes in
+wild_ones and the wild.network
template file.
+
+
+
+roles_t/gate/tasks/main.yml
+- name: Install 10-wild.link.
+ become: yes
+ copy:
+ content: |
+ [Match]
+ MACAddress={{ gate_wild_mac }}
+
+ [Link]
+ Name=wild
+ dest: /etc/systemd/network/10-wild.link
+ notify: Reload networkd.
+
+- name: Install 10-wild.network.
+ become: yes
+ template:
+ src: wild.network
+ dest: /etc/systemd/network/10-wild.network
+ notify: Reload networkd.
+
+
+
+
+roles_t/gate/templates/wild.network
[Match]
+MACAddress={{ gate_wild_mac }}
+
+[Network]
+Address={{ gate_wild_addr_cidr }}
+DHCPServer=yes
+
+[DHCPServer]
+EmitDNS=yes
+EmitNTP=yes
+NTP={{ core_addr }}
+EmitSMTP=yes
+SMTP={{ core_addr }}
+{% for wild in wild_ones %}
+
+# {{ wild.name }}
+[DHCPServerStaticLease]
+MACAddress={{ wild.MAC }}
+Address={{ wild_net_cidr |ansible.utils.ipaddr(wild.num) }}
+{% endfor %}
+
+
+
+
+
+9.3.3. Gate's isp Interface
+
+
+The interface to the campus ISP is named isp and configured by
+10-isp.link
and 10-isp.network
files in /etc/systemd/network/
.
+The latter is not automatically generated, as it varies quite a bit
+depending on the connection to the ISP: Ethernet interface, USB
+tether, Wi-Fi connection, etc.
+
+
+
+roles_t/gate/tasks/main.yml
+- name: Install 10-isp.link.
+ become: yes
+ copy:
+ content: |
+ [Match]
+ MACAddress={{ gate_isp_mac }}
+
+ [Link]
+ Name=isp
+ dest: /etc/systemd/network/10-isp.link
+ notify: Reload networkd.
+
+- name: Install 10-isp.network.
+ become: yes
+ copy:
+ src: ../private/gate-isp.network
+ dest: /etc/systemd/network/10-isp.network
+ force: no
+ notify: Reload networkd.
+
+
-Note that the 60-isp.yaml file is only updated (created) if it does
-not already exist so that it can be easily modified to debug a new
-campus ISP without interference from Ansible.
+Note that the 10-isp.network file is only updated (created) if it
+does not already exist, so that it can be easily modified to debug a
+new campus ISP without interference from Ansible.
campus ISP without interference from Ansible.
+
+
+The following example gate-isp.network
file recognizes an Ethernet
+interface by its MAC address.
+
+
+
+private/gate-isp.network
[Match]
+MACAddress=08:00:27:3d:42:e5
+
+[Network]
+DHCP=ipv4
+
+[DHCP]
+RouteMetric=100
+UseMTU=true
+UseDNS=false
+
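+
+A USB tether, for example, might be recognized by its driver rather
+than a MAC address; a hypothetical variation (the driver name will
+vary with the device):
+
+[Match]
+Driver=rndis_host
+
+[Network]
+DHCP=ipv4
+
+[DHCP]
+RouteMetric=100
+UseDNS=false
+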
-
-9.3. UFW Rules
-
+
+
+
+9.4. Configure Gate ResolveD
+
+
+Gate provides name service on the wild Ethernet by having its "stub
+listener" listen there. That stub should not read /etc/hosts, lest
+gate resolve to 127.0.1.1, which is nonsense to the wild ones.
+gate resolve to 127.0.1.1, nonsense to the wild.
+
+
+
+roles_t/gate/tasks/main.yml
+- name: Configure resolved.
+ become: yes
+ lineinfile:
+ path: /etc/systemd/resolved.conf
+ regexp: "{{ item.regexp }}"
+ line: "{{ item.line }}"
+ loop:
+ - regexp: '^ *DNSStubListenerExtra *='
+ line: "DNSStubListenerExtra={{ gate_wild_addr }}"
+ - regexp: '^ *ReadEtcHosts *='
+ line: "ReadEtcHosts=no"
+ notify:
+ - Reload Systemd.
+ - Restart Systemd resolved.
+
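+
+With the example addresses, these tasks leave the following two
+lines in /etc/systemd/resolved.conf:
+
+DNSStubListenerExtra=192.168.57.1
+ReadEtcHosts=no
+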
+
+
+
+roles_t/gate/handlers/main.yml
+- name: Reload Systemd.
+ become: yes
+ systemd:
+ daemon_reload: yes
+ tags: actualizer
+
+- name: Restart Systemd resolved.
+ become: yes
+ systemd:
+ service: systemd-resolved
+ state: restarted
+ tags: actualizer
+
+
+
+
+
+9.5. UFW Rules
+
Gate uses the Uncomplicated FireWall (UFW) to install its packet
filters at boot-time. The institute does not use a firewall except to
@@ -5620,7 +5914,7 @@ should not be routing their Internet traffic through their VPN.
-ufw-nat-A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE
+ufw-nat-A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE
-A POSTROUTING -s {{ wild_net_cidr }} -o isp -j MASQUERADE
@@ -5636,17 +5930,11 @@ connection tracking).
-ufw-forward-nat-A ufw-user-forward -i lan -o isp -j ACCEPT
--A ufw-user-forward -i wild -o isp -j ACCEPT
+ufw-forward-nat-A ufw-before-forward -i lan -o isp -j ACCEPT
+-A ufw-before-forward -i wild -o isp -j ACCEPT
-
-If "the standard iptables-restore syntax" as it is described in the
-ufw-framework manual page, allows continuation lines, please let us
-know!
-
-
Forwarding rules are also needed to route packets from the campus VPN
(the wg0 WireGuard™ tunnel device) to the institute's LAN and back.
@@ -5656,9 +5944,9 @@ public and campus VPNs is also allowed.
-ufw-forward-private-A ufw-user-forward -i lan -o wg0 -j ACCEPT
--A ufw-user-forward -i wg0 -o lan -j ACCEPT
--A ufw-user-forward -i wg0 -o wg0 -j ACCEPT
+ufw-forward-private-A ufw-before-forward -i lan -o wg0 -j ACCEPT
+-A ufw-before-forward -i wg0 -o lan -j ACCEPT
+-A ufw-before-forward -i wg0 -o wg0 -j ACCEPT
@@ -5675,35 +5963,15 @@ the wild device to the lan device, just the wg0<
-
-9.4. Configure UFW
-
+
+9.6. Configure UFW
+
The following tasks install the Uncomplicated Firewall (UFW), set its
-policy in /etc/default/ufw
, install the NAT rules in
-/etc/ufw/before.rules
, and the Forward rules in
-/etc/ufw/user.rules
(where the ufw-user-forward chain
-is… mentioned?).
+policy in /etc/default/ufw
, and install the institute's rules in
+/etc/ufw/before.rules
.
-
-When Gate is configured by ./abbey config gate as in the example
-bootstrap, enabling the firewall should not be a problem. But when
-configuring a new gate with ./abbey config new-gate, enabling the
-firewall could break Ansible's current and future ssh sessions. For
-this reason, Ansible does not enable the firewall.
-
-
-
-The administrator must login and execute the following command after
-Gate is configured or new gate is "in position" (connected to old
-Gate's wild and isp networks).
-
-
-
-sudo ufw enable
-
-
roles_t/gate/tasks/main.yml
- name: Install UFW.
@@ -5717,14 +5985,14 @@ sudo ufw enable
line: "{{ item.line }}"
regexp: "{{ item.regexp }}"
loop:
- - { line: "DEFAULT_INPUT_POLICY=\"ACCEPT\"",
- regexp: "^DEFAULT_INPUT_POLICY=" }
- - { line: "DEFAULT_OUTPUT_POLICY=\"ACCEPT\"",
- regexp: "^DEFAULT_OUTPUT_POLICY=" }
- - { line: "DEFAULT_FORWARD_POLICY=\"DROP\"",
- regexp: "^DEFAULT_FORWARD_POLICY=" }
+ - line: "DEFAULT_INPUT_POLICY=\"ACCEPT\""
+ regexp: "^DEFAULT_INPUT_POLICY="
+ - line: "DEFAULT_OUTPUT_POLICY=\"ACCEPT\""
+ regexp: "^DEFAULT_OUTPUT_POLICY="
+ - line: "DEFAULT_FORWARD_POLICY=\"DROP\""
+ regexp: "^DEFAULT_FORWARD_POLICY="
-- name: Configure UFW NAT rules.
+- name: Configure UFW rules.
become: yes
blockinfile:
block: |
@@ -5732,180 +6000,124 @@ sudo ufw enable
:POSTROUTING ACCEPT [0:0]
<<ufw-nat>>
COMMIT
- dest: /etc/ufw/before.rules
- insertafter: EOF
- prepend_newline: yes
-
-- name: Configure UFW FORWARD rules.
- become: yes
- blockinfile:
- block: |
*filter
<<ufw-forward-nat>>
<<ufw-forward-private>>
COMMIT
- dest: /etc/ufw/user.rules
+ dest: /etc/ufw/before.rules
insertafter: EOF
prepend_newline: yes
+
+- name: Enable UFW.
+ become: yes
+ ufw: state=enabled
+ tags: actualizer
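+
+Once that task has run, the administrator can confirm the policies
+and rules took effect with a quick command:
+
+sudo ufw status verbose
+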
-
-9.5. Configure DHCP For The Wild Ethernet
-
-
-To accommodate commodity Wi-Fi access points, as well as wired IoT
-appliances, without re-configuring them, the institute attempts to
-look like an up-link, an ISP, e.g. a cable modem (aka "router"). It
-expects a wireless AP (or IoT appliance) to route non-local traffic
-out its WAN (or only) Ethernet port, and to get an IP address for that
-port using DHCP. Thus Gate runs ISC's DHCP daemon configured to
-listen on one network interface, recognize a specific list of clients,
-and provide each with an IP address and customary network parameters
-(default route, time server, etc.), much as was done on Core for the
-private Ethernet.
-
-
+
+9.7. Configure Campus WireGuard™ Subnet
+
-The example configuration file, private/gate-dhcpd.conf
, unlike
-private/core-dhcpd.conf
, does not need RFC3442 (Classless static
-routes). The wild, wired or wireless IoT need know nothing about the
-private network(s). This is just an example file, with a MAC address
-chosen to (probably?) match a VirtualBox test machine. In actual use
-private/core-dhcpd.conf
refers to a replacement file.
+Gate uses WireGuard™ to provide a campus VPN service. Gate's routes
+and firewall rules allow packets to be forwarded to/from the
+institute's private networks: the private Ethernet and the public VPN.
+(It should not forward packets to/from the wild Ethernet.) The only
+additional route Gate needs is to the public VPN via Core. The rest
+(private Ethernet and campus VPN) are directly connected.
-
-private/gate-dhcpd.conf
default-lease-time 3600;
-max-lease-time 7200;
-
-ddns-update-style none;
-
-authoritative;
-
-log-facility daemon;
-
-subnet 192.168.57.0 netmask 255.255.255.0 {
- option subnet-mask 255.255.255.0;
- option broadcast-address 192.168.57.255;
- option routers 192.168.57.1;
-}
-
-host campus-wifi-ap {
- hardware ethernet 94:83:c4:19:7d:57;
- fixed-address 192.168.57.2;
-}
-
-
-
-Installation and configuration of the DHCP daemon follows. Note that
-the daemon listens only on the wild network interface. Also note
-the drop-in Requires dependency, without which the DHCP server
-intermittently fails, finding the wild interface has no IPv4
-addresses (or perhaps finding no wild interface at all?).
+The following tasks install WireGuard™, configure it with
+private/gate-wg0.conf
(or private/gate-wg0-empty.conf
if it does
+not exist), and enable the service.
roles_t/gate/tasks/main.yml
-- name: Install DHCP server.
- become: yes
- apt: pkg=isc-dhcp-server
-
-- name: Configure DHCP interface.
+- name: Enable IP forwarding.
become: yes
- lineinfile:
- path: /etc/default/isc-dhcp-server
- line: INTERFACESv4="wild"
- regexp: ^INTERFACESv4=
- notify: Restart DHCP server.
+ sysctl:
+ name: net.ipv4.ip_forward
+ value: "1"
+ state: present
-- name: Configure DHCP subnet.
+- name: Install WireGuard™.
become: yes
- copy:
- src: ../private/gate-dhcpd.conf
- dest: /etc/dhcp/dhcpd.conf
- notify: Restart DHCP server.
+ apt: pkg=wireguard
-- name: Configure DHCP server dependence on interface.
+- name: Configure WireGuard™.
become: yes
+ vars:
+ srcs:
+ - ../private/gate-wg0.conf
+ - ../private/gate-wg0-empty.conf
copy:
- content: |
- [Unit]
- Requires=network-online.target
- dest: /etc/systemd/system/isc-dhcp-server.service.d/depend.conf
- notify: Reload Systemd.
+ src: "{{ lookup('first_found', srcs) }}"
+ dest: /etc/wireguard/wg0.conf
+ mode: u=r,g=,o=
+ owner: root
+ group: root
+ notify: Restart WireGuard™.
+ tags: accounts
-- name: Start DHCP server.
+- name: Start WireGuard™.
become: yes
systemd:
- service: isc-dhcp-server
+ service: wg-quick@wg0
state: started
tags: actualizer
-- name: Enable DHCP server.
+- name: Enable WireGuard™.
become: yes
systemd:
- service: isc-dhcp-server
+ service: wg-quick@wg0
enabled: yes
roles_t/gate/handlers/main.yml
-- name: Restart DHCP server.
+- name: Restart WireGuard™.
become: yes
systemd:
- service: isc-dhcp-server
+ service: wg-quick@wg0
state: restarted
tags: actualizer
-
-- name: Reload Systemd.
- become: yes
- systemd:
- daemon-reload: yes
- tags: actualizer
-If Gate is configured with ./abbey config gate and then connected to
-actual networks (i.e. not rebooted), the following command is
-executed. If a new gate was configured with ./abbey config new-gate
-and not rebooted, the following command would also be executed.
+The "empty" WireGuard⢠configuration file (below) is used until the
+./inst client command adds the first client, and generates an actual
+private/gate-wg0.conf
.
-
-sudo systemctl start isc-dhcp-server
-
-
-
-If physically moved or rebooted for some other reason, the above
-command would not be necessary.
-
+
+private/gate-wg0-empty.conf
[Interface]
+Address = 10.84.139.1/24
+ListenPort = 51820
+PostUp = wg set %i private-key /etc/wireguard/private-key
+
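+
+The private key named in PostUp is generated on Gate itself and
+never leaves it; only the corresponding public key is collected. A
+sketch of the key generation:
+
+sudo sh -c "umask 077; wg genkey >/etc/wireguard/private-key"
+sudo sh -c "wg pubkey </etc/wireguard/private-key"
+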
-
-9.6. Configure Campus WireGuard™ Subnet
-
+
+9.7.1. Example private/gate-wg0.conf
+
-Gate uses WireGuard™ to provide a campus VPN service. Gate's routes
-and firewall rules allow packets to be forwarded to/from the
-institute's private networks: the private Ethernet and the public VPN.
-(It should not forward packets to/from the wild Ethernet.) The only
-additional route Gate needs is to the public VPN via Core. The rest
-(private Ethernet and campus VPN) are directly connected.
+The example private/gate-wg0.conf
below recognizes a wired IoT
+appliance, Dick's notebook and his replacement phone, assigning them
+the host numbers 3, 4 and 6 respectively.
-The following example private/gate-wg0.conf
configuration recognizes
-a wired IoT appliance, Dick's notebook and his replacement phone,
-assigning them the host numbers 3, 4 and 6 respectively.
+This is just an example. The actual file is edited by the ./inst
+client command, and so is not tangled from the following block.
-private/gate-wg0.conf
[Interface]
+Example private/gate-wg0.conf
[Interface]
Address = 10.84.139.1/24
ListenPort = 51820
PostUp = wg set %i private-key /etc/wireguard/private-key
@@ -5966,71 +6178,18 @@ WireGuard™ tunnel on Dick's notebook, used on campus
[Peer]
EndPoint = 192.168.57.1:51820
PublicKey = y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
-AllowedIPs = 10.84.139.1
-AllowedIPs = 192.168.56.0/24
-AllowedIPs = 10.177.87.0/24
-AllowedIPs = 10.84.139.0/24
-
-
-
-
-The following tasks install WireGuard™,
-private/gate-wg0.conf
, and enable the service.
-
-
-
-roles_t/gate/tasks/main.yml
-- name: Enable IP forwarding.
- become: yes
- sysctl:
- name: net.ipv4.ip_forward
- value: "1"
- state: present
-
-- name: Install WireGuard™.
- become: yes
- apt: pkg=wireguard
-
-- name: Configure WireGuard™.
- become: yes
- copy:
- src: ../private/gate-wg0.conf
- dest: /etc/wireguard/wg0.conf
- mode: u=r,g=,o=
- owner: root
- group: root
- notify: Restart WireGuard™.
-
-- name: Start WireGuard™.
- become: yes
- systemd:
- service: wg-quick@wg0
- state: started
- tags: actualizer
-
-- name: Enable WireGuard™.
- become: yes
- systemd:
- service: wg-quick@wg0
- enabled: yes
-
-
-
-
-roles_t/gate/handlers/main.yml
-- name: Restart WireGuard™.
- become: yes
- systemd:
- service: wg-quick@wg0
- state: restarted
- tags: actualizer
+AllowedIPs = 10.84.139.1
+AllowedIPs = 192.168.56.0/24
+AllowedIPs = 10.177.87.0/24
+AllowedIPs = 10.84.139.0/24
-
-10. The Campus Role
+
+
+10. The Campus Role
The campus role configures generic campus server machines: network
@@ -6046,10 +6205,26 @@ Wireless campus devices register their public keys using the ./inst
client command which updates the WireGuard™ configuration on Gate.
-
-10.1. Include Particulars
+
+10.1. Role Defaults
+As in The Gate Role, the campus role sets a number of variables to
+default values in its defaults/main.yml
file.
+
+
+
+roles_t/campus/defaults/main.yml
---
+<<network-vars>>
+<<address-vars>>
+
+
+
+
+
+10.2. Include Particulars
+
+
The following should be familiar boilerplate by now.
@@ -6057,15 +6232,16 @@ The following should be familiar boilerplate by now.
roles_t/campus/tasks/main.yml
---
- name: Include public variables.
include_vars: ../public/vars.yml
+
- name: Include private variables.
include_vars: ../private/vars.yml
-
-10.2. Configure Hostname
-
+
+10.3. Configure Hostname
+
Clients should be using the expected host name.
@@ -6092,9 +6268,9 @@ Clients should be using the expected host name.
-
-10.3. Configure Systemd Timesyncd
-
+
+10.4. Configure Systemd Timesyncd
+
The institute uses a common time reference throughout the campus.
This is essential to campus security, improving the accuracy of log
@@ -6124,9 +6300,9 @@ and file timestamps.
-
-10.4. Add Administrator to System Groups
-
+
+10.5. Add Administrator to System Groups
+
The administrator often needs to read (directories of) log files owned
by groups root and adm. Adding the administrator's account to
@@ -6145,9 +6321,9 @@ these groups speeds up debugging.
-
-10.5. Install Unattended Upgrades
-
+
+10.6. Install Unattended Upgrades
+
The institute prefers to install security updates as soon as possible.
@@ -6161,9 +6337,9 @@ The institute prefers to install security updates as soon as possible.
-
-10.6. Configure Postfix on Campus
-
+
+10.7. Configure Postfix on Campus
+
The Postfix settings used by the campus include message size, queue
times, and the relayhost Core. The default Debian configuration
@@ -6230,9 +6406,9 @@ tasks below.
-
-10.7. Set Domain Name
-
+
+10.8. Set Domain Name
+
The host's fully qualified (private) domain name (FQDN) is set by an
alias in its /etc/hosts
file, as is customary on Debian. (See "The
@@ -6254,13 +6430,13 @@ manpage.)
-
-10.8. Configure NRPE
-
+
+10.9. Configure NRPE
+
Each campus host runs an NRPE (a NAGIOS Remote Plugin Executor)
server so that the NAGIOS4 server on Core can collect statistics. The
-NAGIOS service is discussed in the Configure NRPE section of The Core
+NAGIOS service is discussed in the Configure NRPE section of The Core
Role.
@@ -6321,8 +6497,8 @@ Role.
-
-11. The Ansible Configuration
+
+11. The Ansible Configuration
The small institute uses Ansible to maintain the configuration of its
@@ -6331,7 +6507,7 @@ runs playbook site.yml
to apply the appro
role(s) to each host. Examples of these files are included here, and
are used to test the roles. The example configuration applies the
institutional roles to VirtualBox machines prepared according to
-chapter Testing.
+chapter Testing.
@@ -6344,13 +6520,13 @@ while changes to the institute's particulars are committed to a
separate revision history.
-
-11.1. ansible.cfg
+
+11.1. ansible.cfg
The Ansible configuration file ansible.cfg
contains just a handful
of settings, some included just to create a test jig as described in
-Testing.
+Testing.
@@ -6359,7 +6535,7 @@ of settings, some included just to create a test jig as described in
that Python 3 can be expected on all institute hosts.
vault_password_file is set to suppress prompts for the vault
password. The institute keeps its vault password in Secret/
(as
-described in Keys) and thus sets this parameter to
+described in Keys) and thus sets this parameter to
Secret/vault-password
.
inventory is set to avoid specifying it on the command line.
roles_path is set to the recently tangled roles files in
@@ -6376,8 +6552,8 @@ described in Keys) and thus sets this parameter to
-
-11.2. hosts
+
+11.2. hosts
The Ansible inventory file hosts
describes all of the institute's
@@ -6389,13 +6565,13 @@ describes three test servers named front, core and
-hosts
all:
+hosts
all:
vars:
ansible_user: sysadm
ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
hosts:
front:
- ansible_host: 192.168.57.3
+ ansible_host: 192.168.58.3
ansible_become_password: "{{ become_front }}"
core:
ansible_host: 192.168.56.1
@@ -6454,8 +6630,8 @@ the Secret/vault-password
file.
-
-11.3. playbooks/site.yml
+
+11.3. playbooks/site.yml
The example playbooks/site.yml
playbook (below) applies the
@@ -6488,8 +6664,8 @@ the example inventory: hosts
.
-
-11.4. Secret/vault-password
+
+11.4. Secret/vault-password
As already mentioned, the small institute keeps its Ansible vault
@@ -6501,17 +6677,17 @@ example password matches the example encryptions above.
-Secret/vault-password
alitysortstagess
+Secret/vault-password
alitysortstagess
-
-11.5. Creating A Working Ansible Configuration
+
+11.5. Creating A Working Ansible Configuration
A working Ansible configuration can be "tangled" from this document to
-produce the test configuration described in the Testing chapter. The
+produce the test configuration described in the Testing chapter. The
tangling is done by Emacs's org-babel-tangle function and has
already been performed with the resulting tangle included in the
distribution with this document.
@@ -6522,8 +6698,8 @@ An institution using the Ansible configuration herein can include this
document and its tangle as a Git submodule, e.g. in institute/
, and
thus safely merge updates while keeping public and private particulars
separate, in sibling subdirectories public/
and private/
.
-The following example commands create a new Git repo in ~/net/
-and add an Institute/
submodule.
+The following example commands create a new Git repo in ~/network/
+and add an institute/
submodule.
@@ -6531,8 +6707,8 @@ and add an Institute/
submodule.
mkdir network
cd network
git init
-git submodule add git://birchwood-abbey.net/~puck/Institute
-git add Institute
+git submodule add git://birchwood-abbey.net/~puck/institute
+git add institute
@@ -6542,24 +6718,24 @@ An institute administrator would then need to add several more files.
- A top-level Ansible configuration file,
ansible.cfg
, would be
-created by copying Institute/ansible.cfg
and changing the
-roles_path to roles:Institute/roles.
+created by copying institute/ansible.cfg
and changing the
+roles_path to roles:institute/roles.
- A host inventory,
hosts
, would be created, perhaps by copying
-Institute/hosts
and changing its IP addresses.
+institute/hosts
and changing its IP addresses.
- A site playbook,
site.yml
, would be created in a new playbooks/
-subdirectory by copying Institute/playbooks/site.yml
with
+subdirectory by copying institute/playbooks/site.yml
with
appropriate changes.
-- All of the files in
Institute/public/
and Institute/private/
+ - All of the files in
institute/public/
and institute/private/
would be copied, with appropriate changes, into new subdirectories
public/
and private/
.
-~/net/Secret
would be a symbolic link to the (auto-mounted?)
+~/network/Secret
would be a symbolic link to the (auto-mounted?)
location of the administrator's encrypted USB drive, as described in
-section Keys.
+section Keys.
-The files in Institute/roles_t/
were "tangled" from this document
-and must be copied to Institute/roles/
for reasons discussed in the
+The files in institute/roles_t/
were "tangled" from this document
+and must be copied to institute/roles/
for reasons discussed in the
next section. This document does not "tangle" directly into
roles/
to avoid clobbering changes to a working (debugged!)
configuration.
@@ -6569,13 +6745,13 @@ configuration.
The playbooks/
directory must include the institutional playbooks,
which find their settings and templates relative to this directory,
e.g. in ../private/vars.yml
. Running institutional playbooks from
-~/net/playbooks/
means they will use ~/net/private/
rather than
-the example ~/net/Institute/private/
.
+~/network/playbooks/
means they will use ~/network/private/
rather
+than the example ~/network/institute/private/
.
-cp -r Institute/roles_t Institute/roles
-( cd playbooks; ln -s ../Institute/playbooks/* . )
+cp -r institute/roles_t institute/roles
+( cd playbooks; ln -s ../institute/playbooks/* . )
@@ -6585,13 +6761,13 @@ super-project's directory.
-./Institute/inst config -n
+./institute/inst config -n
-
-11.6. Maintaining A Working Ansible Configuration
+
+11.6. Maintaining A Working Ansible Configuration
The Ansible roles currently tangle into the roles_t/
directory to
@@ -6610,8 +6786,8 @@ their way back to the code block in this document.
-
-12. The Institute Commands
+
+12. The Institute Commands
The institute's administrator uses a convenience script to reliably
@@ -6621,8 +6797,8 @@ Ansible configuration. The Ansible commands it executes are expected
to get their defaults from ./ansible.cfg
.
-
-12.1. Sub-command Blocks
+
+12.1. Sub-command Blocks
The code blocks in this chapter tangle into the inst
script. Each
@@ -6648,8 +6824,8 @@ The first code block is the header of the ./inst script.
-
-12.2. Sanity Check
+
+12.2. Sanity Check
The next code block does not implement a sub-command; it implements
@@ -6709,8 +6885,8 @@ permissions. It probes past the Secret/
mount poin
-
-12.3. Importing Ansible Variables
+
+12.3. Importing Ansible Variables
To ensure that Ansible and ./inst are sympatico vis-a-vi certain
@@ -6750,33 +6926,54 @@ The playbook that updates private/vars.pl
:
playbooks/check-inst-vars.yml
- hosts: localhost
gather_facts: no
- tasks:
- - include_vars: ../public/vars.yml
- - include_vars: ../private/vars.yml
- - copy:
- content: |
- $domain_name = "{{ domain_name }}";
- $domain_priv = "{{ domain_priv }}";
+ roles: [ check-inst-vars ]
+
+
+
+
+
+12.4. The check-inst-vars Role
+
+
+This role is executed by playbooks/check-inst-vars.yml. It is a
+role rather than just a playbook because it needs a copy of the role
+defaults.
+just a playbook because it needs a copy of the role defaults.
+
+
+
+roles_t/check-inst-vars/defaults/main.yml
---
+<<network-vars>>
+<<address-vars>>
+
+
- $front_addr = "{{ front_addr }}";
- $front_wg_pubkey = "{{ front_wg_pubkey }}";
+
+roles_t/check-inst-vars/tasks/main.yml
---
+- include_vars: ../public/vars.yml
+- include_vars: ../private/vars.yml
+- copy:
+ content: |
+ $domain_name = "{{ domain_name }}";
+ $domain_priv = "{{ domain_priv }}";
- $public_wg_net_cidr = "{{ public_wg_net_cidr }}";
- $public_wg_port = "{{ public_wg_port }}";
+ $front_addr = "{{ front_addr }}";
+ $front_wg_pubkey = "{{ front_wg_pubkey }}";
- $private_net_cidr = "{{ private_net_cidr }}";
- $wild_net_cidr = "{{ wild_net_cidr }}";
+ $public_wg_net_cidr = "{{ public_wg_net_cidr }}";
+ $public_wg_port = "{{ public_wg_port }}";
- $gate_wild_addr = "{{ gate_wild_addr }}";
- $gate_wg_pubkey = "{{ gate_wg_pubkey }}";
+ $private_net_cidr = "{{ private_net_cidr }}";
+ $wild_net_cidr = "{{ wild_net_cidr }}";
- $campus_wg_net_cidr = "{{ campus_wg_net_cidr }}";
- $campus_wg_port = "{{ campus_wg_port }}";
+ $gate_wild_addr = "{{ gate_wild_addr }}";
+ $gate_wg_pubkey = "{{ gate_wg_pubkey }}";
- $core_addr = "{{ core_addr }}";
- $core_wg_pubkey = "{{ core_wg_pubkey }}";
- dest: ../private/vars.pl
- mode: u=rw,g=,o=
+ $campus_wg_net_cidr = "{{ campus_wg_net_cidr }}";
+ $campus_wg_port = "{{ campus_wg_port }}";
+
+ $core_addr = "{{ core_addr }}";
+ $core_wg_pubkey = "{{ core_wg_pubkey }}";
+ dest: ../private/vars.pl
+ mode: u=rw,g=,o=
@@ -6786,7 +6983,7 @@ following few provide the servers' public keys and ports.
-=private/vars.ymlfront_wg_pubkey: S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
+private/vars.yml
front_wg_pubkey: S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
public_wg_port: 39608
gate_wg_pubkey: y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
@@ -6795,71 +6992,11 @@ campus_wg_port: 51820
core_wg_pubkey: lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
-
-
-All of the private keys used in the example/test configuration are
-listed in the following table. The first three are copied to
-/etc/wireguard/private-key
on each of the corresponding test
-machines: front, gate and core. The rest are installed on
-the test client to give it different personae.
-
-
-
-
-
-
-
-
-
-
-
-
-Test Host
-WireGuard™ Private Key
-
-
-
-
-front
-AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms=
-
-
-
-gate
-yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U=
-
-
-
-core
-AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI=
-
-
-
-thing
-KIwQT5eGOl9w1qOa5I+2xx5kJH3z4xdpmirS/eGdsXY=
-
-
-
-dick
-WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs=
-
-
-
-dicks-phone
-oG/Kou9HOBCBwHAZGypPA1cZWUL6nR6WoxBiXc/OQWQ=
-
-
-
-dicks-razr
-IGNcF0VpkIBcJQAcLZ9jgRmk0SYyUr/WwSNXZoXXUWQ=
-
-
-
-
-12.4. The CA Command
-
+
+12.5. The CA Command
+
The next code block implements the CA sub-command, which creates a
new CA (certificate authority) in Secret/CA/
as well as SSH and PGP
@@ -6926,28 +7063,28 @@ config.
mysystem "mkdir --mode=700 Secret/root.gnupg";
mysystem ("gpg --homedir Secret/root.gnupg",
- " --batch --quick-generate-key --passphrase ''",
- " root\@core.$pvt");
+ "--batch --quick-generate-key --passphrase ''",
+ "root\@core.$pvt");
mysystem ("gpg --homedir Secret/root.gnupg",
- " --export --armor --output Secret/root-pub.pem",
- " root\@core.$pvt");
+ "--export --armor --output Secret/root-pub.pem",
+ "root\@core.$pvt");
chmod 0440, "root-pub.pem";
mysystem ("gpg --homedir Secret/root.gnupg",
- " --export-secret-key --armor --output Secret/root-sec.pem",
- " root\@core.$pvt");
+ "--export-secret-key --armor --output Secret/root-sec.pem",
+ "root\@core.$pvt");
chmod 0400, "root-sec.pem";
mysystem "mkdir Secret/ssh_admin";
chmod 0700, "Secret/ssh_admin";
- mysystem ("ssh-keygen -q -t rsa"
- ." -C A\\ Small\\ Institute\\ Administrator",
- " -N '' -f Secret/ssh_admin/id_rsa");
+ mysystem ("ssh-keygen -q -t rsa",
+ "-C A\\ Small\\ Institute\\ Administrator",
+ "-N '' -f Secret/ssh_admin/id_rsa");
mysystem "mkdir Secret/ssh_monkey";
chmod 0700, "Secret/ssh_monkey";
mysystem "echo 'HashKnownHosts no' >Secret/ssh_monkey/config";
mysystem ("ssh-keygen -q -t rsa -C monkey\@core",
- " -N '' -f Secret/ssh_monkey/id_rsa");
+ "-N '' -f Secret/ssh_monkey/id_rsa");
mysystem "mkdir Secret/ssh_front";
chmod 0700, "Secret/ssh_front";
@@ -6958,9 +7095,9 @@ config.
-
-12.5. The Config Command
-
+
+12.6. The Config Command
+
The next code block implements the config sub-command, which
provisions network services by running the site.yml
playbook
@@ -7009,12 +7146,12 @@ Example command lines:
-
-12.6. Account Management
-
+
+12.7. Account Management
+
For general information about members and their Unix accounts, see
-Accounts. The account management sub-commands maintain a mapping
+Accounts. The account management sub-commands maintain a mapping
associating member "usernames" (Unix account names) with their
records. The mapping is stored among other things in
private/members.yml
as the value associated with the key members.
@@ -7068,30 +7205,30 @@ clients:
-The test campus starts with the empty membership roll found in
-private/members-empty.yml
and saved in private/members.yml
-(which is not tangled from this document, thus not over-written
-during testing). If members.yml
is not found, members-empty.yml
-is used instead.
+The members.yml file will be modified during testing, and should not
+be overwritten by a re-tangle during testing, so it is not tangled
+from this file. Thus in a freshly built (e.g. test) system
+private/members.yml does not exist, not until a ./inst new command
+creates the first member. Until then, Ansible includes the
+private/members-empty.yml file. It does that using the
+first_found lookup plugin and a list of the two files, with
+members.yml first and members-empty.yml last. That list is the
+value of membership_rolls.
+first_found lookup plugin and a list of the two files with
+members.yml
first and members-empty.yml
last. That list is the
+value of membership_rolls.
-private/members-empty.yml
---
-members:
-usernames: []
-clients: []
+membership-rolls
+membership_rolls:
+- "../private/members.yml"
+- "../private/members-empty.yml"
-
-Both locations go on the membership_rolls variable used by the
-include_vars tasks.
-
-
-private/vars.yml
membership_rolls:
-- "../private/members.yml"
-- "../private/members-empty.yml"
+private/members-empty.yml
---
+members: {}
+usernames: []
+clients: []
@@ -7148,7 +7285,7 @@ read from the file. The dump subroutine is another story (below).
print $O "- $user\n";
}
} else {
- print $O "members:\n";
+ print $O "members: {}\n";
print $O "usernames: []\n";
}
if (@{$yaml->{"clients"}}) {
@@ -7214,9 +7351,9 @@ each record.
-
-12.7. The New Command
-
+
+12.8. The New Command
+
The next code block implements the new sub-command. It adds a new
member to the institute's membership roll. It runs an Ansible
@@ -7247,15 +7384,17 @@ initial, generated password.
my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core;
my $vault = strip_vault `ansible-vault encrypt_string "$epass"`;
mysystem ("ansible-playbook -e \@Secret/become.yml",
- " playbooks/nextcloud-new.yml",
- " -e user=$user", " -e pass=\"$epass\"");
+ "playbooks/nextcloud-new.yml",
+ "-e user=$user", "-e pass=\"$epass\"",
+ ">/dev/null");
$members->{$user} = { "status" => "current",
"password_front" => $front,
"password_core" => $core,
"password_fetchmail" => $vault };
write_members_yaml $yaml;
mysystem ("ansible-playbook -e \@Secret/become.yml",
- " -t accounts -l core,front playbooks/site.yml");
+ "-t accounts -l core,front playbooks/site.yml",
+ ">/dev/null");
exit;
}
@@ -7290,35 +7429,22 @@ initial, generated password.
playbooks/nextcloud-new.yml
- hosts: core
- no_log: yes
tasks:
- name: Run occ user:add.
- shell: |
- spawn sudo -u www-data /usr/bin/php occ user:add {{ user }}
- expect {
- "Enter password:" {}
- timeout { exit 1 }
- }
- send "{{ pass|quote }}\n";
- expect {
- "Confirm password:" {}
- timeout { exit 2 }
- }
- send "{{ pass|quote }}\n";
- expect {
- "The user \"{{ user }}\" was created successfully" {}
- timeout { exit 3 }
- }
- args:
+ become: yes
+ shell:
chdir: /var/www/nextcloud/
- executable: /usr/bin/expect
+ cmd: >
+ sudo -u www-data sh -c
+ "OC_PASS={{ pass }}
+ php occ user:add {{ user }} --password-from-env"
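+
+For example, enrolling a new member dick (a hypothetical username)
+looks like:
+
+./inst new dick
+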
-
-12.8. The Pass Command
-
+
+12.9. The Pass Command
+
The institute's passwd command on Core securely emails root with a
member's desired password (hashed). The command may update the
@@ -7331,9 +7457,9 @@ Ansible site.yml
playbook to update the
message is sent to member@core.
-
-12.8.1. Less Aggressive passwd.
-
+
+12.9.1. Less Aggressive passwd.
+
The next code block implements the less aggressive passwd command.
It is less aggressive because it just emails root. It does not
@@ -7399,7 +7525,7 @@ close $TMP;
my $O = new IO::File;
open $O, ("| gpg --encrypt --armor"
- ." --trust-model always --recipient root\@core"
+ ." --recipient-file /etc/root-pub.pem"
." > $tmp") or die "Error running gpg > $tmp: $!\n";
print $O <<EOD;
username: $username
@@ -7428,9 +7554,9 @@ that the change was completed.\n";
-
-12.8.2. Less Aggressive Pass Command
-
+
+12.9.2. Less Aggressive Pass Command
+
The following code block implements the ./inst pass command, used by
the administrator to update private/members.yml
before running
@@ -7440,6 +7566,7 @@ the administrator to update private/members.yml
before running
inst
use MIME::Base64;
+sub write_wireguard ($);
if (defined $ARGV[0] && $ARGV[0] eq "pass") {
my $I = new IO::File;
@@ -7458,7 +7585,8 @@ the administrator to update private/members.yml
before running
my $mem_yaml = read_members_yaml ();
my $members = $mem_yaml->{"members"};
my $member = $members->{$user};
- die "No such member: $user\n" if ! defined $member;
+ die "$user: does not exist\n" if ! defined $member;
+ die "$user: no longer current\n" if $member->{"status"} ne "current";
my $pass = decode_base64 $pass64;
my $epass = shell_escape $pass;
@@ -7471,10 +7599,12 @@ the administrator to update private/members.yml
before running
mysystem ("ansible-playbook -e \@Secret/become.yml",
"playbooks/nextcloud-pass.yml",
- "-e user=$user", "-e \"pass=$epass\"");
+ "-e user=$user", "-e \"pass=$epass\"",
+ ">/dev/null");
write_members_yaml $mem_yaml;
mysystem ("ansible-playbook -e \@Secret/become.yml",
- "-t accounts playbooks/site.yml");
+ "-t accounts playbooks/site.yml",
+ ">/dev/null");
my $O = new IO::File;
open ($O, "| sendmail $user\@$domain_priv")
or die "Could not pipe to sendmail: $!\n";
@@ -7492,8 +7622,8 @@ As always: please email root with any questions or concerns.\n";
-And here is the playbook that interacts with Nextcloud's occ
-users:resetpassword command using expect(1).
+And here is the playbook that runs Nextcloud's occ
+users:resetpassword command.
@@ -7501,34 +7631,20 @@ users:resetpassword command using expect(1).
no_log: yes
tasks:
- name: Run occ user:resetpassword.
- shell: |
- spawn sudo -u www-data \
- /usr/bin/php occ user:resetpassword {{ user }}
- expect {
- "Enter a new password:" {}
- timeout { exit 1 }
- }
- send "{{ pass|quote }}\n"
- expect {
- "Confirm the new password:" {}
- timeout { exit 2 }
- }
- send "{{ pass|quote }}\n"
- expect {
- "Successfully reset password for {{ user }}" {}
- "Please choose a different password." { exit 3 }
- timeout { exit 4 }
- }
- args:
+ become: yes
+ shell:
chdir: /var/www/nextcloud/
- executable: /usr/bin/expect
+ cmd: >
+ sudo -u www-data sh -c
+ "OC_PASS={{ pass }}
+ php occ user:resetpassword {{ user }} --password-from-env"
-
-12.8.3. Installing the Less Aggressive passwd
-
+
+12.9.3. Installing the Less Aggressive passwd
+
The following Ansible tasks install the less aggressive passwd
script in /usr/local/bin/passwd
on Core, and a sudo policy file
@@ -7576,10 +7692,10 @@ configuration so that the email to root can be encrypted.
group: root
- name: Install root PGP key file.
- become: no
+ become: yes
copy:
src: ../Secret/root-pub.pem
- dest: ~/.gnupg-root-pub.pem
+ dest: /etc/root-pub.pem
mode: u=r,g=r,o=r
notify: Import root PGP key.
@@ -7589,15 +7705,15 @@ configuration so that the email to root can be encrypted.
roles_t/core/handlers/main.yml
- name: Import root PGP key.
become: no
- command: gpg --import ~/.gnupg-root-pub.pem
+ command: gpg --import /etc/root-pub.pem
-
-12.9. The Old Command
-
+
+12.10. The Old Command
+
The old command disables a member's account (and thus their clients).
@@ -7612,11 +7728,15 @@ The old command disables a member's account (and thus their clients
die "$user: does not exist\n" if ! defined $member;
mysystem ("ansible-playbook -e \@Secret/become.yml",
- "playbooks/nextcloud-old.yml -e user=$user");
+ "playbooks/nextcloud-old.yml -e user=$user",
+ ">/dev/null");
$member->{"status"} = "former";
+ umask 077;
write_members_yaml $yaml;
+ write_wireguard $yaml;
mysystem ("ansible-playbook -e \@Secret/become.yml",
- "-t accounts playbooks/site.yml");
+ "-t accounts playbooks/site.yml",
+ ">/dev/null");
exit;
}
@@ -7626,22 +7746,19 @@ The old command disables a member's account (and thus their clients
playbooks/nextcloud-old.yml
- hosts: core
tasks:
- name: Run occ user:disable.
- shell: |
- spawn sudo -u www-data /usr/bin/php occ user:disable {{ user }}
- expect {
- "The specified user is disabled" {}
- timeout { exit 1 }
- }
- args:
+ become: yes
+ shell:
chdir: /var/www/nextcloud/
- executable: /usr/bin/expect
+ cmd: >
+ sudo -u www-data sh -c
+ "php occ user:disable {{ user }}"
-
-12.10. The Client Command
-
+
+12.11. The Client Command
+
The client command registers the public key of a client wishing to
connect to the institute's WireGuard™ subnets. The command allocates
@@ -7663,7 +7780,7 @@ the client (i.e. by the WireGuard for Android⢠app) and never revealed
-The generated configuration vary depending on the type of client,
+The generated configurations vary depending on the type of client,
which must be given as the first argument to the command. For most
types, two configuration files are generated. campus.conf
contains
the client's campus VPN configuration, and public.conf
the client's
@@ -7718,12 +7835,13 @@ better support in NetworkManager soon.)
} else {
die "usage: $0 client [debian|android|campus]\n";
}
- my $yaml;
- $yaml = read_members_yaml;
+ my $yaml = read_members_yaml;
my $members = $yaml->{"members"};
my $member = $members->{$user};
die "$user: does not exist\n"
if !defined $member && $type ne "campus";
+ die "$user: no longer current\n"
+ if defined $member && $member->{"status"} ne "current";
my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
= map { [ (split / /), "" ] } @{$yaml->{"clients"}};
@@ -7756,14 +7874,47 @@ better support in NetworkManager soon.)
umask 077;
write_members_yaml $yaml;
+ write_wireguard $yaml;
- if ($type eq "campus") {
- push @all_peers, [ $name, $hostnum, $type, $pubkey, "" ];
- } else {
- push @member_peers, [ $name, $hostnum, $type, $pubkey, $user ];
- push @all_peers, [ $name, $hostnum, $type, $pubkey, $user ];
+ umask 033;
+ write_wg_client ("public.conf",
+ hostnum_to_ipaddr ($hostnum, $public_wg_net_cidr),
+ $type,
+ $front_wg_pubkey,
+ "$front_addr:$public_wg_port",
+ hostnum_to_ipaddr (1, $public_wg_net_cidr))
+ if $type ne "campus";
+ write_wg_client ("campus.conf",
+ hostnum_to_ipaddr ($hostnum, $campus_wg_net_cidr),
+ $type,
+ $gate_wg_pubkey,
+ "$gate_wild_addr:$campus_wg_port",
+ hostnum_to_ipaddr (1, $campus_wg_net_cidr));
+
+ mysystem ("ansible-playbook -e \@Secret/become.yml",
+ "-l gate,front",
+ "-t accounts playbooks/site.yml",
+ ">/dev/null");
+ exit;
+}
+
+sub write_wireguard ($) {
+ my ($yaml) = @_;
+
+ my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
+ = map { [ (split / /), "" ] } @{$yaml->{"clients"}};
+
+ my $members = $yaml->{"members"};
+ my @member_peers = ();
+ for my $u (sort keys %$members) {
+ next if $members->{$u}->{"status"} ne "current";
+ push @member_peers,
+ map { [ (split / /), $u ] } @{$members->{$u}->{"clients"}};
}
+ my @all_peers = sort { $a->[1] <=> $b->[1] }
+ (@campus_peers, @member_peers);
+
my $core_wg_addr = hostnum_to_ipaddr (2, $public_wg_net_cidr);
my $extra_front_config = "
PostUp = resolvectl dns %i $core_addr
@@ -7779,28 +7930,10 @@ AllowedIPs = $campus_wg_net_cidr\n";
write_wg_server ("private/front-wg0.conf", \@member_peers,
hostnum_to_ipaddr_cidr (1, $public_wg_net_cidr),
- $public_wg_port, $extra_front_config)
- if $type ne "campus";
+ $public_wg_port, $extra_front_config);
write_wg_server ("private/gate-wg0.conf", \@all_peers,
hostnum_to_ipaddr_cidr (1, $campus_wg_net_cidr),
$campus_wg_port, "\n");
-
- umask 033;
- write_wg_client ("public.conf",
- hostnum_to_ipaddr ($hostnum, $public_wg_net_cidr),
- $type,
- $front_wg_pubkey,
- "$front_addr:$public_wg_port",
- hostnum_to_ipaddr (1, $public_wg_net_cidr))
- if $type ne "campus";
- write_wg_client ("campus.conf",
- hostnum_to_ipaddr ($hostnum, $campus_wg_net_cidr),
- $type,
- $gate_wg_pubkey,
- "$gate_wild_addr:$campus_wg_port",
- hostnum_to_ipaddr (1, $campus_wg_net_cidr));
-
- exit;
}
sub write_wg_server ($$$$$) {
@@ -7868,29 +8001,29 @@ AllowedIPs = $campus_wg_net_cidr\n";
# Assume 24bit subnet, 8bit hostnum.
# Find a Perl library for more generality?
die "$hostnum: hostnum too large\n" if $hostnum > 255;
- my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
- die if !$prefix;
- return "$prefix.$hostnum";
+ my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
+ die if !$prefix;
+ return "$prefix.$hostnum";
}
-sub hostnum_to_ipaddr_cidr ($$)
+sub hostnum_to_ipaddr_cidr ($$)
{
- my ($hostnum, $net_cidr) = @_;
+ my ($hostnum, $net_cidr) = @_;
- # Assume 24bit subnet, 8bit hostnum.
- # Find a Perl library for more generality?
- die "$hostnum: hostnum too large\n" if $hostnum > 255;
- my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
- die if !$prefix;
- return "$prefix.$hostnum/24";
-}
+ # Assume 24bit subnet, 8bit hostnum.
+ # Find a Perl library for more generality?
+ die "$hostnum: hostnum too large\n" if $hostnum > 255;
+ my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
+ die if !$prefix;
+ return "$prefix.$hostnum/24";
+}
-
-12.11. Institute Command Help
-
+
+12.12. Institute Command Help
+
This should be the last block tangled into the inst
script. It
catches any command lines that were not handled by a sub-command
@@ -7905,8 +8038,8 @@ above.
-
-13. Testing
+
+13. Testing
The example files in this document, ansible.cfg
and hosts
as well
@@ -7914,9 +8047,9 @@ as those in public/
and priva
certificate authority and GnuPG key-ring in Secret/
(included in the
distribution), can be used to configure three VirtualBox VMs
simulating Core, Gate and Front in test networks simulating a private
-Ethernet, an untrusted Ethernet, the campus ISP, and a commercial
-cloud. With the test networks up and running, a simulated member's
-notebook can be created and alternately attached to the untrusted
+Ethernet, a wild (untrusted) Ethernet, the campus ISP, and a
+commercial cloud. With the test networks up and running, a simulated
+member's notebook can be created and alternately attached to the wild
Ethernet (as though it were on the campus Wi-Fi) or the Internet (as
though it were abroad). The administrator's notebook in this
simulation is the VirtualBox host.
@@ -7925,7 +8058,7 @@ simulation is the VirtualBox host.
The next two sections list the steps taken to create the simulated
Core, Gate and Front machines, and connect them to their networks.
-The process is similar to that described in The (Actual) Hardware, but
+The process is similar to that described in The (Actual) Hardware, but
is covered in detail here where the VirtualBox hypervisor can be
assumed and exact command lines can be given (and copied during
re-testing). The remaining sections describe the manual testing
@@ -7941,15 +8074,15 @@ HTML version of the latest revision can be found on the official web
site at https://www.virtualbox.org/manual/UserManual.html.
-
-13.1. The Test Networks
+
+13.1. The Test Networks
The networks used in the test:
-premises- A NAT Network, simulating the cloud provider's and
+
public- A NAT Network, simulating the cloud provider's and
campus ISP's networks. This is the only network with DHCP and DNS
services provided by the hypervisor. It is not the default NAT
network because
gate and front need to communicate.
@@ -7959,76 +8092,184 @@ private Ethernet switch. It has no services, no DHCP, just the host
machine at 192.168.56.10 pretending to be the administrator's
notebook.
-vboxnet1- Another Host-only network, simulating the untrusted
-Ethernet between Gate and the campus IoT (and Wi-Fi APs). It has no
-services, no DHCP, just the host at
192.168.57.2, simulating the
-NATed Wi-Fi network.
-
+vboxnet1- Another Host-only network, simulating the wild
+Ethernet between Gate and the campus IoT (and Wi-Fi APs). It has no
+services, no DHCP, just the host at 192.168.57.2.
+
+vboxnet2- A third Host-only network, used only to directly
+connect the host to front.
+
+
+
+In this simulation the IP address for front is not a public address
+but a private address on the NAT network public. Thus front is
+not accessible by the host, i.e. by Ansible on the administrator's
+notebook. To work around this restriction, front gets a second
+network interface connected to the vboxnet2 network. The address of
+this second interface is used by Ansible to access front.4
+
+
+
+The networks described above are created and "started" with the
+following VBoxManage commands.
+
+
+
+VBoxManage natnetwork add --netname public \
+ --network 192.168.15.0/24 \
+ --enable --dhcp on --ipv6 off
+VBoxManage natnetwork start --netname public
+VBoxManage dhcpserver modify --network=public --lower-ip=192.168.15.5
+VBoxManage hostonlyif create # vboxnet0
+VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
+VBoxManage hostonlyif create # vboxnet1
+VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2
+VBoxManage hostonlyif create # vboxnet2
+VBoxManage hostonlyif ipconfig vboxnet2 --ip=192.168.58.1
+
+
+
+
+Note that only the NAT network public should have a DHCP server
+enabled (to simulate an ISP and cloud for gate and front
+respectively). Yet front is statically assigned an IP address
+outside the DHCP server's pool. This ensures it gets front_addr
+without more server configuration.
+
+
+
+Note also that actual ISPs and clouds will provide Gate and Front with
+public network addresses. In this simulation "they" provide addresses
+in 192.168.15.0/24, on the NAT network public.
+
+
+
+
+13.2. The Test Machines
+
+
+The virtual machines are created by VBoxManage command lines in the
+following sub-sections. They each start with a recent Debian release
+(e.g. debian-12.5.0-amd64-netinst.iso
) in their simulated DVD
+drives. Preparation of The Hardware installed additional software
+packages and keys while the machines had Internet access. They were
+then moved to the new campus network where Ansible completed the
+configuration without Internet access.
+
+
+
+Preparation of the test machines is automated by "preparatory scripts"
+that install the same "additional software packages" and the same test
+keys given in the examples. The scripts are run on each VM while they
+are still attached to the host's NAT network and have Internet access.
+They prepare the machine to reboot on the simulated campus network
+without Internet access, ready for final configuration by Ansible and
+the first launch of services. The "move to campus" is simulated by
+shutting each VM down, executing a VBoxManage command line or two,
+and restarting.
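+
+The "move" of core, for example, might look like the following
+sketch (assuming its first NIC started on the default NAT network):
+
+VBoxManage controlvm core acpipowerbutton
+VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
+VBoxManage startvm core --type headless
+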
+
+
+
+13.2.1. The Test WireGuard™ Keys
+
+
+All of the private keys used in the example/test configuration are
+listed here. The first three are copied to
+/etc/wireguard/private-key
on the servers: front, gate and
+core. The rest are installed on the test client to give it
+different personae. In actual use, private keys are generated on the
+servers and clients, and stay there. Only the public keys are
+collected (and registered with the ./inst client command).
+
+
+
+
+
+
+
+
+
+
+
+
+Test Host
+WireGuard™ Private Key
+
+
+
+
+front
+AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms=
+
+
+
+gate
+yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U=
+
-
-In this simulation the IP address for front is not a public address
-but a private address on the NAT network premises. Thus front is
-not accessible to the administrator's notebook (the host). To work
-around this restriction, front gets a second network interface
-connected to the vboxnet1 network and used only for ssh access from
-the host.4
-
+
+core
+AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI=
+
-
-The networks described above are created and "started" with the
-following VBoxManage commands.
-
+
+thing
+KIwQT5eGOl9w1qOa5I+2xx5kJH3z4xdpmirS/eGdsXY=
+
-
-VBoxManage natnetwork add --netname premises \
- --network 192.168.15.0/24 \
- --enable --dhcp on --ipv6 off
-VBoxManage natnetwork start --netname premises
-VBoxManage hostonlyif create # vboxnet0
-VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
-VBoxManage dhcpserver modify --interface=vboxnet0 --disable
-VBoxManage hostonlyif create # vboxnet1
-VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2
-
-
+
+dick
+WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs=
+
-
-Note that the first host-only network, vboxnet0, gets DHCP service
-by default, but that will interfere with the service being tested on
-core, so it must be explicitly disabled. Only the NAT network
-premises should have a DHCP server enabled.
-
+
+dicks-phone
+oG/Kou9HOBCBwHAZGypPA1cZWUL6nR6WoxBiXc/OQWQ=
+
-
-Note also that actual ISPs and clouds will provide Gate and Front with
-public network addresses. In this simulation "they" provide addresses
-on the private 192.168.15.0/24 network.
-
+
+dicks-razr
+IGNcF0VpkIBcJQAcLZ9jgRmk0SYyUr/WwSNXZoXXUWQ=
+
+
+
-
-13.2. The Test Machines
-
+
+13.2.2. Ansible Test Authorization
+
-The virtual machines are created by VBoxManage command lines in the
-following sub-sections. They each start with a recent Debian release
-(e.g. debian-12.5.0-amd64-netinst.iso
) in their simulated DVD
-drives. As in The Hardware preparation process being simulated, a few
-additional software packages are installed. Unlike in The Hardware
-preparation, machines are moved to their final networks and then
-remote access is authorized. (They are not accessible via ssh on
-the VirtualBox NAT network where they first boot.)
+Part of each machine's preparation is to authorize password-less SSH
+connections from Ansible, which will be using the public key in
+Secret/ssh_admin/
. This is common to all machines and so is
+provided here tagged with test-auth and used via noweb reference
+<<test-auth>> in each machine's preparatory script.
-
-Once the administrator's notebook is authorized to access the
-privileged accounts on the virtual machines, they are prepared for
-configuration by Ansible.
-
+
+test-auth( cd
+ umask 077
+ if [ ! -d .ssh ]; then mkdir .ssh; fi
+ ( echo -n "ssh-rsa"
+ echo -n " AAAAB3NzaC1yc2EAAAADAQABAAABgQDXxXnqFaUq3WAmmW/P8OMm3cf"
+ echo -n "AGJoL1UC8yjbsRzt63RmusID2CvPTJfO/sbNAxDKHPBvYJqiwBY8Wh2V"
+ echo -n "BDXoO2lWAK9JOSvXMZZRmBh7Yk6+NsPSbeZ6H3DgzdmKubs4E5XEdkmO"
+ echo -n "iivyiGBWiwzDKAOqWvb60yWDDNEuHyGNznKjyL+nAOzul1hP5f23vX3e"
+ echo -n "VhTxV0zdClksvIppGsYY3EvhMxasnjvGOhECz1Pq/9PPxakY1kBKMFj8"
+ echo -n "yh75UfYJyRiUcFUVZD/dQyDMj7gtihv4ANiUAIgn94I4Gt9t8a2OiLyr"
+ echo -n "KhJAwTQrs4CA+suY+3uDcp2FuQAvuzpa2moUufNetQn9YYCpCQaio8I3"
+ echo -n "N9N5POqPGtNT/8Fv1wwWsl/T363NJma7lrtQXKgq52YYmaUNnHxPFqLP"
+ echo -n "/9ELaAKbKrXTel0ew/LyVEO6QJ6fU7lE3LYMF5DngleOpuOHyQdIJKvS"
+ echo -n "oCb7ilDuG8ekZd3ZEROhtyHlr7UcHrtmZMYjhlRc="
+ echo " A Small Institute Administrator" ) \
+ >>.ssh/authorized_keys )
+
-
-13.2.1. A Test Machine
-
+
+
+
+13.2.3. A Test Machine
+
The following shell function contains most of the VBoxManage
commands needed to create the test machines. The name of the machine
@@ -8081,12 +8322,13 @@ create_vm
-Soon after starting, the machine console should show the installer's
-first prompt: to choose a system language. Installation on the small
-machines, front and gate, may put the installation into "low
-memory mode", in which case the installation is textual, the system
-language is English, and the first prompt is for location. The
-appropriate responses to the prompts are given in the list below.
+Soon after starting, the machine console shows the Debian GNU/Linux
+installer menu and the default "Graphical Install" is chosen. On the
+machines with only 512MB of RAM, front and gate, the installer
+switches to a text screen and warns it is using a "Low memory mode".
+The installation proceeds in English and its first prompt is for a
+location. The appropriate responses to this and subsequent prompts
+are given in the list below.
@@ -8096,16 +8338,17 @@ appropriate responses to the prompts are given in the list below.
Select your location
-- Country, territory or area: United States
+- Continent or region: 9 (North America, if in low memory mode!)
+- Country, territory or area: 4 (United States)
Configure the keyboard
-- Keymap to use: American English
+- Keymap to use: 1 (American English)
Configure the network
-- Hostname: front (gate, core, etc.)
-- Domain name: small.example.org (small.private)
+- Hostname: small (gate, core, etc.)
+- Domain name: example.org (small.private)
Set up users and passwords.
@@ -8116,23 +8359,23 @@ appropriate responses to the prompts are given in the list below.
Configure the clock
-- Select your time zone: Eastern
+- Select your time zone: 3 (Mountain)
Partition disks
-- Partitioning method: Guided - use entire disk
-- Select disk to partition: SCSI3 (0,0,0) (sda) - …
-- Partitioning scheme: All files in one partition
-- Finish partitioning and write changes to disk: Continue
-- Write the changes to disks? Yes
+- Partitioning method: 1 (Guided - use entire disk)
+- Select disk to partition: 1 (SCSI2 (0,0,0) (sda) - …)
+- Partitioning scheme: 1 (All files in one partition)
+- 12 (Finish partitioning and write changes to disk …)
+- Write the changes to disks? 1 (Yes)
-Install the base system
+Installing the base system
Configure the package manager
-- Scan extra installation media? No
-- Debian archive mirror country: United States
-- Debian archive mirror: deb.debian.org
-- HTTP proxy information (blank for none): <blank>
+- Scan extra installation media? 2 (No)
+- Debian archive mirror country: 62 (United States)
+- Debian archive mirror: 1 (deb.debian.org)
+- HTTP proxy information (blank for none): <localnet apt cache>
Configure popularity-contest
@@ -8140,8 +8383,7 @@ appropriate responses to the prompts are given in the list below.
Software selection
-- SSH server
-- standard system utilities
+- Choose software to install: SSH server, standard system utilities
Install the GRUB boot loader
@@ -8151,16 +8393,16 @@ appropriate responses to the prompts are given in the list below.
-After the reboot, the machine's console should produce a login:
-prompt. The administrator logs in here, with username sysadm and
-password fubar, before continuing with the specific machine's
-preparation (below).
+After the reboot, the machine's console produces a login: prompt.
+The administrator logs in here, with username sysadm and password
+fubar, before continuing with the specific machine's preparation
+(below).
-
-13.2.2. The Test Front Machine
-
+
+13.2.4. The Test Front Machine
+
The front machine is created with 512MiB of RAM, 4GiB of disk, and
Debian 12.5.0 (recently downloaded) in its CDROM drive. The exact
@@ -8168,77 +8410,198 @@ command lines were given in the previous section.
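+
+A sketch of that creation, assuming RAM and DISK take sizes in MiB as
+they do for core below (the exact variable values here are
+assumptions matching the stated 512MiB and 4GiB):
+
+NAME=front RAM=512 DISK=4096 create_vm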
-After Debian is installed (as detailed above) front is shut down and
-its primary network interface moved to the simulated Internet, the NAT
-network premises. front also gets a second network interface, on
-the host-only network vboxnet1, to make it directly accessible to
-the administrator's notebook (as described in The Test Networks).
+After Debian is installed (as detailed above) and the machine
+rebooted, the administrator copies the following script to the machine
+and executes it.
+
+
+
+The script is copied through an intermediary: an account on the local
+network that is accessible both to the host and to guests on the
+host's NAT networks. If USER@SERVER is such an account, the script
+is copied and executed as follows:
+
+
+
+notebook$ scp private/test-front-prep USER@SERVER:
+notebook$ scp -r Secret/ssh_front/ USER@SERVER:
+
+
+
+sysadm@front$ scp USER@SERVER:test-front-prep ./
+sysadm@front$ scp -r USER@SERVER:ssh_front/ ./
+sysadm@front$ ./test-front-prep
+
+
+
+The script starts by installing additional software packages. The
+wireguard package is installed so that /etc/wireguard/ is created.
+The systemd-resolved package is installed because a reboot seems the
+only way to get name service working afterwards. As front will
+always have Internet access in the cloud, the rest of the packages are
+installed just to shorten Ansible's work later.
+
+
+
+private/test-front-prep
#!/bin/bash -e
+
+sudo apt install wireguard systemd-resolved \
+ unattended-upgrades postfix dovecot-imapd rsync apache2 kamailio
+
+
+
+
+The Postfix installation prompts for a couple settings. The defaults,
+listed below, are fine.
+
+
+
+- General type of mail configuration: Internet Site
+- System mail name: small.example.org
+
+
+
+The script can now install the private WireGuard™ key, as well as
+Ansible's public SSH key.
-VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 premises
-VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet1
+private/test-front-prep
+( umask 377
+ echo "AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms=" \
+ | sudo tee /etc/wireguard/private-key >/dev/null )
+
+<<test-auth>>
-After Debian is installed and the machine rebooted, the administrator
-logs in and configures the "extra" network interface with a static IP
-address using a drop-in configuration file:
-/etc/network/interfaces.d/eth1.
+Next, the network interfaces are configured with static IP addresses.
+In actuality, Front gets no network configuration tweaks. The Debian
+12 default is to broadcast for a DHCP lease on the primary NIC. This
+works in the cloud, which should respond with an offer, though it must
+offer the public, DNS-registered, hard-coded front_addr.
+
+
+
+For testing purposes, the preparation of front replaces the default
+/etc/network/interfaces with a new configuration that statically
+assigns front_addr to the primary NIC and a testing subnet address
+to the second NIC.
-eth1
auto enp0s8
+private/test-front-prep
+( cd /etc/network/; \
+ [ -f interfaces~ ] || sudo mv interfaces interfaces~ )
+cat <<EOF | sudo tee /etc/network/interfaces >/dev/null
+# This file describes the network interfaces available on your system
+# and how to activate them. For more information, see interfaces(5).
+
+source /etc/network/interfaces.d/*
+
+# The loopback network interface
+auto lo
+iface lo inet loopback
+
+# The primary network interface
+auto enp0s3
+iface enp0s3 inet static
+ address 192.168.15.4/24
+ gateway 192.168.15.1
+
+# Testing interface
+auto enp0s8
iface enp0s8 inet static
- address 192.168.57.3/24
+ address 192.168.58.3/24
+EOF
+
+
+
+
+Ansible expects front to use the SSH host keys in
+Secret/ssh_front/, so it is prepared with these keys in advance.
+(If Ansible installed them, front would change identities while
+Ansible was configuring it. Ansible would lose subsequent access
+until the administrator's ~/.ssh/known_hosts was updated!)
+
+
+
+private/test-front-prep
+( cd ssh_front/etc/ssh/
+ chmod 600 ssh_host_*
+ chmod 644 ssh_host_*.pub
+ sudo cp -b ssh_host_* /etc/ssh/ )
-A sudo ifup enp0s8 command then brings the interface up.
+With the preparatory script successfully executed, front is shut
+down and moved to the simulated cloud (from the default NAT network).
+
+
+
+The following VBoxManage commands effect the move, connecting the
+primary NIC to public and a second NIC to the host-only network
+vboxnet2 (making it directly accessible to the administrator's
+notebook as described in The Test Networks).
+
+VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 public
+VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet2
+
+
+
-Note that there is no pre-provisioning for front, which is never
-deployed on a frontier, always in the cloud. Additional Debian
-packages are assumed to be readily available. Thus Ansible installs
-them as necessary, but first the administrator authorizes remote
-access by following the instructions in the final section: Ansible
-Test Authorization.
+front is now prepared for configuration by Ansible.
-
-13.2.3. The Test Gate Machine
-
+
+13.2.5. The Test Gate Machine
+
The gate machine is created with the same amount of RAM and disk as
front. Assuming the RAM, DISK, and ISO shell variables have
-not changed, gate can be created with two commands.
+not changed, gate can be created with one command.
-NAME=gate
-create_vm
+NAME=gate create_vm
-After Debian is installed (as detailed in A Test Machine) and the
-machine rebooted, the administrator logs in and installs several
-additional software packages.
+After Debian is installed (as detailed in A Test Machine) and the
+machine rebooted, the administrator copies the following script to the
+machine and executes it.
+
+
+
+notebook$ scp private/test-gate-prep USER@SERVER:
+
+
+
+sysadm@gate$ scp USER@SERVER:test-gate-prep ./
+sysadm@gate$ ./test-gate-prep
+
+
+
+The script starts by installing additional software packages.
-sudo apt install netplan.io systemd-resolved unattended-upgrades \
- ufw isc-dhcp-server postfix wireguard
+private/test-gate-prep
#!/bin/bash -e
+
+sudo apt install wireguard systemd-resolved unattended-upgrades \
+ postfix ufw lm-sensors nagios-nrpe-server
-Again, the Postfix installation prompts for a couple settings. The
-defaults, listed below, are fine.
+The Postfix installation prompts for a couple settings. The defaults,
+listed below, are fine.
@@ -8247,18 +8610,66 @@ defaults, listed below, are fine.
-gate can then move to the campus. It is shut down before the
-following VBoxManage commands are executed. The commands disconnect
-the primary Ethernet interface from premises and connect it to
-vboxnet0. They also create two new interfaces, isp and wild,
-connected to the simulated ISP and campus wireless access point.
+The script then installs the private WireGuard™ key, as well as
+Ansible's public SSH key.
+
+
+
+private/test-gate-prep
( umask 377
+ echo "yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U=" \
+ | sudo tee /etc/wireguard/private-key >/dev/null )
+
+<<test-auth>>
+
+
+
+
+Next, the script configures the primary NIC with 10-lan.link and
+10-lan.network files installed in /etc/systemd/network/. (This is
+sufficient to allow remote access by Ansible.)
+
+
+
+private/test-gate-prep
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.link >/dev/null
+[Match]
+MACAddress=08:00:27:f3:16:79
+
+[Link]
+Name=lan
+EOD
+
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.network >/dev/null
+[Match]
+MACAddress=08:00:27:f3:16:79
+
+[Network]
+Address=192.168.56.2/24
+DNS=192.168.56.1
+Domains=small.private
+EOD
+
+sudo systemctl --quiet enable systemd-networkd
+
+
+
+
+With the preparatory script successfully executed, gate is shut down
+and moved to the campus network (from the default NAT network).
+
+
+
+The following VBoxManage commands effect the move, connecting the
+primary NIC to vboxnet0 and creating two new interfaces, isp and
+wild. These are connected to the simulated ISP and the simulated
+wild Ethernet (e.g. campus wireless access points, IoT, whatnot).
VBoxManage modifyvm gate --mac-address1=080027f31679
VBoxManage modifyvm gate --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm gate --mac-address2=0800273d42e5
-VBoxManage modifyvm gate --nic2 natnetwork --natnetwork2 premises
+VBoxManage modifyvm gate --nic2 natnetwork --natnetwork2 public
VBoxManage modifyvm gate --mac-address3=0800274aded2
VBoxManage modifyvm gate --nic3 hostonly --hostonlyadapter3 vboxnet1
@@ -8299,7 +8710,7 @@ values of the MAC address variables in this table.
enp0s8
-premises
+public
campus ISP
gate_isp_mac
@@ -8314,61 +8725,60 @@ values of the MAC address variables in this table.
-After gate boots up with its new network interfaces, the primary
-Ethernet interface is temporarily configured with an IP address.
-(Ansible will install a Netplan soon.)
-
-
-
-sudo ip address add 192.168.56.2/24 dev enp0s3
-
-
-
-
-Finally, the administrator authorizes remote access by following the
-instructions in the final section: Ansible Test Authorization.
+gate is now prepared for configuration by Ansible.
-
-13.2.4. The Test Core Machine
-
+
+13.2.6. The Test Core Machine
+
The core machine is created with 2GiB of RAM and 6GiB of disk.
Assuming the ISO shell variable has not changed, core can be
-created with following commands.
+created with the following command.
-NAME=core
-RAM=2048
-DISK=6144
-create_vm
+NAME=core RAM=2048 DISK=6144 create_vm
-After Debian is installed (as detailed in A Test Machine) and the
-machine rebooted, the administrator logs in and installs several
-additional software packages.
+After Debian is installed (as detailed in A Test Machine) and the
+machine rebooted, the administrator copies the following script to the
+machine and executes it.
+
+
+
+notebook$ scp private/test-core-prep USER@SERVER:
+
+
+
+sysadm@core$ scp USER@SERVER:test-core-prep ./
+sysadm@core$ ./test-core-prep
+
+
+
+The script starts by installing additional software packages.
-sudo apt install netplan.io systemd-resolved unattended-upgrades \
- ntp isc-dhcp-server bind9 apache2 wireguard \
- postfix dovecot-imapd fetchmail expect rsync \
- gnupg
-sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
+private/test-core-prep
#!/bin/bash -e
+
+sudo apt install wireguard systemd-resolved unattended-upgrades \
+ chrony isc-dhcp-server bind9 apache2 postfix \
+ dovecot-imapd fetchmail rsync gnupg \
+ mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
php-{json,mysql,mbstring,intl,imagick,xml,zip} \
- libapache2-mod-php
-sudo apt install nagios4 monitoring-plugins-basic lm-sensors \
+ imagemagick libapache2-mod-php \
+ nagios4 monitoring-plugins-basic lm-sensors \
nagios-nrpe-plugin
-Again the Postfix installation prompts for a couple settings. The
-defaults, listed below, are fine.
+The Postfix installation prompts for a couple settings. The defaults,
+listed below, are fine.
@@ -8377,135 +8787,109 @@ defaults, listed below, are fine.
-And domain name resolution may be broken after installing
-systemd-resolved. A reboot is often needed after the first apt
-install command above.
-
-
-
-Before shutting down, the name of the primary Ethernet interface
-should be compared to the example variable setting in
-private/vars.yml
. The value assigned to core_ethernet should
-match the interface name.
-
-
-
-core can now move to the campus. It is shut down before the
-following VBoxManage command is executed. The command connects the
-machine's NIC to vboxnet0, which simulates the campus's private
-Ethernet.
+The script can now install the private WireGuard™ key, as well as
+Ansible's public SSH key.
-VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
-
-
-
-
-After core boots up with its new network connection, its primary NIC
-is temporarily configured with an IP address. (Ansible will install a
-Netplan soon.)
-
+private/test-core-prep
( umask 377
+ echo "AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI=" \
+ | sudo tee /etc/wireguard/private-key >/dev/null )
-
-sudo ip address add 192.168.56.1/24 dev enp0s3
+<<test-auth>>
-Finally, the administrator authorizes remote access by following the
-instructions in the next section: Ansible Test Authorization.
-
-
-
-
-13.2.5. Ansible Test Authorization
-
-
-To authorize Ansible's access to the three test machines, they must
-allow remote access to their sysadm accounts. In the following
-commands, the administrator must use IP addresses to copy the public
-key to each test machine.
+Next, the script configures the primary NIC with 10-lan.link and
+10-lan.network files installed in /etc/systemd/network/.
-SRC=Secret/ssh_admin/id_rsa.pub
-scp $SRC sysadm@192.168.57.3:admin_key # Front
-scp $SRC sysadm@192.168.56.2:admin_key # Gate
-scp $SRC sysadm@192.168.56.1:admin_key # Core
-
-
+private/test-core-prep
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.link >/dev/null
+[Match]
+MACAddress=08:00:27:b3:e5:5f
-
-Then the key must be installed on each machine with the following
-command line (entered at each console, or in an SSH session with
-each machine).
-
-
-
-( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
+[Link]
+Name=lan
+EOD
+
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.network >/dev/null
+[Match]
+MACAddress=08:00:27:b3:e5:5f
+
+[Network]
+Address=192.168.56.1/24
+Gateway=192.168.56.2
+DNS=192.168.56.1
+Domains=small.private
+EOD
+
+sudo systemctl --quiet enable systemd-networkd
-The front machine needs a little additional preparation. Ansible
-will configure front with the host keys in Secret/. These should
-be installed there now so that front does not appear to change
-identities while Ansible is configuring.
+With the preparatory script successfully executed, core is shut down
+and moved to the campus network (from the default NAT network).
-First, the host keys are securely copied to front with the following
-command.
+The following VBoxManage commands effect the move, connecting the
+primary NIC to vboxnet0.
-scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@192.168.57.3:
+VBoxManage modifyvm core --mac-address1=080027b3e55f
+VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
-Then they are installed with these commands.
+core is now prepared for configuration by Ansible.
-
-
-chmod 600 ssh_host_*
-chmod 644 ssh_host_*.pub
-sudo cp -b ssh_host_* /etc/ssh/
-
-
-
-
-Finally, the system administrator removes the old identity of front.
-
-
-
-ssh-keygen -f ~/.ssh/known_hosts -R 192.168.57.3
-
-
-13.3. Configure Test Machines
+
+13.3. Configure Test Machines
At this point the three test machines core, gate, and front are
running fresh Debian systems with select additional packages, on their
final networks, with a privileged account named sysadm that
authorizes password-less access from the administrator's notebook,
-ready to be configured by Ansible.
+ready to be configured by Ansible. However, the administrator's
+notebook may not recognize the test VMs or, worse yet, may remember
+different public keys for them (from previous test machines). For
+this reason, the administrator executes the following commands before
+the initial ./inst config.
+
+
+
+ssh sysadm@192.168.56.1 date
+ssh sysadm@192.168.56.2 date
+ssh sysadm@192.168.58.3 date
+./inst config
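+
+If the notebook's ~/.ssh/known_hosts still holds keys from a previous
+round of test machines, ssh will refuse to connect rather than
+prompt. In that case the stale entries are removed first, e.g. with
+ssh-keygen's -R option:
+
+ssh-keygen -R 192.168.56.1
+ssh-keygen -R 192.168.56.2
+ssh-keygen -R 192.168.58.3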
+
+
+
+Note that this initial run should exercise all of the handlers, and
+that subsequent runs probably do not.
-To configure the test machines, the ./inst config command is
-executed and core restarted. Note that this first run should
-exercise all of the handlers, and that subsequent runs probably do
-not.
+Assuming the ./inst config command completed successfully, gate is
+restarted before testing begins. Basic networking tests will fail
+unless the interfaces on gate are renamed, and nothing less than a
+restart will get systemd-udevd to rename the isp and wild
+interfaces.
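+
+A sketch of that restart, assuming the same SSH access used above
+(the -t option allocates a terminal so sudo can prompt for its
+password):
+
+ssh -t sysadm@192.168.56.2 sudo reboot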
-
-13.4. Test Basics
+
+13.4. Test Basics
At this point the test institute is just core, gate and front,
@@ -8527,7 +8911,7 @@ forwarding (and NATing). On core (and gate):
ping -c 1 8.8.4.4 # dns.google
-ping -c 1 192.168.15.5 # front_addr
+ping -c 1 192.168.15.4 # front_addr
@@ -8567,12 +8951,12 @@ instant attention).
-
-13.5. The Test Nextcloud
+
+13.5. The Test Nextcloud
Further tests involve Nextcloud account management. Nextcloud is
-installed on core as described in Configure Nextcloud. Once
+installed on core as described in Install Nextcloud. Once
/Nextcloud/ is created, ./inst config core will validate
or update its configuration files.
@@ -8594,8 +8978,8 @@ with the ./inst client command.
-
-13.6. Test New Command
+
+13.6. Test New Command
A member must be enrolled so that a member's client machine can be
@@ -8615,8 +8999,8 @@ Take note of Dick's initial password.
-
-13.7. The Test Member Notebook
+
+13.7. The Test Member Notebook
A test member's notebook is created next, much like the servers,
@@ -8644,7 +9028,7 @@ behind) the access point.
-Debian is installed much as detailed in A Test Machine except that
+Debian is installed much as detailed in A Test Machine except that
the SSH server option is not needed and the GNOME desktop option
is. When the machine reboots, the administrator logs into the
desktop and installs a couple additional software packages (which
@@ -8652,31 +9036,32 @@ require several more).
-sudo apt install wireguard nextcloud-desktop evolution
+sudo apt install systemd-resolved \
+ wireguard nextcloud-desktop evolution
-
-13.8. Test Client Command
+
+13.8. Test Client Command
The ./inst client command is used to register the public key of a
client wishing to connect to the institute's VPNs. In this test, new
member Dick wants to connect his notebook, dick, to the institute
VPNs. First he generates a pair of WireGuard™ keys by running the
-following commands on Dick's notebook.
+following commands on his notebook.
-( umask 077; wg genkey >private)
-wg pubkey <private >public
+( umask 077; wg genkey \
+ | sudo tee /etc/wireguard/private-key ) | wg pubkey
-The administrator uses the key in public
to run the following
-command, generating campus.conf
and public.conf
files.
+Dick then sends the resulting public key to the administrator, who
+runs the following command.
@@ -8684,38 +9069,25 @@ command, generating campus.conf
and public.conf
files.
4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
-
-
-
-13.9. Test Campus WireGuard™ Subnet
-
-
-The campus.conf WireGuard™ configuration file (generated in Test
-Client Command) is transferred to dick, which is at the Wi-Fi access
-point's IP address, host 2 on the wild Ethernet.
-
-
-
-scp *.conf sysadm@192.168.57.2:
-
-
-
-
-Dick then pastes his notebook's private key into the template
-campus.conf
file and installs the result in
-/etc/wireguard/wg0.conf
, doing the same to complete public.conf
-and install it in /etc/wireguard/wg1.conf
.
-
-To connect to the campus VPN, the following command is run.
+The command generates campus.conf and public.conf configuration
+files, which the administrator sends openly (e.g. in email) to Dick.
+Dick then installs the configuration files in /etc/wireguard/ and
+creates the campus interface.
-systemctl start wg-quick@wg0
+sudo cp {campus,public}.conf /etc/wireguard/
+sudo wg-quick up campus
+sudo systemctl enable wg-quick@campus
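+
+A quick way to confirm that the interface came up (an extra check,
+not part of the original procedure) is the wg tool's status report:
+
+sudo wg show campus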
-
+
+
+
+13.9. Test Campus WireGuard™ Subnet
+
A few basic tests are then performed in a terminal.
@@ -8731,8 +9103,8 @@ host www
-
-13.10. Test Web Pages
+
+13.10. Test Web Pages
Next, the administrator copies Backup/WWW/
(included in the
@@ -8741,10 +9113,10 @@ appropriately.
-sudo chown -R sysadm.staff /WWW/campus
-sudo chown -R monkey.staff /WWW/live /WWW/test
+sudo chown -R monkey:staff /WWW/campus /WWW/live /WWW/test
sudo chmod 02775 /WWW/*
sudo chmod 664 /WWW/*/index.html
+sudo -u monkey /usr/local/sbin/webupdate
@@ -8761,61 +9133,51 @@ the source file.
http://live.small.private/
http://test/
http://test.small.private/
-http://small.example.org/
-The last URL should re-direct to https://small.example.org/, which
-uses a certificate (self-)signed by an unknown authority. Firefox
-will warn but allow the luser to continue.
+The first will probably be flagged as unverifiable, signed by an
+unknown issuer, etc. Otherwise, each should be accessible, displaying
+a short description of the simulated website.
-
-
-
-13.11. Test Web Update
-
+
-Modify /WWW/live/index.html
on core and wait 15 minutes for it to
-appear as https://small.example.org/ (and in /home/www/index.html
-on front).
+The simulated public web site at http://192.168.15.4/ is also
+tested. It should redirect to https://small.example.org/, which
+does not exist. However, the web site at https://192.168.15.4/
+(with httpS) should exist and produce a legible page (after the
+usual warnings).
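+
+The same check can be made from a terminal; a sketch using curl,
+whose -k flag tolerates the self-signed certificate:
+
+curl -k https://192.168.15.4/ | head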
-Hack /home/www/index.html
on front and observe the result at
-https://small.example.org/. Wait 15 minutes for the correction.
+Next, the administrator modifies /WWW/live/index.html on core and
+waits 15 minutes for the edit to appear in the web page at
+https://192.168.15.4/ (and in the file /home/www/index.html on
+front). The same is done to /home/www/index.html on front, where
+the edit is observed immediately, and its correction within 15 minutes.
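+
+One way to watch for the propagation, a sketch assuming the edit
+added a recognizable marker string to the page:
+
+until curl -sk https://192.168.15.4/ | grep -q 'marker string'
+do sleep 60; done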
-
-13.12. Test Nextcloud
-
+
+13.11. Test Nextcloud
+
-Nextcloud is typically installed and configured after the first
-Ansible run, when core has Internet access via gate. Until the
-installation directory /Nextcloud/nextcloud/
appears, the Ansible
-code skips parts of the Nextcloud configuration. The same
-installation (or restoration) process used on Core is used on core
-to create /Nextcloud/
. The process starts with Create
-/Nextcloud/
, involves Restore Nextcloud or Install Nextcloud,
-and runs ./inst config core again 8.23.6. When the ./inst
-config core command is happy with the Nextcloud configuration on
-core, the administrator uses Dick's notebook to test it, performing
-the following tests on dick's desktop.
+Using the browser on the simulated member notebook, the Nextcloud
+installation on core can be completed. The following steps are
+performed on dick's desktop.
-- Use a web browser to get
http://core/nextcloud/. It should be a
-warning about accessing Nextcloud by an untrusted name.
+- Get
http://core/nextcloud/. The attempt produces a warning about
+using Nextcloud via an untrusted name.
-- Get
https://core.small.private/nextcloud/. It should be a
-login web page.
+- Get
https://core.small.private/nextcloud/. Receive a login page.
- Login as
sysadm with password fubar.
- Examine the security & setup warnings in the Settings >
Administration > Overview web page. A few minor warnings are
-expected (besides the admonishment about using
http rather than
-https).
+expected.
- Download and enable Calendar and Contacts in the Apps > Featured web
page.
@@ -8823,17 +9185,16 @@ page.
- Logout and login as
dick with Dick's initial password (noted
above).
-- Use the Nextcloud app to sync
~/nextCloud/
with the cloud. In the
-Nextcloud app's Connection Wizard (the initial dialog), choose to
-"Log in to your Nextcloud" with the URL
-https://core.small.private/nextcloud. The web browser should pop
-up with a new tab: "Connect to your account". Press "Log in" and
-"Grant access". The Nextcloud Connection Wizard then prompts for
-sync parameters. The defaults are fine. Presumably the Local
-Folder is /home/sysadm/Nextcloud/
.
+- Use the Nextcloud app to sync
~/Nextcloud/
with the cloud. In the
+Nextcloud Desktop app's Connection Wizard (the initial dialog),
+login with the URL https://core.small.private/nextcloud. The web
+browser should pop up with a new tab: "Connect to your account".
+Press "Log in" and "Grant access". The Nextcloud Connection Wizard
+then prompts for sync parameters. The defaults are fine.
+Presumably the Local Folder is /home/sysadm/Nextcloud/
.
-- Drop a file in
~/Nextcloud/
, use the app to force a sync, and find
-the file in the Files web page.
+- Drop a file in
~/Nextcloud/
, then find it in the Nextcloud Files
+web page.
Create a Mail account in Evolution. This step does not involve
@@ -8875,9 +9236,9 @@ the calendar.
-
-13.13. Test Email
-
+
+13.12. Test Email
+
With Evolution running on the member notebook dick, one-second email
delivery can be demonstrated. The administrator runs the following
@@ -8904,18 +9265,19 @@ Outgoing email is also tested. A message to
-
-13.14. Test Public VPN
-
+
+13.13. Test Public VPN
+
At this point, dick can move abroad, from the campus Wi-Fi
(host-only network vboxnet1) to the broader Internet (the NAT
-network premises). The following command makes the change. The
-machine does not need to be shut down.
+network public). The following command makes the change. The
+machine does not need to be shut down if the GUI is used to change its
+NIC.
-VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
+VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 public
@@ -8934,7 +9296,7 @@ Again, some basics are tested in a terminal.
-ping -c 1 8.8.4.4 # dns.google
+ping -c 1 8.8.8.8 # dns.google
ping -c 1 192.168.56.1 # core
host dns.google
host core.small.private
@@ -8943,17 +9305,19 @@ host www
-And these web pages are fetched with a browser.
+And, again, these web pages are fetched with a browser.
-- http://www/
-- http://www.small.private/
-- http://live/
-- http://live.small.private/
-- http://test/
-- http://test.small.private/
-- http://small.example.org/
+http://www/
+http://www.small.private/
+http://live/
+http://live.small.private/
+http://test/
+http://test.small.private/
+http://192.168.15.4/
+https://192.168.15.4/
+http://core.small.private/nextcloud/
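+
+The same round of fetches can be scripted; a sketch that reports just
+the HTTP status lines:
+
+for url in http://www/ http://live/ http://test/ \
+           http://192.168.15.4/ https://192.168.15.4/; do
+    curl -skI "$url" | head -n1
+done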
@@ -8963,19 +9327,23 @@ calendar events.
-
-13.15. Test Pass Command
-
+
+13.14. Test Pass Command
+
To test the ./inst pass command, the administrator logs in to core
as dick and runs passwd. A random password is entered, more
-obscure than fubar (else Nextcloud will reject it!). The
-administrator then finds the password change request message in the
-most recent file in /home/sysadm/Maildir/new/
and pipes it to the
-./inst pass command. The administrator might do that by copying the
-message to a more conveniently named temporary file on core,
-e.g. ~/msg
, copying that to the current directory on the notebook,
-and feeding it to ./inst pass on its standard input.
+obscure than fubar (else Nextcloud will reject it!).
+
+
+
+The administrator then finds the password change request message in
+the most recent file in /home/sysadm/Maildir/new/
and pipes it to
+the ./inst pass command. The administrator might do that by copying
+the message to a more conveniently named temporary file on core,
+e.g. ~/msg
, copying that to the current directory on the
+administrator's notebook, and feeding it to ./inst pass on standard
+input.
@@ -8983,7 +9351,8 @@ On core, logged in as sysadm:
-( cd ~/Maildir/new/
+sudo -u dick passwd
+( cd ~/Maildir/new/
cp `ls -1t | head -1` ~/msg )
grep Subject: ~/msg
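+
+Back on the notebook, the message is then fetched and fed to the
+./inst pass command on its standard input, a sketch of the steps
+just described:
+
+scp sysadm@192.168.56.1:msg ./
+./inst pass <msg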
@@ -9011,9 +9380,9 @@ Finally, the administrator verifies that dick can login on co
-
-13.16. Test Old Command
-
+
+13.15. Test Old Command
+
One more institute command is left to exercise. The administrator
retires dick and his main device dick.
@@ -9032,16 +9401,16 @@ fail.
-
-14. Future Work
+
+14. Future Work
The small institute's network, as currently defined in this document,
is lacking in a number of respects.
-
-14.1. Deficiencies
+
+14.1. Deficiencies
The current network monitoring is rudimentary. It could use some
@@ -9067,16 +9436,16 @@ not available on Front, yet.
-
-14.2. More Tests
+
+14.2. More Tests
The testing process described in the previous chapter is far from
complete. Additional tests are needed.
-
-14.2.1. Backup
+
+14.2.1. Backup
The backup command has not been tested. It needs an encrypted
@@ -9085,8 +9454,8 @@ partition with which to sync? And then some way to compare that to
-
-14.2.2. Restore
+
+14.2.2. Restore
The restore process has not been tested. It might just copy Backup/
@@ -9096,8 +9465,8 @@ perhaps permissions too. It could also use an example
-
-14.2.3. Campus Disconnect
+
+14.2.3. Campus Disconnect
Email access (IMAPS) on front is… difficult to test unless
@@ -9112,7 +9481,7 @@ seen.
Find it in /home/dick/Maildir/new/
.
Re-configure Evolution on dick. Edit the dick@small.example.org
mail account (or create a new one?) so that the Receiving Email
-Server name is 192.168.15.5, not mail.small.private. The
+Server name is 192.168.15.4, not mail.small.private. The
latter domain name will not work while the campus is disappeared.
In actual use (with Front, not front), the institute domain name
could be used.
@@ -9121,8 +9490,8 @@ could be used.
-
-15. Appendix: The Bootstrap
+
+15. Appendix: The Bootstrap
Creating the private network from whole cloth (machines with recent
@@ -9142,11 +9511,11 @@ etc.: quite a bit of temporary, manual localnet configuration just to
get to the additional packages.
-
-15.1. The Current Strategy
+
+15.1. The Current Strategy
-The strategy pursued in The Hardware is two phase: prepare the servers
+The strategy pursued in The Hardware is two phase: prepare the servers
on the Internet where additional packages are accessible, then connect
them to the campus facilities (the private Ethernet switch, Wi-Fi AP,
ISP), manually configure IP addresses (while the DHCP client silently
@@ -9154,8 +9523,8 @@ fails), and avoid names until BIND9 is configured.
-
-15.2. Starting With Gate
+
+15.2. Starting With Gate
The strategy of Starting With Gate concentrates on configuring Gate's
@@ -9199,8 +9568,8 @@ ansible-playbook -l core site.yml
-
-15.3. Pre-provision With Ansible
+
+15.3. Pre-provision With Ansible
A refinement of the current strategy might avoid the need to maintain
@@ -9253,7 +9622,7 @@ routes on Front and Gate, making the simulation less… similar.