From a5c258b434ea345d09777656bd4f485594d9142e Mon Sep 17 00:00:00 2001
From: Matt Birkholz
This small institute has a public server on the Internet, Front, that

connects to Front making the institute email, cloud, etc. available
to members off campus.

-+= _|||_ =-The-Institute-=

uses OpenPGP encryption to secure message content.

This small institute prizes its privacy, so there is little or no

month) because of this assumption.

The small institute's network is designed to provide a number of

policies.  On first reading, those subsections should be skipped; they
reference particulars first introduced in the following chapter.
The institute has a public domain, e.g. small.example.org, and a
private domain, e.g. small.private, used for internal host
names like core.
Front provides the public SMTP (Simple Mail Transfer Protocol)
service.  The institute does not
sign outgoing emails per DKIM (Domain Keys Identified Mail) yet.

Example Small Institute SPF Record
TXT	v=spf1 ip4:159.65.75.60 -all
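A quick sanity check on such a record is whether a given sending
address appears in an ip4: mechanism.  The sketch below only tests
that one term against the example record string above (real SPF
evaluation is more involved, and a real check would first fetch the
record with, e.g., dig TXT small.example.org):

```shell
# Sketch: is Front's address covered by the example SPF record?
spf='v=spf1 ip4:159.65.75.60 -all'
ip='159.65.75.60'
case " $spf " in
  *" ip4:$ip "*) result=authorized ;;
  *)             result=unauthorized ;;
esac
echo "$result"
```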
setting for the maximum message size is given in a code block labeled
postfix-message-size, which is included in the
configurations wherever <<postfix-message-size>>
appears.
The institute aims to accommodate encrypted email containing short

handle maxi-messages.

postfix-message-size
- { p: message_size_limit, v: 104857600 }
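The message_size_limit value of 104857600 bytes is exactly 100 MiB,
which a line of shell arithmetic confirms:

```shell
# 100 MiB = 100 * 2^20 bytes, the Postfix message_size_limit above.
limit=$((100 * 1024 * 1024))
echo "$limit"
```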
re-sending the bounce (or just grabbing the go-bag!).

postfix-queue-times
- { p: delay_warning_time, v: 1h }
- { p: maximal_queue_lifetime, v: 4h }
- { p: bounce_queue_lifetime, v: 4h }
disables relaying (other than for the local networks).

postfix-relaying
- p: smtpd_relay_restrictions
  v: permit_mynetworks reject_unauth_destination
effect.

postfix-maildir
- { p: home_mailbox, v: Maildir/ }
in the respective roles below.

The Dovecot settings on both Front and Core disable POP and require

The official documentation for Dovecot once was a Wiki but now is

dovecot-tls
protocols = imap
ssl = required
configuration keeps them from even listening at the IMAP port

dovecot-ports
service imap-login {
  inet_listener imap {
    port = 0
  }
}
directories.

dovecot-maildir
mail_location = maildir:~/Maildir
common settings with host specific settings for ssl_cert
and
Front provides the public HTTP service that serves institute web pages
at e.g. https://small.example.org/
. The small institute initially
runs with a self-signed, "snake oil" server certificate, causing
browsers to warn of possible fraud, but this certificate is easily
replaced by one signed by a recognized authority, as discussed in The
Front Role.
/WWW/live/, once they are complete and tested.
http://core/
will automatically wipe it within 15 minutes.
Core runs Nextcloud to provide a private institute cloud at
https://core.small.private/nextcloud/.  It is managed manually per
The Nextcloud Server Administration Guide. The code and data,
including especially database dumps, are stored in /Nextcloud/
which
is included in Core's backup procedure as described in Backups.  The
default Apache2 configuration expects to find the web scripts in
/var/www/nextcloud/
, so the institute symbolically links this to
/Nextcloud/nextcloud/
.
private network.
The institute's public and campus VPNs have many common configuration
options that are discussed here. These are included, with example
certificates and network addresses, in the complete server
configurations of The Front Role and The Gate Role, as well as the
matching client configurations in The Core Role and the .ovpn
files
generated by The Client Command.  The configurations are based on the
documentation for OpenVPN v2.4: the openvpn(8)
manual page and this
web page.
The institute VPNs use UDP on a subnet topology (rather than

the VPN subnets using any (experimental) protocol.

openvpn-dev-mode
dev-type tun
dev ovpn
topology subnet
client-to-client
interruptions.

openvpn-keepalive
keepalive 10 120
As mentioned in The Name Service, the institute uses a campus name
server.  OpenVPN is instructed to push its address and the campus
search domain.

openvpn-dns
push "dhcp-option DOMAIN {{ domain_priv }}"
push "dhcp-option DNS {{ core_addr }}"
device nor the key files.

openvpn-drop-priv
user nobody
group nogroup
persist-key
persist-tun
the default for OpenVPN v2.4, and

accommodating a few members with a handful of devices each.

raised from the default level 1 to level 3 (just short of a deluge).

auth is upped to SHA256.

openvpn-crypt
cipher AES-256-GCM
auth SHA256
openvpn-max
max-clients 20
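The max-clients cap of 20 is consistent with "a few members with a
handful of devices each"; for instance (the member and device counts
here are illustrative, not from the text):

```shell
# e.g. 5 members with 4 VPN-connected devices each fills the cap of 20
members=5
devices_per_member=4
clients=$((members * devices_per_member))
echo "$clients"
```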
openvpn-debug
ifconfig-pool-persist ipp.txt
status openvpn-status.log
verb 3
A small institute has just a handful of members. For simplicity (and
thus security) static configuration files are preferred over complex
account management systems, LDAP, Active Directory, and the like. The
Ansible scripts configure the same set of user accounts on Core and
Front.  The Institute Commands (e.g. ./inst new dick) capture the
processes of enrolling, modifying and retiring members of the
institute. They update the administrator's membership roll, and run
Ansible to create (and disable) accounts on Core, Front, Nextcloud,
accomplished via the campus cloud and the resulting desktop files can
all be private (readable and writable only by the owner) by default.
The institute avoids the use of the root
account (uid 0
) because
command is used to consciously (conscientiously!) run specific scripts
and programs as root
. When installation of a Debian OS leaves the
host with no user accounts, just the root
account, the next step is
to create a system administrator's account named sysadm
and to give
it permission to use the sudo
command (e.g. as described in The
Front Machine). When installation prompts for the name of an
initial, privileged user account the same name is given (e.g. as
described in The Core Machine).  Installation may not prompt and
still create an initial user account with a distribution specific name
(e.g. pi
). Any name can be used as long as it is provided as the
value of ansible_user
in hosts
. Its password is specified by a
vault-encrypted variable in the Secret/become.yml
file. (The
hosts
and Secret/become.yml
files are described in The Ansible
Configuration.)
The institute's Core uses a special account named monkey
to run
account is created on Front as well.
The institute keeps its "master secrets" in an encrypted

rsync -a Secret/ Secret2/
rsync -a Secret/ Secret3/
This is out of consideration for the fragility of USB drives, and the
importance of a certain SSH private key, without which the

the administrator's password keep, to install a new SSH key.
The small institute backs up its data, but not so much so that nothing
version 2.
Given the -n
flag, the script does a "pre-sync" which does not pause
Nextcloud nor dump its DB. A pre-sync gets the big file (video)
copies done while Nextcloud continues to run. A follow-up sudo
backup, without -n, produces the complete copy (with all the
files mentioned in the Nextcloud database dump).
private/backup
#!/bin/bash -e
#
# DO NOT EDIT.  Maintained (will be replaced) by Ansible.
#

    echo "Mounting /backup/."
    cryptsetup luksOpen /dev/disk/by-partlabel/Backup backup
    mount /dev/mapper/backup /backup
  else
    echo "Found /backup/ already mounted."
  fi

  if [ ! -d /backup/home ]

  if [ ! $presync ]
  then
    echo "Putting Nextcloud into maintenance mode."
    ( cd /Nextcloud/nextcloud/
      sudo -u www-data php occ maintenance:mode --on &>/dev/null )
    echo "Dumping Nextcloud database."
    ( cd /Nextcloud/
      umask 07
      BAK=`date +"%Y%m%d%H%M"`-dbbackup.bak.gz
      CNF=/Nextcloud/dbbackup.cnf
      mysqldump --defaults-file=$CNF nextcloud | gzip > $BAK
      chmod 440 $BAK
      ls -t1 *-dbbackup.bak.gz | tail -n +4 \
        | while read; do rm "$REPLY"; done
    )
  fi
}

  if [ ! $presync ]
  then
    echo "Putting Nextcloud back into service."
    ( cd /Nextcloud/nextcloud/
      sudo -u www-data php occ maintenance:mode --off &>/dev/null )
  fi

  if mountpoint -q /backup/
  then
    echo "Unmounting /backup/."
    umount /backup
    cryptsetup luksClose backup
    echo "Done."
    echo "The backup device can be safely disconnected."
  fi
}

start

done
finish
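The dump-rotation pipeline in the backup script (ls -t1 ... | tail -n
+4) keeps only the three newest database dumps.  A self-contained
sketch of the same idiom, run against throwaway files in a temporary
directory:

```shell
# Demonstrate keeping only the 3 newest *-dbbackup.bak.gz files (bash).
dir=$(mktemp -d)
cd "$dir"
for stamp in 202401010000 202402010000 202403010000 \
             202404010000 202405010000; do
  touch -t "$stamp" "$stamp-dbbackup.bak.gz"   # give each a distinct mtime
done
# ls -t1 lists newest first; tail -n +4 selects everything after the
# third entry, i.e. the older dumps, which are then removed.
ls -t1 *-dbbackup.bak.gz | tail -n +4 \
  | while read; do rm "$REPLY"; done
left=$(ls *-dbbackup.bak.gz | wc -l | tr -d ' ')
echo "$left"
```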
This chapter introduces Ansible variables intended to simplify
stored in separate files: public/vars.yml
a
The example settings in this document configure VirtualBox VMs as
described in the Testing chapter.  For more information about how a
small institute turns the example Ansible code into a working Ansible
configuration, see chapter The Ansible Configuration.
The small institute's domain name is used quite frequently in the
replace {{ domain_name }}
in the code with small.example.org.

public/vars.yml
---
domain_name: small.example.org
like DNS-over-HTTPS will pass us by.
private/vars.yml
---
domain_priv: small.private
The small institute uses a private Ethernet, two VPNs, and an

example result follows the code.

(let ((bytes
       (let ((i (random (+ 256 16))))
         (if (< i 256)
             (list 10 i (1+ (random 254)))
           (list 172 (+ 16 (- i 256)) (1+ (random 254)))))))
  (format "%d.%d.%d.0/24"
          (car bytes) (cadr bytes) (caddr bytes)))
=> 10.62.17.0/24
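The same picker can be sketched in bash using $RANDOM (a rough port of
the Emacs Lisp above, not from the text): choose one of 256 + 16
candidate RFC 1918 prefixes, then a random third octet.

```shell
# Pick a random /24 from 10.0.0.0/8 or 172.16.0.0/12.
i=$((RANDOM % (256 + 16)))
if [ "$i" -lt 256 ]; then
  subnet="10.$i.$((RANDOM % 254 + 1)).0/24"          # 10.0-255.x.0/24
else
  subnet="172.$((16 + i - 256)).$((RANDOM % 254 + 1)).0/24"  # 172.16-31.x.0/24
fi
echo "$subnet"
```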
code block below.  The small institute treats these addresses as
sensitive information so again the code block below "tangles" into
private/vars.yml rather than public/vars.yml.  Two of the addresses
are in 192.168 subnets because they are part of a test
configuration using mostly-default VirtualBoxes (described here).

private/vars.yml
private_net_cidr: 192.168.56.0/24
wild_net_cidr: 192.168.57.0/24
public_vpn_net_cidr: 10.177.86.0/24
campus_vpn_net_cidr: 10.84.138.0/24
e.g. _net_and_mask
rather than _net_cidr.

private/vars.yml
wild_net: "{{ wild_net_cidr | ansible.utils.ipaddr('network') }}"
wild_net_mask:
  "{{ wild_net_cidr | ansible.utils.ipaddr('netmask') }}"
wild_net_and_mask: "{{ wild_net }} {{ wild_net_mask }}"
wild_net_broadcast:
  "{{ wild_net_cidr | ansible.utils.ipaddr('broadcast') }}"
private_net:
  "{{ private_net_cidr | ansible.utils.ipaddr('network') }}"
private_net_mask:
  "{{ private_net_cidr | ansible.utils.ipaddr('netmask') }}"
private_net_and_mask: "{{ private_net }} {{ private_net_mask }}"
public_vpn_net:
  "{{ public_vpn_net_cidr | ansible.utils.ipaddr('network') }}"
public_vpn_net_mask:

campus_vpn_net_mask:
  "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('netmask') }}"
campus_vpn_net_and_mask:
  "{{ campus_vpn_net }} {{ campus_vpn_net_mask }}"
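The ipaddr filters amount to bit arithmetic on the CIDR.  A bash
sketch of what 'network', 'netmask', and 'broadcast' yield for the
test value 192.168.56.0/24 (not the filters' implementation, just the
same computation):

```shell
# Compute network, netmask, and broadcast from a CIDR in shell arithmetic.
cidr=192.168.56.0/24
ip=${cidr%/*}; bits=${cidr#*/}
IFS=. read -r a b c d <<<"$ip"
n=$(( (a << 24) | (b << 16) | (c << 8) | d ))       # address as 32-bit int
mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
net=$(( n & mask ))
bcast=$(( net | (~mask & 0xFFFFFFFF) ))
dotted() { echo "$(($1>>24&255)).$(($1>>16&255)).$(($1>>8&255)).$(($1&255))"; }
network=$(dotted "$net")
netmask=$(dotted "$mask")
broadcast=$(dotted "$bcast")
echo "$network $netmask $broadcast"
```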
the institute's Internet domain name.

virtual machines and networks, and the VirtualBox user manual uses
Finally, five host addresses are needed frequently in the Ansible
code. The first two are Core's and Gate's addresses on the private
Ethernet. The next two are Gate's and the campus Wi-Fi's addresses on
the "wild" subnet, the untrusted Ethernet (wild_net) between Gate
and the campus Wi-Fi access point(s) and IoT appliances.  The last is
Front's address on the public VPN, perversely called
front_private_addr.  The following code block picks the obvious IP
addresses for Core (host 1) and Gate (host 2).
private/vars.yml
core_addr_cidr: "{{ private_net_cidr | ansible.utils.ipaddr('1') }}"
gate_addr_cidr: "{{ private_net_cidr | ansible.utils.ipaddr('2') }}"
gate_wild_addr_cidr:
  "{{ wild_net_cidr | ansible.utils.ipaddr('1') }}"
wifi_wan_addr_cidr: "{{ wild_net_cidr | ansible.utils.ipaddr('2') }}"
front_private_addr_cidr:
  "{{ public_vpn_net_cidr | ansible.utils.ipaddr('1') }}"
core_addr: "{{ core_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_addr: "{{ gate_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_wild_addr:
  "{{ gate_wild_addr_cidr | ansible.utils.ipaddr('address') }}"
wifi_wan_addr:
  "{{ wifi_wan_addr_cidr | ansible.utils.ipaddr('address') }}"
front_private_addr:
  "{{ front_private_addr_cidr | ansible.utils.ipaddr('address') }}"
The small institute's network was built by its system administrator
using Ansible on a trusted notebook.  The Ansible configuration and
scripts were generated by "tangling" the Ansible code included here.
(The Ansible Configuration describes how to do this.)  The following
sections describe how Front, Gate and Core were prepared for Ansible.
Front is the small institute's public facing server, a virtual machine

possible to quickly re-provision a new Front machine from a frontier
Internet café using just the administrator's notebook.
The following example prepared a new front on a Digital Ocean droplet.

notebook$ ssh root@159.65.75.60
root@ubuntu#
The freshly created Digital Ocean droplet came with just one account,
root
, but the small institute avoids remote access to the "super
user" account (per the policy in The Administration Accounts), so the
administrator created a sysadm
account with the ability to request
escalated privileges via the sudo
command.
The password was generated by gpw
, saved in the administrator's
password keep, and later added to Secret/become.yml
as shown below.
(Producing a working Ansible configuration with Secret/become.yml
file is described in The Ansible Configuration.)
notebook$ ansible-vault encrypt_string givitysticangout \
notebook_ >>Secret/become.yml
After creating the sysadm
account on the droplet, the administrator
concatenated a personal public ssh key and the key found in
Secret/ssh_admin/
(created by The CA Command) into an admin_keys
file, copied it to the droplet, and installed it as the
authorized_keys
for sysadm
.
The Ansible configuration expects certain host keys on the new front.
The administrator should install them now, and deal with the machine's

sysadm@ubuntu$ logout
notebook$ ssh-keygen -f ~/.ssh/known_hosts -R 159.65.75.60
The last command removes the old host key from the administrator's
known_hosts
file. The next SSH connection should ask to confirm the
sysadm@ubuntu$ sudo head -1 /etc/shadow
root:*:18355:0:99999:7:::
After passing the above test, the administrator disabled root logins
on the droplet.  The last command below tested that root logins were

root@159.65.75.60: Permission denied (publickey).
notebook$
At this point the droplet was ready for configuration by Ansible.
Later, provisioned with all of Front's services and tested, the

address.
Core is the small institute's private file, email, cloud and whatnot
The following example prepared a new core on a PC with Debian 11
freshly installed. During installation, the machine was named core
,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm
was created (per the policy in
The Administration Accounts).
Retype new password: oingstramextedil
Is the information correct? [Y/n]
The password was generated by gpw
, saved in the administrator's
password keep, and later added to Secret/become.yml
as shown below.
(Producing a working Ansible configuration with Secret/become.yml
file is described in The Ansible Configuration.)
notebook$ ansible-vault encrypt_string oingstramextedil \
notebook_ >>Secret/become.yml
With Debian freshly installed, Core needed several additional
software packages.  The administrator temporarily plugged Core into a
cable

_ postfix dovecot-imapd fetchmail expect rsync \
_ gnupg openssh-server
The Nextcloud configuration requires Apache2, MariaDB and a number of
PHP modules.  Installing them while Core was on a cable modem sped up

_ php-{json,mysql,mbstring,intl,imagick,xml,zip} \
_ libapache2-mod-php
Similarly, the NAGIOS configuration requires a handful of packages
that were pre-loaded via cable modem (to test a frontier deployment).

$ sudo apt install nagios4 monitoring-plugins-basic lm-sensors \
_ nagios-nrpe-plugin
Next, the administrator concatenated a personal public ssh key and the
key found in Secret/ssh_admin/
(created by The CA Command) into an
admin_keys
file, copied it to Core, and installed it as the
authorized_keys
for sysadm
.
Note that the name core.lan
should be known to the cable modem's DNS
service.  An IP address might be used instead, discovered with an ip

a new, private IP address and a default route.

In the example command lines below, the address 10.227.248.1 was
generated by the random subnet address picking procedure described in
Subnets, and is named core_addr in the Ansible code.  The second
address, 10.227.248.2, is the corresponding address for Gate's
Ethernet interface, and is named gate_addr in the Ansible code.

sysadm@core$ sudo ip address add 10.227.248.1 dev enp82s0
sysadm@core$ sudo ip route add default via 10.227.248.2 dev enp82s0

At this point Core was ready for provisioning with Ansible.
Gate is the small institute's route to the Internet, and the campus

interfaces.

lan is its main Ethernet interface, connected to the campus's
private Ethernet switch.

wild is its second Ethernet interface, connected to the
untrusted network of campus IoT appliances and Wi-Fi access
point(s).

isp is its third network interface, connected to the campus
ISP.  This could be an Ethernet device connected to a cable
modem.  It could be a USB port tethered to a phone, a
USB-Ethernet adapter, or a wireless adapter connected to a
campground Wi-Fi access point, etc.

[Diagram: Premises, Campus ISP, Gate, Ethernet switch]
While Gate and Core really need to be separate machines for security

This avoids the need for a second Wi-Fi access point and leads to the
following topology.
[Diagram: Premises, House ISP, Gate, Ethernet switch]

In this case Gate has two interfaces and there is no wild subnet
other than the Internets themselves.
its Ethernet and Wi-Fi clients are allowed to communicate).
The Ansible code in this document is somewhat dependent on the
physical network shown in the Overview wherein Gate has three network
interfaces.
The following example prepared a new gate on a PC with Debian 11
freshly installed.  During installation, the machine was named gate
,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm
was created (per the policy in
The Administration Accounts).
Retype new password: icismassssadestm
Is the information correct? [Y/n]
The password was generated by gpw
, saved in the administrator's
password keep, and later added to Secret/become.yml
as shown below.
(Producing a working Ansible configuration with Secret/become.yml
file is described in The Ansible Configuration.)
notebook$ ansible-vault encrypt_string icismassssadestm \
notebook_ >>Secret/become.yml
With Debian freshly installed, Gate needed a couple of additional
software packages.  The administrator temporarily plugged Gate into a

_ ufw isc-dhcp-server postfix openvpn \
_ openssh-server
Next, the administrator concatenated a personal public ssh key and the
key found in Secret/ssh_admin/
(created by The CA Command) into an
admin_keys
file, copied it to Gate, and installed it as the
authorized_keys
for sysadm
.
Note that the name gate.lan
should be known to the cable modem's DNS
service.  An IP address might be used instead, discovered with an ip

a new, private IP address.

In the example command lines below, the address 10.227.248.2 was
generated by the random subnet address picking procedure described in
Subnets, and is named gate_addr in the Ansible code.

$ sudo ip address add 10.227.248.2 dev eth0

Gate was also connected to the USB Ethernet dongles cabled to the
campus Wi-Fi access point and the campus ISP.  The three network
adapters are known by their MAC addresses, the values of the variables
gate_lan_mac, gate_wild_mac, and gate_isp_mac.  (For more
information, see the Gate role's Configure Netplan task.)

At this point Gate was ready for provisioning with Ansible.
The all
role contains tasks that are executed on all of the
institute's servers. At the moment there is just the one.
The all
role's task contains a reference to a common institute
particular, the institute's domain_name
, a variable found in the
public/vars.yml
file. Thus the first task of the all
role is to
include the variables defined in this file (described in The
Particulars). The code block below is the first to tangle into
roles/all/tasks/main.yml
.
roles/all/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml
  tags: accounts
The systemd-networkd
and systemd-resolved
service units are not
follows these recommendations (and not the suggestion to enable
roles_t/all/tasks/main.yml
- name: Install systemd-resolved.
  become: yes
  apt: pkg=systemd-resolved
  when:
  - ansible_distribution == 'Debian'
  - 12 > ansible_distribution_major_version|int
All servers should recognize the institute's Certificate Authority as
trustworthy, so its certificate is added to the set of trusted CAs on
each host.  More information about how the small institute manages its
X.509 certificates is available in Keys.
roles_t/all/tasks/main.yml
- name: Trust the institute CA.
  become: yes
  copy:

    owner: root
    group: root
  notify: Update CAs.
roles_t/all/handlers/main.yml
- name: Update CAs.
  become: yes
  command: update-ca-certificates
The front
role installs and configures the services expected on the
institute's publicly accessible "front door": email, web, VPN. The
virtual machine is prepared with an Ubuntu Server install and remote
access to a privileged, administrator's account. (For details, see
The Front Machine.)
uses the institute's CA and server certificates, and expects client
certificates signed by the institute CA.
The first task, as in The All Role, is to include the institute
particulars. The front
role refers to private variables and the
membership roll, so these are included as well.
roles/front/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml
  tags: accounts

- name: Include members.
  include_vars: "{{ lookup('first_found', membership_rolls) }}"
  tags: accounts
This task ensures that Front's /etc/hostname
and /etc/mailname
are
delivery.
roles_t/front/tasks/main.yml
- name: Configure hostname.
  become: yes
  copy:
    content: "{{ domain_name }}\n"

    - /etc/hostname
    - /etc/mailname
  notify: Update hostname.
roles_t/front/handlers/main.yml
---
- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
The administrator often needs to read (directories of) log files owned

these groups speeds up debugging.
roles_t/front/tasks/main.yml
- name: Add {{ ansible_user }} to system groups.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: root,adm
The SSH service on Front needs to be known to Monkey. The following
those stored in Secret/ssh_front/etc/ssh/
roles_t/front/tasks/main.yml
- name: Install SSH host keys.
  become: yes
  copy:

  - { name: ssh_host_rsa_key, mode: "u=rw,g=,o=" }
  - { name: ssh_host_rsa_key.pub, mode: "u=rw,g=r,o=r" }
  notify: Reload SSH server.
roles_t/front/handlers/main.yml
- name: Reload SSH server.
  become: yes
  systemd:
    service: ssh
    state: reloaded
The small institute runs cron jobs and web scripts that generate
reports and perform checks.  The un-privileged jobs are run by a
system account named monkey
. One of Monkey's more important jobs on
Core is to run rsync
to update the public web site on Front. Monkey
on Core will login as monkey
on Front to synchronize the files (as
described in *Configure Apache2).  To do that without needing a
password, the monkey
account on Front should authorize Monkey's SSH
key on Core.
roles_t/front/tasks/main.yml
- name: Create monkey.
  become: yes
  user:

    name: "{{ ansible_user }}"
    append: yes
    groups: monkey
Monkey uses Rsync to keep the institute's public web site up-to-date.
roles_t/front/tasks/main.yml
- name: Install rsync.
  become: yes
  apt: pkg=rsync
The institute prefers to install security updates as soon as possible.
roles_t/front/tasks/main.yml
- name: Install basic software.
  become: yes
  apt: pkg=unattended-upgrades
User accounts are created immediately so that Postfix and Dovecot can
start delivering email immediately, without returning "no such
recipient" replies.  The Account Management chapter describes the
members
and usernames
variables used below.
roles_t/front/tasks/main.yml
- name: Create user accounts.
  become: yes
  user:

  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts
The servers on Front use the same certificate (and key) to
readable by root.
roles_t/front/tasks/main.yml
- name: Install server certificate/key.
  become: yes
  copy:

  notify:
  - Restart Postfix.
  - Restart Dovecot.
Front uses Postfix to provide the institute's public SMTP service, and

The appropriate answers are listed here but will be checked

As discussed in The Email Service above, Front's Postfix configuration
includes site-wide support for larger message sizes, shorter queue
times, the relaying configuration, and the common path to incoming
emails.  These and a few Front-specific Postfix configurations

relays messages from the campus.
postfix-front-networks
- p: mynetworks +postfix-front-networks
+- p: mynetworks v: >- {{ public_vpn_net_cidr }} 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 -
@@ -2120,13 +2096,13 @@ difficult for internal hosts, who do not have (public) domain names.
postfix-front-restrictions
- p: smtpd_recipient_restrictions +postfix-front-restrictions
+- p: smtpd_recipient_restrictions v: >- permit_mynetworks reject_unauth_pipelining reject_unauth_destination reject_unknown_sender_domain -
@@ -2141,15 +2117,15 @@ messages; incoming messages are delivered locally, without
postfix-header-checks
- p: smtp_header_checks +postfix-header-checks
+- p: smtp_header_checks v: regexp:/etc/postfix/header_checks.cf -
postfix-header-checks-content
/^Received:/ IGNORE +postfix-header-checks-content
+/^Received:/ IGNORE /^User-Agent:/ IGNORE -
@@ -2159,7 +2135,7 @@ Debian default for inet_interfaces
.
postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt } +postfix-front
+- { p: smtpd_tls_cert_file, v: /etc/server.crt } - { p: smtpd_tls_key_file, v: /etc/server.key } <<postfix-front-networks>> <<postfix-front-restrictions>> @@ -2168,7 +2144,7 @@ Debian default for
inet_interfaces
. <<postfix-queue-times>> <<postfix-maildir>> <<postfix-header-checks>> -
@@ -2178,7 +2154,7 @@ start and enable the service.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Install Postfix. become: yes apt: pkg=postfix @@ -2207,11 +2183,11 @@ start and enable the service. service: postfix enabled: yes state: started -
roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml+- name: Restart Postfix. become: yes systemd: @@ -2224,12 +2200,12 @@ start and enable the service. chdir: /etc/postfix/ cmd: postmap header_checks.cf notify: Restart Postfix. -
The institute's Front needs to deliver email addressed to a number of @@ -2246,7 +2222,7 @@ created by a more specialized role.
roles_t/front/tasks/main.yml
- name: Install institute email aliases. +roles_t/front/tasks/main.yml+- name: Install institute email aliases. become: yes blockinfile: block: | @@ -2258,20 +2234,20 @@ created by a more specialized role. path: /etc/aliases marker: "# {mark} INSTITUTE MANAGED BLOCK" notify: New aliases. -
roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml+- name: New aliases. become: yes command: newaliases -
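The contents of the managed block are elided in the hunk above. As a hypothetical illustration only (the alias names are assumptions), the block installed in `/etc/aliases` might read:

```
# BEGIN INSTITUTE MANAGED BLOCK  (illustrative sketch; the actual
# aliases come from the task's block: | content, not shown here)
abuse:          root
webmaster:      root
admin:          root
# END INSTITUTE MANAGED BLOCK
```

After the block changes, the `New aliases.` handler rebuilds the alias database with `newaliases`.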
Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to @@ -2280,7 +2256,7 @@ default with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core accesses Front via VPN, but helps to ensure privacy even when members must, in extremis, access recent email directly from their accounts on Front. For more information -about Front's role in the institute's email services, see The Email +about Front's role in the institute's email services, see The Email Service.
@@ -2299,7 +2275,7 @@ and enables it to start at every reboot.roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Install Dovecot IMAPd. become: yes apt: pkg=dovecot-imapd @@ -2322,22 +2298,22 @@ and enables it to start at every reboot. service: dovecot enabled: yes state: started -
roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml+- name: Restart Dovecot. become: yes systemd: service: dovecot state: restarted -
This is the small institute's public web site. It is simple, static, @@ -2373,7 +2349,7 @@ taken from https://www
apache-ciphers
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 +apache-ciphers
+SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 SSLHonorCipherOrder on SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256', 'ECDHE-ECDSA-AES256-GCM-SHA384', @@ -2403,7 +2379,7 @@ SSLHonorCipherOrder on '!SRP', '!DSS', '!RC4' ] |join(":") }} -
@@ -2428,12 +2404,12 @@ used on all of the institute's web sites.
apache-userdir-front
UserDir /home/www-users +apache-userdir-front
+UserDir /home/www-users <Directory /home/www-users/> Require all granted AllowOverride None </Directory> -
@@ -2443,10 +2419,10 @@ HTTPS URLs.
apache-redirect-front
<VirtualHost *:80> +apache-redirect-front
+<VirtualHost *:80> Redirect permanent / https://{{ domain_name }}/ </VirtualHost> -
@@ -2468,7 +2444,7 @@ the inside of a VirtualHost
block. They should apply globally.
apache-front
ServerName {{ domain_name }} +apache-front
+ServerName {{ domain_name }} ServerAdmin webmaster@{{ domain_name }} DocumentRoot /home/www @@ -2493,7 +2469,7 @@ CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> <<apache-ciphers>> -
@@ -2503,7 +2479,7 @@ e.g. /etc/apache2/sites-available/small.example.org.conf
and runs
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Install Apache2. become: yes apt: pkg=apache2 @@ -2544,17 +2520,17 @@ e.g.
/etc/apache2/sites-available/small.example.org.confand runs service: apache2 enabled: yes state: started -
roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml+- name: Restart Apache2. become: yes systemd: service: apache2 state: restarted -
@@ -2563,7 +2539,7 @@ that it does not interfere with its replacement.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Disable default vhosts. become: yes file: @@ -2571,7 +2547,7 @@ that it does not interfere with its replacement. state: absent loop: [ 000-default.conf, default-ssl.conf ] notify: Restart Apache2. -
@@ -2581,14 +2557,14 @@ same records as access.log
.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Disable other-vhosts-access-log option. become: yes file: path: /etc/apache2/conf-enabled/other-vhosts-access-log.conf state: absent notify: Restart Apache2. -
@@ -2597,7 +2573,7 @@ the users' ~/Public/HTML/
directories.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Create UserDir. become: yes file: @@ -2623,12 +2599,12 @@ the users'
~/Public/HTML/directories. loop: "{{ usernames }}" when: members[item].status != 'current' tags: accounts -
Front uses OpenVPN to provide the institute's public VPN service. The @@ -2641,9 +2617,9 @@ route packets for the campus networks to Core.
openvpn-ccd-core
iroute {{ private_net_and_mask }} +openvpn-ccd-core
+iroute {{ private_net_and_mask }} iroute {{ campus_vpn_net_and_mask }} -
@@ -2658,21 +2634,21 @@ through some ISP, and thus needs the same routes as the clients.
openvpn-front-routes
route {{ private_net_and_mask }} +openvpn-front-routes
+route {{ private_net_and_mask }} route {{ campus_vpn_net_and_mask }} push "route {{ private_net_and_mask }}" push "route {{ campus_vpn_net_and_mask }}" -
The complete OpenVPN configuration for Front includes a server
option, the client-config-dir
option, the routes mentioned above,
-and the common options discussed in The VPN Service.
+and the common options discussed in The VPN Service.
openvpn-front
server {{ public_vpn_net_and_mask }} +openvpn-front
+tls-crypt shared.key +server {{ public_vpn_net_and_mask }} client-config-dir /etc/openvpn/ccd <<openvpn-front-routes>> <<openvpn-dev-mode>> @@ -2686,8 +2662,8 @@ ca /usr/local/share/ca-certificates/{{ domain_name }}.crt cert server.crt key server.key dh dh2048.pem -tls-auth ta.key 0 -
@@ -2696,7 +2672,7 @@ configure the OpenVPN server on Front.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Install OpenVPN. become: yes apt: pkg=openvpn @@ -2752,7 +2728,7 @@ configure the OpenVPN server on Front. mode: u=r,g=,o= loop: - { src: front-dh2048.pem, dest: dh2048.pem } - - { src: front-ta.key, dest: ta.key } + - { src: front-shared.key, dest: shared.key } notify: Restart OpenVPN. - name: Configure OpenVPN. @@ -2770,22 +2746,22 @@ configure the OpenVPN server on Front. service: openvpn@server enabled: yes state: started -
roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml+- name: Restart OpenVPN. become: yes systemd: service: openvpn@server state: restarted -
Front uses Kamailio to provide a SIP service on the public VPN so that
@@ -2807,8 +2783,8 @@ specifies the actual IP, known here as front_private_addr
.
kamailio
listen=udp:{{ front_private_addr }}:5060
-
+kamailio
listen=udp:{{ front_private_addr }}:5060
+
@@ -2823,11 +2799,11 @@ The first step is to install Kamailio.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Install Kamailio. become: yes apt: pkg=kamailio -
@@ -2838,7 +2814,7 @@ be started before the tun
device has appeared.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Create Kamailio/Systemd configuration drop. become: yes file: @@ -2854,16 +2830,16 @@ be started before the
tun
device has appeared. After=sys-devices-virtual-net-ovpn.device dest: /etc/systemd/system/kamailio.service.d/depend.conf notify: Reload Systemd. -
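Reassembling the fragments in the hunk, the drop-in adds an `After=` ordering dependency on the tunnel device. A sketch of the resulting file (the path comes from the task's `dest:`; the `[Unit]` section header is assumed):

```
# /etc/systemd/system/kamailio.service.d/depend.conf (sketch)
[Unit]
After=sys-devices-virtual-net-ovpn.device
```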
roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml+- name: Reload Systemd. become: yes systemd: daemon-reload: yes -
@@ -2871,7 +2847,7 @@ Finally, Kamailio can be configured and started.
roles_t/front/tasks/main.yml
+roles_t/front/tasks/main.yml+- name: Configure Kamailio. become: yes copy: @@ -2886,42 +2862,42 @@ Finally, Kamailio can be configured and started. service: kamailio enabled: yes state: started -
roles_t/front/handlers/main.yml
+roles_t/front/handlers/main.yml+- name: Restart Kamailio. become: yes systemd: service: kamailio state: restarted -
The core
role configures many essential campus network services as
well as the institute's private cloud, so the core machine has
horsepower (CPUs and RAM) and large disks and is prepared with a
Debian install and remote access to a privileged, administrator's
-account. (For details, see The Core Machine.)
+account. (For details, see The Core Machine.)
-The first task, as in The Front Role, is to include the institute +The first task, as in The Front Role, is to include the institute particulars and membership roll.
roles_t/core/tasks/main.yml
--- +roles_t/core/tasks/main.yml+--- - name: Include public variables. include_vars: ../public/vars.yml tags: accounts @@ -2931,12 +2907,12 @@ particulars and membership roll. - name: Include members. include_vars: "{{ lookup('first_found', membership_rolls) }}" tags: accounts -
This task ensures that Core's /etc/hostname
and /etc/mailname
are
@@ -2947,7 +2923,7 @@ proper email delivery.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Configure hostname. become: yes copy: @@ -2957,20 +2933,20 @@ proper email delivery. - { name: "core.{{ domain_priv }}", file: /etc/mailname } - { name: "{{ inventory_hostname }}", file: /etc/hostname } notify: Update hostname. -
roles_t/core/handlers/main.yml
--- +roles_t/core/handlers/main.yml+--- - name: Update hostname. become: yes command: hostname -F /etc/hostname -
Core runs the campus name server, so Resolved is configured to use it @@ -2979,7 +2955,7 @@ list, and to disable its cache and stub listener.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Configure resolved. become: yes lineinfile: @@ -2995,11 +2971,11 @@ list, and to disable its cache and stub listener. notify: - Reload Systemd. - Restart Systemd resolved. -
roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml+- name: Reload Systemd. become: yes systemd: @@ -3010,12 +2986,12 @@ list, and to disable its cache and stub listener. systemd: service: systemd-resolved state: restarted -
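A sketch of the Resolved settings this produces; the DNS address and search domain are assumptions based on Core's address (`192.168.56.1`) and the `small.private` domain used elsewhere in this document:

```
# /etc/systemd/resolved.conf (sketch of the lineinfile edits)
[Resolve]
DNS=192.168.56.1
Domains=small.private
Cache=no
DNSStubListener=no
```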
Core's network interface is statically configured using Netplan and an @@ -3035,12 +3011,12 @@ fact was an empty hash at first boot on a simulated campus Ethernet.)
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install netplan. become: yes apt: pkg=netplan.io @@ -3062,20 +3038,20 @@ fact was an empty hash at first boot on a simulated campus Ethernet.) dest: /etc/netplan/60-core.yaml mode: u=rw,g=r,o= notify: Apply netplan. -
roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml+- name: Apply netplan. become: yes command: netplan apply -
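For orientation, the installed Netplan file plausibly contains something like the following; the interface name `enp0s3` and the `networkd` renderer are assumptions based on the VirtualBox test machines described later, not values from the patch:

```yaml
# Sketch of /etc/netplan/60-core.yaml (interface name assumed)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      addresses: [ 192.168.56.1/24 ]
```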
Core speaks DHCP (Dynamic Host Configuration Protocol) using the
@@ -3090,12 +3066,13 @@ The example configuration file, private/cor
RFC3442's extension to encode a second (non-default) static route.
The default route is through the campus ISP at Gate. A second route
directs campus traffic to the Front VPN through Core. This is just an
-example file. The administrator adds and removes actual machines from
-the actual
private/core-dhcpd.conf
file.
+example file, with MAC addresses chosen to (probably?) match
+VirtualBox test machines. In actual use, private/core-dhcpd.conf
+refers to a replacement file supplied by the administrator.
private/core-dhcpd.conf
option domain-name "small.private"; +private/core-dhcpd.conf+option domain-name "small.private"; option domain-name-servers 192.168.56.1; default-lease-time 3600; @@ -3123,16 +3100,16 @@ log-facility daemon; hardware ethernet 08:00:27:e0:79:ab; fixed-address 192.168.56.2; } host server { hardware ethernet 08:00:27:f3:41:66; fixed-address 192.168.56.3; } -
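The hunk does not show the RFC 3442 option itself. In ISC dhcpd the option is conventionally declared and populated as below; this is a sketch, and the second route's destination subnet is a placeholder, not a value from this document:

```
# Declare DHCP option 121 (RFC 3442 classless static routes).
option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;

# Encoding: prefix length, significant destination octets, gateway.
# A default route (prefix 0) via Gate, then a /24 (placeholder
# subnet 10.0.0.0/24 standing in for the public VPN) via Core:
option rfc3442-classless-static-routes
    0,           192,168,56,2,
    24, 10,0,0,  192,168,56,1;
```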
-The following tasks install the ISC's DHCP server and configure it
-with the real private/core-dhcpd.conf
(not the example above).
+The following tasks install ISC's DHCP server and configure it with
+the real private/core-dhcpd.conf
(not the example above).
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install DHCP server. become: yes apt: pkg=isc-dhcp-server @@ -3158,26 +3135,26 @@ with the real
private/core-dhcpd.confservice: isc-dhcp-server enabled: yes state: started -
roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml+- name: Restart DHCP server. become: yes systemd: service: isc-dhcp-server state: restarted -
Core uses BIND9 to provide name service for the institute as described -in The Name Service. The configuration supports reverse name lookups, +in The Name Service. The configuration supports reverse name lookups, resolving many private network addresses to private domain names.
@@ -3186,7 +3163,7 @@ The following tasks install and configure BIND9 on Core.roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install BIND9. become: yes apt: pkg=bind9 @@ -3221,17 +3198,17 @@ The following tasks install and configure BIND9 on Core. service: bind9 enabled: yes state: started -
roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml+- name: Reload BIND9. become: yes systemd: service: bind9 state: reloaded -
@@ -3242,11 +3219,11 @@ probably be used as forwarders rather than Google.
bind-options
acl "trusted" { +bind-options
+acl "trusted" { {{ private_net_cidr }}; + {{ wild_net_cidr }}; {{ public_vpn_net_cidr }}; {{ campus_vpn_net_cidr }}; - {{ gate_wifi_net_cidr }}; localhost; }; @@ -3267,11 +3244,11 @@ probably be used as forwarders rather than Google. localhost; }; }; -
bind-local
include "/etc/bind/zones.rfc1918"; +bind-local
+include "/etc/bind/zones.rfc1918"; zone "{{ domain_priv }}." { type master; @@ -3295,11 +3272,11 @@ probably be used as forwarders rather than Google. type master; file "/etc/bind/db.campus_vpn"; }; -
private/db.domain
; +private/db.domain+; ; BIND data file for a small institute's PRIVATE domain names. ; $TTL 604800 @@ -3323,11 +3300,11 @@ probably be used as forwarders rather than Google. ; core IN A 192.168.56.1 gate IN A 192.168.56.2 -
private/db.private
; +private/db.private+; ; BIND reverse data file for a small institute's private Ethernet. ; $TTL 604800 @@ -3342,11 +3319,11 @@ probably be used as forwarders rather than Google. $TTL 7200 1 IN PTR core.small.private. 2 IN PTR gate.small.private. -
private/db.public_vpn
; +private/db.public_vpn+; ; BIND reverse data file for a small institute's public VPN. ; $TTL 604800 @@ -3361,11 +3338,11 @@ probably be used as forwarders rather than Google. $TTL 7200 1 IN PTR front-p.small.private. 2 IN PTR core-p.small.private. -
private/db.campus_vpn
; +private/db.campus_vpn+; ; BIND reverse data file for a small institute's campus VPN. ; $TTL 604800 @@ -3379,12 +3356,12 @@ probably be used as forwarders rather than Google. @ IN NS core.small.private. $TTL 7200 1 IN PTR gate-c.small.private. -
The administrator often needs to read (directories of) log files owned @@ -3393,30 +3370,30 @@ these groups speeds up debugging.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Add {{ ansible_user }} to system groups. become: yes user: name: "{{ ansible_user }}" append: yes groups: root,adm -
The small institute runs cron jobs and web scripts that generate
reports and perform checks. The un-privileged jobs are run by a
system account named monkey
. One of Monkey's more important jobs on
Core is to run rsync
to update the public web site on Front (as
-described in *Configure Apache2).
+described in *Configure Apache2).
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Create monkey. become: yes user: @@ -3468,54 +3445,54 @@ described in *Configure Apache2). owner: monkey group: monkey mode: "u=rw,g=,o=" -
The institute prefers to install security updates as soon as possible.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install basic software. become: yes apt: pkg=unattended-upgrades -
-The expect
program is used by The Institute Commands to interact
+The expect
program is used by The Institute Commands to interact
with Nextcloud on the command line.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install expect. become: yes apt: pkg=expect -
User accounts are created immediately so that backups can begin
-restoring as soon as possible. The Account Management chapter
+restoring as soon as possible. The Account Management chapter
describes the members
and usernames
variables.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Create user accounts. become: yes user: @@ -3544,12 +3521,12 @@ describes the
members
andusernames
variables. loop: "{{ usernames }}" when: members[item].status != 'current' tags: accounts -
The servers on Core use the same certificate (and key) to authenticate
@@ -3558,7 +3535,7 @@ themselves to institute clients. They share the /etc/server.crt
and
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install server certificate/key. become: yes copy: @@ -3574,12 +3551,12 @@ themselves to institute clients. They share the
/etc/server.crtand - Restart Postfix. - Restart Dovecot. - Restart OpenVPN. -
Core uses NTP to provide a time synchronization service to the campus. @@ -3587,16 +3564,16 @@ The default daemon's default configuration is fine.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install NTP. become: yes apt: pkg=ntp -
Core uses Postfix to provide SMTP service to the campus. The default @@ -3612,7 +3589,7 @@ The appropriate answers are listed here but will be checked
-As discussed in The Email Service above, Core delivers email addressed +As discussed in The Email Service above, Core delivers email addressed to any internal domain name locally, and uses its smarthost Front to relay the rest. Core is reachable only on institute networks, so there is little benefit in enabling TLS, but it does need to handle @@ -3625,7 +3602,7 @@ Core relays messages from any institute network.
postfix-core-networks
- p: mynetworks +postfix-core-networks
+- p: mynetworks v: >- {{ private_net_cidr }} {{ public_vpn_net_cidr }} @@ -3633,7 +3610,7 @@ Core relays messages from any institute network. 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 -
@@ -3641,8 +3618,8 @@ Core uses Front to relay messages to the Internet.
postfix-core-relayhost
- { p: relayhost, v: "[{{ front_private_addr }}]" }
-
+postfix-core-relayhost
- { p: relayhost, v: "[{{ front_private_addr }}]" }
+
@@ -3653,9 +3630,9 @@ file.
postfix-transport
.{{ domain_name }} local:$myhostname +postfix-transport
+.{{ domain_name }} local:$myhostname .{{ domain_priv }} local:$myhostname -
@@ -3664,7 +3641,7 @@ The complete list of Core's Postfix settings for
postfix-core
<<postfix-relaying>> +postfix-core
+<<postfix-relaying>> - { p: smtpd_tls_security_level, v: none } - { p: smtp_tls_security_level, v: none } <<postfix-message-size>> @@ -3673,7 +3650,7 @@ The complete list of Core's Postfix settings for <<postfix-core-networks>> <<postfix-core-relayhost>> - { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" } -
@@ -3684,7 +3661,7 @@ enable the service. Whenever /etc/postfix/transport
is changed, the
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install Postfix. become: yes apt: pkg=postfix @@ -3714,11 +3691,11 @@ enable the service. Whenever
/etc/postfix/transportis changed, the service: postfix enabled: yes state: started -
roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml+- name: Restart Postfix. become: yes systemd: @@ -3731,12 +3708,12 @@ enable the service. Whenever
/etc/postfix/transportis changed, the chdir: /etc/postfix/ cmd: postmap transport notify: Restart Postfix. -
The institute's Core needs to deliver email addressed to institute @@ -3748,7 +3725,7 @@ installed by more specialized roles.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install institute email aliases. become: yes blockinfile: @@ -3761,20 +3738,20 @@ installed by more specialized roles. path: /etc/aliases marker: "# {mark} INSTITUTE MANAGED BLOCK" notify: New aliases. -
roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml+- name: New aliases. become: yes command: newaliases -
Core uses Dovecot's IMAPd to store and serve member emails. As on @@ -3784,7 +3761,7 @@ top" given that Core is only accessed from private (encrypted) networks, but helps to ensure privacy even when members accidentally attempt connections from outside the private networks. For more information about Core's role in the institute's email services, see -The Email Service. +The Email Service.
@@ -3792,7 +3769,7 @@ The institute follows the recommendation in the package
README.Debian
(in /usr/share/dovecot-core/
) but replaces the
default "snake oil" certificate with another, signed by the institute.
(For more information about the institute's X.509 certificates, see
-Keys.)
+Keys.)
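Following the README.Debian recommendation, the local override presumably amounts to pointing Dovecot at the institute-signed certificate. A sketch (the exact conf.d file name is an assumption; the `/etc/server.*` paths come from the server certificate/key task above):

```
# Sketch of a local Dovecot override, e.g. under /etc/dovecot/conf.d/.
# The leading "<" tells Dovecot to read the value from the named file.
ssl = required
ssl_cert = </etc/server.crt
ssl_key = </etc/server.key
```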
@@ -3802,7 +3779,7 @@ and enables it to start at every reboot.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install Dovecot IMAPd. become: yes apt: pkg=dovecot-imapd @@ -3824,22 +3801,22 @@ and enables it to start at every reboot. service: dovecot enabled: yes state: started -
roles_t/core/handlers/main.yml
+roles_t/core/handlers/main.yml+- name: Restart Dovecot. become: yes systemd: service: dovecot state: restarted -
Core runs a fetchmail
for each member of the institute. Individual
@@ -3856,7 +3833,7 @@ the username. The template is only used when the record has a
fetchmail-config
# Permissions on this file may be no greater than 0600. +fetchmail-config
+# Permissions on this file may be no greater than 0600. set no bouncemail set no spambounce @@ -3867,7 +3844,7 @@ poll {{ front_private_addr }} protocol imap timeout 15 username {{ item }} password "{{ members[item].password_fetchmail }}" fetchall ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }} -
@@ -3875,7 +3852,7 @@ The Systemd service description.
fetchmail-service
[Unit] +fetchmail-service
+[Unit] Description=Fetchmail --idle task for {{ item }}. AssertPathExists=/home/{{ item }}/.fetchmailrc After=openvpn@front.service @@ -3890,7 +3867,7 @@ The Systemd service description. [Install] WantedBy=default.target -
@@ -3903,7 +3880,7 @@ provided the Core service.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Install fetchmail. become: yes apt: pkg=fetchmail @@ -3946,7 +3923,7 @@ provided the Core service. - members[item].status == 'current' - members[item].password_fetchmail is defined tags: accounts -
@@ -3955,7 +3932,7 @@ stopped and disabled from restarting at boot, deleted even.
roles_t/core/tasks/main.yml
+roles_t/core/tasks/main.yml+- name: Stop former user fetchmail services. become: yes systemd: @@ -3967,7 +3944,7 @@ stopped and disabled from restarting at boot, deleted even. - members[item].status != 'current' - members[item].password_fetchmail is defined tags: accounts -
@@ -3977,7 +3954,7 @@ Otherwise the following task might be appropriate.
++- name: Delete former user fetchmail services. become: yes file: @@ -3988,16 +3965,16 @@ Otherwise the following task might be appropriate. - members[item].status != 'current' - members[item].password_fetchmail is defined tags: accounts -
This is the small institute's campus web server. It hosts several web -sites as described in The Web Services. +sites as described in The Web Services.
enp0s9 |
vboxnet1 |
-campus wireless | -gate_wifi_mac |
+campus IoT | +gate_wild_mac |
sudo ip address add 192.168.56.2/24 dev enp0s3 -+
sudo ip address add 192.168.56.2/24 dev enp0s3
+
Finally, the administrator authorizes remote access by following the -instructions in the final section: Ansible Test Authorization. +instructions in the final section: Ansible Test Authorization.
The core
machine is created with 1GiB of RAM and 6GiB of disk.
@@ -8216,21 +8205,21 @@ created with following commands.
NAME=core ++NAME=core RAM=2048 DISK=6144 create_vm -
-After Debian is installed (as detailed in A Test Machine) and the +After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator logs in and installs several additional software packages.
sudo apt install netplan.io systemd-resolved unattended-upgrades \ ++sudo apt install netplan.io systemd-resolved unattended-upgrades \ ntp isc-dhcp-server bind9 apache2 openvpn \ postfix dovecot-imapd fetchmail expect rsync \ gnupg @@ -8239,7 +8228,7 @@ sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\ nagios-nrpe-plugin -
@@ -8252,6 +8241,12 @@ defaults, listed below, are fine.
+Note that domain name resolution may be broken after installing
+Note that domain name resolution may be broken after installing
+systemd-resolved
. A reboot is often needed after the first apt
+install
command above.
+
Before shutting down, the name of the primary Ethernet interface should be compared to the example variable setting in @@ -8267,8 +8262,8 @@ Ethernet.
VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0 -+
VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
+
@@ -8278,18 +8273,18 @@ Netplan soon.)
sudo ip address add 192.168.56.1/24 dev enp0s3 -+
sudo ip address add 192.168.56.1/24 dev enp0s3
+
Finally, the administrator authorizes remote access by following the -instructions in the next section: Ansible Test Authorization. +instructions in the next section: Ansible Test Authorization.
To authorize Ansible's access to the three test machines, they must @@ -8299,11 +8294,11 @@ key to each test machine.
SRC=Secret/ssh_admin/id_rsa.pub ++SRC=Secret/ssh_admin/id_rsa.pub scp $SRC sysadm@192.168.57.3:admin_key # Front scp $SRC sysadm@192.168.56.2:admin_key # Gate scp $SRC sysadm@192.168.56.1:admin_key # Core -
@@ -8313,8 +8308,8 @@ each machine).
( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys ) -+
( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
+
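The `umask 077` in the subshell is what makes this one-liner safe: it forces the new `.ssh` directory to mode 700, which OpenSSH requires before it will trust `authorized_keys`. A quick local demonstration of the effect:

```shell
# With umask 077, mkdir's default mode 777 is masked down to 700,
# so only the owner can enter or list the directory.
d=$(mktemp -d)
( cd "$d"; umask 077; mkdir .ssh )
stat -c %a "$d/.ssh"   # prints: 700
rm -rf "$d"
```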
@@ -8330,8 +8325,8 @@ command.
scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@192.168.57.3: -+
scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@192.168.57.3:
+
@@ -8339,10 +8334,10 @@ Then they are installed with these commands.
chmod 600 ssh_host_* ++chmod 600 ssh_host_* chmod 644 ssh_host_*.pub sudo cp -b ssh_host_* /etc/ssh/ -
@@ -8355,8 +8350,8 @@ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.57.3
At this point the three test machines core
, gate
, and front
are
@@ -8374,8 +8369,8 @@ not.
At this point the test institute is just core
, gate
and front
,
@@ -8385,8 +8380,8 @@ with 0 failed units.
systemctl status -+
systemctl status
+
@@ -8396,9 +8391,9 @@ forwarding (and NATing). On core
(and gate
):
ping -c 1 8.8.4.4 # dns.google ++ping -c 1 8.8.4.4 # dns.google ping -c 1 192.168.15.5 # front_addr -
@@ -8408,10 +8403,10 @@ names yet.) On core
(and gate
):
host dns.google ++host dns.google host core.small.private host www -
@@ -8420,10 +8415,10 @@ administrator's account. On core
, gate
and fron
/sbin/sendmail root ++/sbin/sendmail root Testing email to root. . -
@@ -8437,12 +8432,12 @@ instant attention).
Further tests involve Nextcloud account management. Nextcloud is
-installed on core
as described in Configure Nextcloud. Once
+installed on core
as described in Configure Nextcloud. Once
/Nextcloud/
is created, ./inst config core
will validate
or update its configuration files.
./inst new
command and issuing client VPN keys with the
A member must be enrolled so that a member's client machine can be
@@ -8476,8 +8471,8 @@ named dick
, as is his notebook.
./inst new dick -+
./inst new dick
+
@@ -8485,8 +8480,8 @@ Take note of Dick's initial password.
A test member's notebook is created next, much like the servers, @@ -8497,13 +8492,13 @@ desktop VPN client and web browser test the OpenVPN configurations on
NAME=dick ++NAME=dick RAM=2048 DISK=8192 create_vm VBoxManage modifyvm $NAME --macaddress1 080027dc54b5 VBoxManage modifyvm $NAME --nic1 hostonly --hostonlyadapter1 vboxnet1 -
@@ -8514,7 +8509,7 @@ behind) the access point.
-Debian is installed much as detailed in A Test Machine except that +Debian is installed much as detailed in A Test Machine except that the SSH server option is not needed and the GNOME desktop option is. When the machine reboots, the administrator logs into the desktop and installs a couple additional software packages (which @@ -8522,15 +8517,15 @@ require several more).
sudo apt install network-manager-openvpn-gnome \ ++sudo apt install network-manager-openvpn-gnome \ openvpn-systemd-resolved \ nextcloud-desktop evolution -
The ./inst client
command is used to issue keys for the institute's
@@ -8541,23 +8536,23 @@ the test VPNs.
./inst client debian dick dick -+
./inst client debian dick dick
+
-The campus.ovpn
OpenVPN configuration file (generated in Test Client
+The campus.ovpn
OpenVPN configuration file (generated in Test Client
Command) is transferred to dick
, which is at the Wi-Fi access
point's wifi_wan_addr
.
scp *.ovpn sysadm@192.168.57.2: -+
scp *.ovpn sysadm@192.168.57.2:
+
@@ -8575,18 +8570,18 @@ instantly) and does a few basic tests in a terminal.
systemctl status ++systemctl status ping -c 1 8.8.4.4 # dns.google ping -c 1 192.168.56.1 # core host dns.google host core.small.private host www -
Next, the administrator copies Backup/WWW/
(included in the
@@ -8595,11 +8590,11 @@ appropriately.
sudo chown -R sysadm.staff /WWW/campus ++sudo chown -R sysadm.staff /WWW/campus sudo chown -R monkey.staff /WWW/live /WWW/test sudo chmod 02775 /WWW/* sudo chmod 664 /WWW/*/index.html -
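The `02775` mode deserves a note: the leading 2 is the setgid bit, so files later created under `/WWW/*` inherit each directory's group (`staff`) rather than the creating user's primary group. A local demonstration that the bit sticks:

```shell
# chmod 02775 sets rwxrwxr-x plus setgid; stat shows the full mode.
d=$(mktemp -d)
mkdir "$d/live"
chmod 02775 "$d/live"
stat -c %a "$d/live"   # prints: 2775
rm -rf "$d"
```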
@@ -8625,8 +8620,8 @@ will warn but allow the luser to continue.
Modify /WWW/live/index.html
on core
and wait 15 minutes for it to
@@ -8640,8 +8635,8 @@ Hack /home/www/index.html
on front
and observe the result at
Nextcloud is typically installed and configured after the first
@@ -8649,9 +8644,9 @@ Ansible run, when core
has Internet access via gate
.
installation directory /Nextcloud/nextcloud/
appears, the Ansible
code skips parts of the Nextcloud configuration. The same
installation (or restoration) process used on Core is used on core
-to create /Nextcloud/
. The process starts with Create
-/Nextcloud/
, involves Restore Nextcloud or Install Nextcloud,
-and runs ./inst config core
again 8.23.6. When the ./inst
+to create
/Nextcloud/
. The process starts with Create
+/Nextcloud/
, involves Restore Nextcloud or Install Nextcloud,
+and runs ./inst config core
again 8.23.6. When the ./inst
config core
command is happy with the Nextcloud configuration on
core
, the administrator uses Dick's notebook to test it, performing
the following tests on dick
's desktop.
@@ -8661,7 +8656,7 @@ the following tests on dick
's desktop.
http://core/nextcloud/
. It should be a
warning about accessing Nextcloud by an untrusted name.http://core.small.private/nextcloud/
. It should be a
+https://core.small.private/nextcloud/
. It should be a
login web page.sysadm
with password fubar
.~/nextCloud/
with the cloud. In the
Nextcloud app's Connection Wizard (the initial dialog), choose to
"Log in to your Nextcloud" with the URL
-http://core.small.private/nextcloud
. The web browser should pop
+https://core.small.private/nextcloud
. The web browser should pop
up with a new tab: "Connect to your account". Press "Log in" and
"Grant access". The Nextcloud Connection Wizard then prompts for
sync parameters. The defaults are fine. Presumably the Local
@@ -8714,7 +8709,7 @@ self-signed and unknown. It must be accepted (permanently).
dick
.
The URL starts with https://core.small.private/nextcloud/ and
ends with remote.php/dav/addressbooks/users/dick/contacts/
(yeah,
88 characters!). Create a contact in the new address book and see
it in the Contacts web page. At some point Evolution will need
@@ -8729,8 +8724,8 @@ the calendar.
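Since the CardDAV URL is long enough to mistype, a small sketch can assemble and print it instead (a hypothetical convenience, not part of the institute's tooling; the host name and member name are the document's examples):

```shell
# Hypothetical sketch: build member dick's CardDAV URL from its two
# halves, as given in the text, and print it for pasting into Evolution.
base="https://core.small.private/nextcloud/"
dav="remote.php/dav/addressbooks/users/dick/contacts/"
url="$base$dav"
echo "$url"
```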
With Evolution running on the member notebook dick
, one second email
@@ -8739,12 +8734,12 @@ commands on front
/sbin/sendmail dick
Subject: Hello, Dick.

How are you?
.
@@ -8758,8 +8753,8 @@ Outgoing email is also tested. A message to
At this point, dick
can move abroad, from the campus Wi-Fi
@@ -8769,8 +8764,8 @@ machine does not need to be shut down.
VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
@@ -8783,12 +8778,12 @@ tested in a terminal.
ping -c 1 8.8.4.4        # dns.google
ping -c 1 192.168.56.1   # core
host dns.google
host core.small.private
host www
@@ -8812,8 +8807,8 @@ calendar events.
To test the ./inst pass
command, the administrator logs in to core
@@ -8832,10 +8827,10 @@ On core
, logged in as sysadm
:
( cd ~/Maildir/new/
  cp `ls -1t | head -1` ~/msg )
grep Subject: ~/msg
@@ -8845,9 +8840,9 @@ password. Then on the administrator's notebook:
scp sysadm@192.168.56.1:msg ./
./inst pass < msg
@@ -8860,8 +8855,8 @@ Finally, the administrator verifies that dick
can login on co
One more institute command is left to exercise. The administrator
@@ -8869,8 +8864,8 @@ retires dick
and his main device dick
.
./inst old dick
@@ -8881,16 +8876,16 @@ should fail.
The small institute's network, as currently defined in this document, is lacking in a number of respects.
The current network monitoring is rudimentary. It could use some
@@ -8942,30 +8937,18 @@ include the essential verify-x509-name
. Use the same name on
separate certificates for Gate and Front? Use the same certificate
and key on Gate and Front?
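The verify-x509-name idea mentioned above could look like the following client-side fragment (a hypothetical sketch, not the institute's actual configuration; the directive names are standard OpenVPN options, while the host name is this document's public-domain example):

```text
# Hypothetical OpenVPN client fragment: pin the name expected in the
# server's certificate, so the client refuses any server presenting a
# different (even validly signed) certificate.
remote front.small.example.org 1194
verify-x509-name front.small.example.org name
```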
The testing process described in the previous chapter is far from complete. Additional tests are needed.
The backup
command has not been tested. It needs an encrypted
@@ -8974,8 +8957,8 @@ partition with which to sync? And then some way to compare that to
The restore process has not been tested. It might just copy Backup/
@@ -8985,8 +8968,8 @@ perhaps permissions too. It could also use an example
Email access (IMAPS) on front
is… difficult to test unless
@@ -9010,8 +8993,8 @@ could be used.
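Comparing a restored tree against the original, as suggested above, could be sketched as follows (a hypothetical illustration with invented paths; the real test would compare Backup/ against the live directories, and check ownership and permissions besides):

```shell
# Hypothetical sketch: diff -r walks both trees and exits non-zero,
# naming any file that differs, so a silent run means the copies match.
src=$(mktemp -d); dst=$(mktemp -d)
echo '<h1>Small Institute</h1>' > "$src/index.html"
cp "$src/index.html" "$dst/"      # stand-in for the restore step
if diff -r "$src" "$dst" > /dev/null; then
    echo "restore matches"
fi
rm -rf "$src" "$dst"
```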
Creating the private network from whole cloth (machines with recent @@ -9031,11 +9014,11 @@ etc.: quite a bit of temporary, manual localnet configuration just to get to the additional packages.
The strategy pursued in The Hardware is two-phase: prepare the servers
on the Internet where additional packages are accessible, then connect
them to the campus facilities (the private Ethernet switch, Wi-Fi AP,
ISP), manually configure IP addresses (while the DHCP client silently
@@ -9043,8 +9026,8 @@ fails), and avoid names until BIND9 is configured.
The strategy of Starting With Gate concentrates on configuring Gate's @@ -9088,8 +9071,8 @@ ansible-playbook -l core site.yml
A refinement of the current strategy might avoid the need to maintain @@ -9142,7 +9125,7 @@ routes on Front and Gate, making the simulation less… similar.