From 0ef7cafe670924c3be8486ce378abefbd1f2ed08 Mon Sep 17 00:00:00 2001
From: Matt Birkholz
This small institute has a public server on the Internet, Front. Members off campus connect to Front, making the institute email, cloud, etc. available to them.
The institute uses OpenPGP encryption to secure message content.
This small institute prizes its privacy, so there is little or no monitoring of member activity.
The small institute's network is designed to provide a number of services, subject to the policies described in the subsections below. On first reading, those subsections should be skipped; they reference particulars first introduced in the following chapter.
The institute has a public domain, e.g. small.example.org
, and a
private domain with host names like core
.
Front provides the public SMTP (Simple Mail Transfer Protocol) service.
The setting for the maximum message size is given in a code block labeled
postfix-message-size
and is included in Postfix
configurations wherever <<postfix-message-size>>
appears.
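Tangling can be pictured as simple reference substitution. The following is a hypothetical miniature, not the real tool (org-mode's noweb expansion), but it shows the idea of splicing a labeled block wherever its `<<name>>` reference appears:

```python
import re

# Hypothetical miniature of "tangling": a named code block is spliced
# into configuration text wherever a <<name>> reference appears.
blocks = {
    "postfix-message-size": "- { p: message_size_limit, v: 104857600 }",
}

def tangle(text: str) -> str:
    # Replace each <<name>> reference with the named block's content.
    return re.sub(r"<<(.+?)>>", lambda m: blocks[m.group(1)], text)

print(tangle("<<postfix-message-size>>"))
```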
The institute aims to accommodate encrypted email, so both Postfix configurations handle maxi-messages.
postfix-message-size
- { p: message_size_limit, v: 104857600 }
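For reference, the message_size_limit value above is exactly 100 MiB expressed in bytes:

```python
# 104857600 bytes is exactly 100 MiB (100 * 2^20).
limit = 104857600
assert limit == 100 * 1024 * 1024
print(limit // 2**20, "MiB")  # → 100 MiB
```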
postfix-queue-times
- { p: delay_warning_time, v: 1h }
- { p: maximal_queue_lifetime, v: 4h }
- { p: bounce_queue_lifetime, v: 4h }

The following setting disables relaying (other than for the local networks).

postfix-relaying
- p: smtpd_relay_restrictions
  v: permit_mynetworks reject_unauth_destination

postfix-maildir
- { p: home_mailbox, v: Maildir/ }
These common settings are included in the respective roles below.

The Dovecot settings on both Front and Core disable POP and require TLS. (The official documentation for Dovecot once was a Wiki but has since moved.)

dovecot-tls
protocols = imap
ssl = required

dovecot-ports
service imap-login {
  inet_listener imap {
    port = 0
  }
}

dovecot-maildir
mail_location = maildir:~/Maildir
Both hosts augment these common settings with host specific settings for ssl_cert
and ssl_key.
Front provides the public HTTP service that serves institute web pages
at e.g. https://small.example.org/
. The small institute initially
runs with a self-signed, "snake oil" server certificate, causing
browsers to warn of possible fraud, but this certificate is easily
replaced by one signed by a recognized authority, as discussed in The
Front Role.
Core runs Nextcloud to provide a private institute cloud at
https://core.small.private/nextcloud/
. It is managed manually per
The Nextcloud Server Administration Guide. The code and data,
including especially database dumps, are stored in /Nextcloud/
which
is included in Core's backup procedure as described in Backups. The
default Apache2 configuration expects to find the web scripts in
/var/www/nextcloud/
, so the institute symbolically links this to
/Nextcloud/nextcloud/
.
A small institute has just a handful of members. For simplicity (and
thus security) static configuration files are preferred over complex
account management systems, LDAP, Active Directory, and the like. The
Ansible scripts configure the same set of user accounts on Core and
Front. The Institute Commands (e.g. ./inst new dick
) capture the
processes of enrolling, modifying and retiring members of the
institute. They update the administrator's membership roll, and run
Ansible to create (and disable) accounts on Core, Front, Nextcloud,
accomplished via the campus cloud and the resulting desktop files can
all be private (readable and writable only by the owner) by default.
The institute avoids the use of the root
account (uid 0
) because
command is used to consciously (conscientiously!) run specific scripts
and programs as root
. When installation of a Debian OS leaves the
host with no user accounts, just the root
account, the next step is
to create a system administrator's account named sysadm
and to give
it permission to use the sudo
command (e.g. as described in The
Front Machine). When installation prompts for the name of an
initial, privileged user account the same name is given (e.g. as
described in The Core Machine). Installation may not prompt and
still create an initial user account with a distribution specific name
(e.g. pi
). Any name can be used as long as it is provided as the
value of ansible_user
in hosts
. Its password is specified by a
vault-encrypted variable in the Secret/become.yml
file. (The
hosts
and Secret/become.yml
files are described in The Ansible
Configuration.)
The institute's Core uses a special account named monkey
to run
account is created on Front as well.

The institute keeps its "master secrets" in an encrypted archive, its password saved in the administrator's password keep.

The small institute backs up its data (including the files mentioned in the Nextcloud database dump).

private/backup
#!/bin/bash -e
#
# DO NOT EDIT.
#
# Maintained (will be replaced) by Ansible.
#
# sudo backup [-n]

if [ `id -u` != "0" ]
then
echo "This script must be run as root."
This chapter introduces Ansible variables intended to simplify the institute's Ansible code. The variables are stored in separate files: public/vars.yml
and private/vars.yml
.

The example settings in this document configure VirtualBox VMs as described in the Testing chapter. For more information about how a small institute turns the example Ansible code into a working Ansible configuration, see chapter The Ansible Configuration.

The small institute's domain name is used quite frequently in the institute's Ansible code.
The institute's private domain name should end with one of the
top-level domains set aside for this purpose: .intranet
,
.internal
, .private
, .corp
, .home
or .lan
.1 It is
hoped that doing so will increase the chances that some abomination
like DNS-over-HTTPS will pass us by.

The small institute uses a private Ethernet, two VPNs, and a "wild", untrusted Ethernet. Their subnets are listed below (in CIDR notation) in abbreviated form (eliding 69,624 rows).
=> 10.62.17.0/24
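A sketch of how such a random /24 might be picked; the institute's actual procedure may differ, this just draws a random subnet from the RFC 1918 10.0.0.0/8 block:

```python
import ipaddress
import random

def random_private_24() -> ipaddress.IPv4Network:
    """Pick a random /24 subnet inside the RFC 1918 10.0.0.0/8 block."""
    second, third = random.randrange(256), random.randrange(256)
    return ipaddress.ip_network(f"10.{second}.{third}.0/24")

net = random_private_24()
assert net.subnet_of(ipaddress.ip_network("10.0.0.0/8"))
print(net)  # e.g. 10.62.17.0/24
```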
The small institute treats these addresses as sensitive information, so again the code block below "tangles" into private/vars.yml
rather than public/vars.yml
. Two of the addresses are in
192.168
subnets because they are part of a test
configuration using mostly-default VirtualBoxes (described here).

The small institute's network was built by its system administrator using Ansible on a trusted notebook. The Ansible configuration and scripts were generated by "tangling" the Ansible code included here. (The Ansible Configuration describes how to do this.) The following sections describe how Front, Gate and Core were prepared for Ansible.

Front is the small institute's public facing server, a virtual machine on the Internet. It is possible to quickly re-provision a new Front machine from a frontier Internet café using just the administrator's notebook.

The following example prepared a new front on a Digital Ocean droplet.
The freshly created Digital Ocean droplet came with just one account,
root
, but the small institute avoids remote access to the "super
-user" account (per the policy in The Administration Accounts), so the
+user" account (per the policy in The Administration Accounts), so the
administrator created a sysadm
account with the ability to request
escalated privileges via the sudo
command.
The password was generated by gpw
, saved in the administrator's
password keep, and later added to Secret/become.yml
as shown below. (Producing a working Ansible configuration with a
Secret/become.yml
file is described in The Ansible Configuration.)

After creating the sysadm
account on the droplet, the administrator concatenated a personal
public ssh key and the key found in Secret/ssh_admin/
(created by The CA Command) into an admin_keys
file, copied it to the droplet, and installed it as the
authorized_keys
for sysadm
.

Core is the small institute's private file, email, cloud and whatnot
server. The following example prepared a new core on a PC with Debian 11
freshly installed. During installation, the machine was named core
,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm
was created (per the policy in
The Administration Accounts).

The password was generated by gpw
, saved in the administrator's password keep, and later added to
Secret/become.yml
as shown below. (Producing a working Ansible configuration with a
Secret/become.yml
file is described in The Ansible Configuration.)

$ sudo apt install netplan.io systemd-resolved unattended-upgrades \
_ chrony isc-dhcp-server bind9 apache2 wireguard \
_ postfix dovecot-imapd fetchmail expect rsync \
_ gnupg openssh-server

Manual installation of Postfix prompted for configuration type and
mail name. The answers given are listed here.

The host then needed to be rebooted to get its name service working
again after systemd-resolved
was installed. (Any help with this
will be welcome!) After rebooting and re-logging in, yet more
software packages were installed.

The Nextcloud configuration required Apache2, MariaDB and a number of
PHP modules. Installing them while Core was on a cable modem sped up
final configuration "in position" (on a frontier).

Similarly, the NAGIOS configuration required a handful of packages
that were pre-loaded via cable modem (to test a frontier deployment).

_ nagios-nrpe-plugin
Next, the administrator concatenated a personal public ssh key and the
key found in Secret/ssh_admin/
(created by The CA Command) into an
admin_keys
file, copied it to Core, and installed it as the
authorized_keys
for sysadm
.

Note that the name core.lan
should be known to the cable modem's DNS
service. An IP address might be used instead, discovered with an ip
-4 a command on Core.

Next, Core's Ethernet interface was given a new, private IP address and a default route.

In the example command lines below, the address 10.227.248.1
was
generated by the random subnet address picking procedure described in
Subnets, and is named core_addr
in the Ansible code. The second
address, 10.227.248.2
, is the corresponding address for Gate's
Ethernet interface, and is named gate_addr
in the Ansible
code.

At this point Core was ready for provisioning with Ansible.

Gate is the small institute's route to the Internet and the campus firewall, separating the private Ethernet from the untrusted network of campus IoT appliances and Wi-Fi access point(s).
isp
is its third network interface, connected to the campus
ISP. This could be an Ethernet device connected to a cable
modem. It could be a USB port tethered to a phone, a
USB-Ethernet adapter, or a wireless adapter connected to a
campground Wi-Fi access point, etc.

[Figure: network diagram of the campus premises, the ISP connection, and the Ethernet switch.]

While Gate and Core really need to be separate machines for security reasons, Gate can also host the campus Wi-Fi access point. This avoids the need for a second Wi-Fi access point and leads to the following topology.

[Figure: alternate topology with the House ISP, where Gate hosts the Wi-Fi access point and its Ethernet and Wi-Fi clients are allowed to communicate.]

The Ansible code in this document is somewhat dependent on the physical network shown in the Overview wherein Gate has three network interfaces.

The following example prepared a new gate on a PC with Debian 11 freshly installed. During installation, the machine was named gate
,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm
was created (per the policy in
The Administration Accounts).
The password was generated by gpw
, saved in the administrator's password keep, and later added to
Secret/become.yml
as shown below. (Producing a working Ansible configuration with a
Secret/become.yml
file is described in The Ansible Configuration.)

$ sudo apt install netplan.io systemd-resolved unattended-upgrades \
_ ufw isc-dhcp-server postfix wireguard \
_ openssh-server

The host then needed to be rebooted to get its name service working
again after systemd-resolved
was installed. (Any help with this will
be welcome!) After rebooting and re-logging in, the administrator was
ready to proceed.

Next, the administrator concatenated a personal public ssh key and the
key found in Secret/ssh_admin/
(created by The CA Command) into an admin_keys
file, copied it to Gate, and installed it as the authorized_keys
for sysadm
.

In the example command lines below, the address 10.227.248.2
was generated by the random subnet address picking procedure described
in Subnets, and is named gate_addr
in the Ansible code.

$ sudo ip address add 10.227.248.2 dev eth0

Gate was also connected to the USB Ethernet dongles cabled to the
campus Wi-Fi access point and the campus ISP and the values of three
variables (gate_lan_mac
, gate_wild_mac
, and gate_isp_mac
in private/vars.yml
) match the actual hardware MAC addresses of the
dongles. (For more information, see the Gate role's Configure Netplan
task.)

At this point Gate was ready for provisioning with Ansible.
The all
role contains tasks that are executed on all of the
institute's servers. At the moment there is just the one.
The all
role's task contains a reference to a common institute
particular, the institute's domain_name
, a variable found in the
public/vars.yml
file. Thus the first task of the all
role is to
include the variables defined in this file (described in The
Particulars). The code block below is the first to tangle into
roles/all/tasks/main.yml
.
The systemd-networkd
and systemd-resolved
service units are not
enabled by default on Debian; the institute follows these recommendations (and not the suggestion to enable them).
  - ansible_distribution == 'Debian'
  - 11 < ansible_distribution_major_version|int

- name: Start systemd-networkd.
  become: yes
  systemd:
    service: systemd-networkd
    state: started
  tags: actualizer

- name: Enable systemd-networkd.
  become: yes
  systemd:
    service: systemd-networkd
    enabled: yes

- name: Start systemd-resolved.
  become: yes
  systemd:
    service: systemd-resolved
    state: started
  tags: actualizer

- name: Enable systemd-resolved.
  become: yes
  systemd:
    service: systemd-resolved
    enabled: yes
- name: Link /etc/resolv.conf.
become: yes
All servers should recognize the institute's Certificate Authority as trustworthy, so its certificate is added to the set of trusted CAs on each host. More information about how the small institute manages its X.509 certificates is available in Keys.

roles_t/all/handlers/main.yml
---
- name: Update CAs.
  become: yes
  command: update-ca-certificates
The front
role installs and configures the services expected on the
institute's publicly accessible "front door": email, web, VPN. The
virtual machine is prepared with an Ubuntu Server install and remote
access to a privileged, administrator's account. (For details, see
The Front Machine.)

perhaps with symbolic links to, for example,
/etc/letsencrypt/live/small.example.org/fullchain.pem
.
The first task, as in The All Role, is to include the institute
particulars. The front
role refers to private variables and the
membership roll, so these are included as well.
This task ensures that Front's /etc/hostname
and /etc/mailname
are
correct, for proper email delivery.

  loop:
  - /etc/hostname
  - /etc/mailname

roles_t/front/handlers/main.yml
---
- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
  when: domain_name != ansible_hostname
  tags: actualizer
The administrator often needs to read (directories of) log files owned by root
or adm
. Membership in these groups speeds up debugging.
The SSH service on Front needs to be known to Monkey. The following
those stored in Secret/ssh_front/etc/ssh/
roles_t/front/handlers/main.yml
---
- name: Reload SSH server.
  become: yes
  systemd:
    service: ssh
    state: reloaded
  tags: actualizer
The small institute runs cron jobs and web scripts that generate
reports and perform checks. The un-privileged jobs are run by a
system account named monkey
. One of Monkey's more important jobs on
Core is to run rsync
to update the public web site on Front. Monkey
on Core will login as monkey
on Front to synchronize the files (as
described in Configure Apache2). To do that without needing a
password, the monkey
account on Front should authorize Monkey's SSH
key on Core.
Monkey uses Rsync to keep the institute's public web site up-to-date.
The institute prefers to install security updates as soon as possible.
User accounts are created immediately so that Postfix and Dovecot can
start delivering email immediately, without returning "no such
-recipient" replies. The Account Management chapter describes the
+recipient" replies. The Account Management chapter describes the
members
and usernames
variables used below.
The servers on Front use the same certificate (and key) to
readable by root
.

Front uses Postfix to provide the institute's public SMTP service. The appropriate answers to the install-time configuration prompts are listed here but will be checked by the tasks below.

As discussed in The Email Service above, Front's Postfix configuration includes site-wide support for larger message sizes, shorter queue times, the relaying configuration, and the common path to incoming emails. These and a few Front-specific Postfix configurations follow, beginning with the public VPN subnet via which Core relays messages from the campus.
postfix-front-networks
- p: mynetworks
  v: >-
    {{ public_wg_net_cidr }}
    127.0.0.0/8
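The effect of mynetworks can be sketched as a membership test; the 10.177.87.0/24 value here is a hypothetical stand-in for public_wg_net_cidr:

```python
import ipaddress

# Hypothetical stand-ins for the tangled mynetworks values above.
mynetworks = ["10.177.87.0/24", "127.0.0.0/8"]

def may_relay(ip: str) -> bool:
    # Postfix permits relaying for clients inside mynetworks.
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in mynetworks)

assert may_relay("10.177.87.2")      # e.g. Core, via the public VPN
assert not may_relay("203.0.113.9")  # arbitrary Internet host
```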
difficult for internal hosts, who do not have (public) domain names.
postfix-front-restrictions
- p: smtpd_recipient_restrictions
  v: >-
    permit_mynetworks
    reject_unauth_pipelining
messages; incoming messages are delivered locally, without

postfix-header-checks
- p: smtp_header_checks
  v: regexp:/etc/postfix/header_checks.cf

postfix-header-checks-content
/^Received:/ IGNORE
/^User-Agent:/ IGNORE
Debian default for inet_interfaces
.

postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt }
- { p: smtpd_tls_key_file, v: /etc/server.key }
<<postfix-front-networks>>
<<postfix-front-restrictions>>
The tasks below configure Postfix, then start and enable the service.
dest: /etc/postfix/header_checks.cf
notify: Postmap header checks.
- name: Start Postfix.
  become: yes
  systemd:
    service: postfix
    state: started
  tags: actualizer

- name: Enable Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
  systemd:
    service: postfix
    state: restarted
  tags: actualizer
- name: Postmap header checks.
become: yes
The institute's Front needs to deliver email addressed to a number of common aliases as well as those created by a more specialized role.

- name: New aliases.
  become: yes
  command: newaliases
  tags: actualizer

Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to pick up messages. Dovecot is installed with its Debian default, with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core accesses Front via VPN, but helps to ensure privacy even when members must, in extremis, access recent email directly from their accounts on Front. For more information about Front's role in the institute's email services, see The Email Service.

  dest: /etc/dovecot/local.conf
  notify: Restart Dovecot.

- name: Start Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: started
  tags: actualizer

- name: Enable Dovecot.
  become: yes
  systemd:
    service: dovecot
    enabled: yes

This is the small institute's public web site. It is simple, static, and mostly hand-coded. The cipher configuration below was taken from https://www
apache-ciphers
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder on
SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
'ECDHE-ECDSA-AES256-GCM-SHA384',
used on all of the institute's web sites.

apache-userdir-front
UserDir /home/www-users
<Directory /home/www-users/>
	Require all granted
	AllowOverride None
</Directory>

The institute's web sites redirect insecure HTTP requests to the corresponding
HTTPS URLs.

apache-redirect-front
<VirtualHost *:80>
Redirect permanent / https://{{ domain_name }}/
</VirtualHost>
the inside of a VirtualHost
block. They should apply globally.
apache-front
ServerName {{ domain_name }}
ServerAdmin webmaster@{{ domain_name }}
DocumentRoot /home/www
e.g. /etc/apache2/sites-available/small.example.org.conf
and runs
creates: /etc/apache2/sites-enabled/{{ domain_name }}.conf
notify: Restart Apache2.
- name: Start Apache2.
  become: yes
  systemd:
    service: apache2
    state: started
  tags: actualizer

- name: Enable Apache2.
  become: yes
  systemd:
    service: apache2
    enabled: yes
  systemd:
    service: apache2
    state: restarted
  tags: actualizer
the users' ~/Public/HTML/
directories.
Front uses WireGuard™ to provide a public (Internet accessible) VPN, forwarding packets between it and the institute's other private networks.
The following example private/front-wg0.conf
configuration recognizes
Core by its public key and routes the institute's private networks to
it. It also recognizes Dick's notebook and his (replacement) phone,
assigning them host numbers 4 and 6 on the VPN.
private/front-wg0.conf
[Interface]
Address = 10.177.87.1/24
ListenPort = 39608
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i 192.168.56.1
PostUp = resolvectl domain %i small.private
# Core
[Peer]
PublicKey = lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
AllowedIPs = 10.177.87.2
AllowedIPs = 192.168.56.0/24
AllowedIPs = 192.168.57.0/24
AllowedIPs = 10.84.139.0/24
# dick
[Peer]
PublicKey = 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
AllowedIPs = 10.177.87.4
# dicks-razr
[Peer]
PublicKey = zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
AllowedIPs = 10.177.87.6
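The peer addresses above follow a simple host-number scheme on the VPN's /24; a quick check of the numbering:

```python
import ipaddress

# Host numbers on the 10.177.87.0/24 VPN, from the configuration above.
vpn = ipaddress.ip_network("10.177.87.0/24")
peers = {"front": 1, "core": 2, "dick": 4, "dicks-razr": 6}
addrs = {name: vpn.network_address + n for name, n in peers.items()}

assert str(addrs["dick"]) == "10.177.87.4"
assert str(addrs["dicks-razr"]) == "10.177.87.6"
```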
WireGuard™ tunnel on Dick's notebook, used abroad
The following tasks install WireGuard™, configure it with
private/front-wg0.conf
, and enable the service.
roles_t/front/tasks/main.yml
- name: Enable IP forwarding.
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

- name: Install WireGuard™.
become: yes
apt: pkg=wireguard
group: root
notify: Restart WireGuard™.
- name: Start WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    state: started
  tags: actualizer

- name: Enable WireGuard™.
  become: yes
  systemd:
    service: wg-quick@wg0
    enabled: yes
  systemd:
    service: wg-quick@wg0
    state: restarted
  tags: actualizer
Front uses Kamailio to provide a SIP service on the public VPN so that
specifies the actual IP, known here as front_wg_addr
.
kamailio
listen=udp:{{ front_wg_addr }}:5060
wg0
device has appeared.
  become: yes
  systemd:
    daemon-reload: yes
  tags: actualizer
The core
role configures many essential campus network services as
well as the institute's private cloud, so the core machine has
horsepower (CPUs and RAM) and large disks and is prepared with a
Debian install and remote access to a privileged, administrator's
account. (For details, see The Core Machine.)

The first task, as in The Front Role, is to include the institute particulars and membership roll.
This task ensures that Core's /etc/hostname
and /etc/mailname
are
correct, for proper email delivery.

  loop:
  - { name: "core.{{ domain_priv }}", file: /etc/mailname }
  - { name: "{{ inventory_hostname }}", file: /etc/hostname }

roles_t/core/handlers/main.yml
---
- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
  when: inventory_hostname != ansible_hostname
  tags: actualizer
Core runs the campus name server, so Resolved is configured to use it, to include the institute's private domain in its search list, and to disable its cache and stub listener.

roles_t/core/handlers/main.yml
---
- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes
  tags: actualizer

- name: Restart Systemd resolved.
  become: yes
  systemd:
    service: systemd-resolved
    state: restarted
  tags: actualizer
Core's network interface is statically configured using Netplan and an /etc/netplan/60-core.yaml file. (Any interface name in the ansible_facts will do; the fact was an empty hash at first boot on a simulated campus Ethernet.)

      nameservers:
        search: [ {{ domain_priv }} ]
        addresses: [ {{ core_addr }} ]
      routes:
      - to: default
        via: {{ gate_addr }}
  dest: /etc/netplan/60-core.yaml
  mode: u=rw,g=r,o=
  notify: Apply netplan.

- name: Apply netplan.
  become: yes
  command: netplan apply
  tags: actualizer
Core speaks DHCP (Dynamic Host Configuration Protocol) using the
the real private/core-dhcpd.conf
  dest: /etc/dhcp/dhcpd.conf
  notify: Restart DHCP server.

- name: Start DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    state: started
  tags: actualizer

- name: Enable DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    enabled: yes

- name: Restart DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    state: restarted
  tags: actualizer
Core uses BIND9 to provide name service for the institute as described in The Name Service. The configuration supports reverse name lookups, resolving many private network addresses to private domain names.
The following tasks install and configure BIND9 on Core.

  loop: [ domain, private, public_vpn, campus_vpn ]
  notify: Reload BIND9.

- name: Start BIND9.
  become: yes
  systemd:
    service: bind9
    state: started
  tags: actualizer

- name: Enable BIND9.
  become: yes
  systemd:
    service: bind9
    enabled: yes

bind-options
acl "trusted" {
{{ private_net_cidr }};
{{ wild_net_cidr }};
{{ public_wg_net_cidr }};
};

The campus ISP's name servers should probably be used as forwarders rather than Google's.
bind-local
include "/etc/bind/zones.rfc1918";
zone "{{ domain_priv }}." {
    type master;
    file "/etc/bind/db.domain";
};

zone "{{ private_net_cidr | ansible.utils.ipaddr('revdns')
         | regex_replace('^0\.','') }}" {
    type master;
    file "/etc/bind/db.private";
};

zone "{{ public_wg_net_cidr | ansible.utils.ipaddr('revdns')
         | regex_replace('^0\.','') }}" {
    type master;
    file "/etc/bind/db.public_vpn";
};

zone "{{ campus_wg_net_cidr | ansible.utils.ipaddr('revdns')
         | regex_replace('^0\.','') }}" {
    type master;
    file "/etc/bind/db.campus_vpn";
};
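The ipaddr('revdns') filter plus the regex_replace above turn a /24
network CIDR into the name of its in-addr.arpa zone.  A minimal shell
sketch of the same computation (assuming /24 networks, as the institute
uses; the real filter may also yield a trailing dot, which BIND accepts):

```shell
# Sketch of ansible.utils.ipaddr('revdns') | regex_replace('^0\.','')
# for a /24 network: 192.168.56.0/24 -> 56.168.192.in-addr.arpa
revdns_zone() {
  local net=${1%/*}              # strip the /24 prefix length
  local a b c d
  IFS=. read -r a b c d <<<"$net"
  echo "$c.$b.$a.in-addr.arpa"   # reverse the first three octets
}
```

For example, `revdns_zone 192.168.56.0/24` names the reverse zone served
from db.private above.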
private/db.domain
;
; BIND data file for a small institute's PRIVATE domain names.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
mail    IN      CNAME   core.small.private.
smtp    IN      CNAME   core.small.private.
ns      IN      CNAME   core.small.private.
www     IN      CNAME   core.small.private.
test    IN      CNAME   core.small.private.
live    IN      CNAME   core.small.private.
ntp     IN      CNAME   core.small.private.
sip     IN      A       10.177.87.1
;
core    IN      A       192.168.56.1
gate    IN      A       192.168.56.2
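Whenever records in these zone files change, the Serial must also be
bumped or secondaries and caches will not notice the edit.  A
hypothetical helper (not part of the institute's code) sketching an
automated bump, relying on the "; Serial" comment style used above:

```shell
# Hypothetical helper: increment the number on the "; Serial" line of a
# BIND zone file and print the result (the caller decides where to save it).
bump_serial() {
  awk '/; Serial/ { sub(/[0-9]+/, $1 + 1) } { print }' "$1"
}
```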
private/db.private
;
; BIND reverse data file for a small institute's private Ethernet.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     core.small.private.
2       IN      PTR     gate.small.private.
private/db.public_vpn
;
; BIND reverse data file for a small institute's public VPN.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     front-p.small.private.
2       IN      PTR     core-p.small.private.
private/db.campus_vpn
;
; BIND reverse data file for a small institute's campus VPN.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     gate-c.small.private.
The administrator often needs to read (directories of) log files owned
by various system groups.  Adding the administrator's account to these
groups speeds up debugging.
The small institute runs cron jobs and web scripts that generate
reports and perform checks. The un-privileged jobs are run by a
system account named monkey.  One of Monkey's more important jobs on
Core is to run rsync to update the public web site on Front (as
described in *Configure Apache2).
The institute prefers to install security updates as soon as possible.
The expect program is used by The Institute Commands to interact
with Nextcloud on the command line.
User accounts are created immediately so that backups can begin
restoring as soon as possible.  The Account Management chapter
describes the members and usernames variables.
The servers on Core use the same certificate (and key) to authenticate
themselves to institute clients.  They share the /etc/server.crt and
/etc/server.key files.
Core uses Chrony to provide a time synchronization service to the
campus.  The daemon's default configuration is fine.
roles_t/core/tasks/main.yml

- name: Install Chrony.
  become: yes
  apt: pkg=chrony

- name: Configure NTP service.
  become: yes
  copy:
    content: |
      allow {{ private_net_cidr }}
      allow {{ public_wg_net_cidr }}
      allow {{ campus_wg_net_cidr }}
    dest: /etc/chrony/conf.d/institute.conf
  notify: Restart Chrony.
roles_t/core/handlers/main.yml
- name: Restart Chrony.
  become: yes
  systemd:
    service: chrony
    state: restarted
Core uses Postfix to provide SMTP service to the campus.  The
appropriate answers to the package configuration questions are listed
here but will be checked.
As discussed in The Email Service above, Core delivers email addressed
to any internal domain name locally, and uses its smarthost Front to
relay the rest.  Core is reachable only on institute networks, so there
is little benefit in enabling TLS, but it does need to handle
maxi-messages.

Core relays messages from any institute network.
postfix-core-networks

- p: mynetworks
  v: >-
    {{ private_net_cidr }}
    {{ public_wg_net_cidr }}
Core uses Front to relay messages to the Internet.
postfix-core-relayhost

- { p: relayhost, v: "[{{ front_wg_addr }}]" }
postfix-transport

.{{ domain_name }}	local:$myhostname
.{{ domain_priv }}	local:$myhostname
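The effect of the table: mail for any subdomain of either institute
domain is delivered locally, and everything else falls through to the
relayhost.  A rough sketch of the lookup, with small.example.org and
small.private standing in for the domain_name and domain_priv
variables:

```shell
# Rough sketch of transport(5) lookups against the table above.
# A ".domain" entry matches subdomains of domain.
lookup_transport() {
  case $1 in
    *.small.example.org|*.small.private) echo 'local:$myhostname' ;;
    *)                                   echo 'relayed via relayhost' ;;
  esac
}
```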
The complete list of Core's Postfix settings follows.
postfix-core

<<postfix-relaying>>
- { p: smtpd_tls_security_level, v: none }
- { p: smtp_tls_security_level, v: none }
<<postfix-message-size>>
The following tasks install Postfix, configure it, and enable the
service.  Whenever /etc/postfix/transport is changed, the Postmap
transport handler must run.
    dest: /etc/postfix/transport
  notify: Postmap transport.

- name: Start Postfix.
  become: yes
  systemd:
    service: postfix
    state: started
  tags: actualizer

- name: Enable Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
  systemd:
    service: postfix
    state: restarted
  tags: actualizer
- name: Postmap transport.
become: yes
The institute's Core needs to deliver email addressed to institute
aliases, some of which are installed by more specialized roles.

  become: yes
  blockinfile:
    block: |
      admin: root
      www-data: root
      monkey: root
    path: /etc/aliases
    marker: "# {mark} INSTITUTE MANAGED BLOCK"
  notify: New aliases.

- name: New aliases.
  become: yes
  command: newaliases
  tags: actualizer
Core uses Dovecot's IMAPd to store and serve member emails.  As on
Front, TLS may seem "over the top" given that Core is only accessed
from private (encrypted) networks, but helps to ensure privacy even
when members accidentally attempt connections from outside the private
networks.  For more information about Core's role in the institute's
email services, see The Email Service.
The institute follows the recommendation in the package README.Debian
(in /usr/share/dovecot-core/) but replaces the default "snake oil"
certificate with another, signed by the institute.  (For more
information about the institute's X.509 certificates, see Keys.)
    dest: /etc/dovecot/local.conf
  notify: Restart Dovecot.

- name: Start Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: started
  tags: actualizer

- name: Enable Dovecot.
  become: yes
  systemd:
    service: dovecot
    enabled: yes
Core runs a fetchmail for each member of the institute.  Individual
configurations are generated from a template keyed on the username.
The template is only used when the member's record has a
password_fetchmail value.
fetchmail-config

# Permissions on this file may be no greater than 0600.

set no bouncemail
set no spambounce
set no syslog
#set logfile /home/{{ item }}/.fetchmail.log

poll {{ front_wg_addr }} protocol imap timeout 15
    username {{ item }}
    password "{{ members[item].password_fetchmail }}" fetchall
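The rendered result can be sketched in shell, with illustrative
arguments standing in for the template's item, front_wg_addr, and
password_fetchmail values:

```shell
# Sketch of the per-member .fetchmailrc rendering (illustrative values;
# the real file is generated by the Ansible template above).
render_fetchmailrc() {
  local front=$1 user=$2 pass=$3
  printf '%s\n' \
    '# Permissions on this file may be no greater than 0600.' \
    '' \
    'set no bouncemail' \
    'set no spambounce' \
    'set no syslog' \
    '' \
    "poll $front protocol imap timeout 15" \
    "    username $user" \
    "    password \"$pass\" fetchall"
}
```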
The Systemd service description.
fetchmail-service

[Unit]
Description=Fetchmail --idle task for {{ item }}.
AssertPathExists=/home/{{ item }}/.fetchmailrc
After=wg-quick@wg0.service
  when:
  - members[item].status == 'current'
  - members[item].password_fetchmail is defined
  tags: accounts, actualizer
Otherwise the following task might be appropriate.
This is the small institute's campus web server.  It hosts several web
sites as described in The Web Services.
The next code block implements the CA sub-command, which creates a new
certificate authority.
umask 077;
mysystem "cd Secret/CA; ./easyrsa init-pki";
mysystem "cd Secret/CA; ./easyrsa build-ca nopass";
  # Common Name: small.example.org

my $dom = $domain_name;
my $pvt = $domain_priv;
mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass";
The next code block implements the config sub-command.
For general information about members and their Unix accounts, see
Accounts.  The account management sub-commands maintain a mapping
associating member "usernames" (Unix account names) with their
records.  The mapping is stored among other things in
private/members.yml as the value associated with the key members.
The dump subroutine is another story (below).
my $old_umask = umask 077;
my $path = "private/members.yml";
print "$path: "; STDOUT->flush;
  eval { #DumpFile ("$path.tmp", $yaml);
         dump_members_yaml ("$path.tmp", $yaml);
rename ("$path.tmp", $path)
or die "Could not rename $path.tmp: $!\n"; };
my $err = $@;
The next code block implements the new sub-command.  It adds a new
member, with an initial, generated password.
The institute's passwd command on Core securely emails root with a
member's new password.  Root runs the Ansible site.yml playbook to
update the servers, and a confirmation message is sent to member@core.
The next code block implements the less aggressive passwd command.
roles_t/core/templates/passwd

#!/bin/perl -wT

use strict;
$ENV{PATH} = "/usr/sbin:/usr/bin:/bin";
close $TMP;
open $O, ("| gpg --encrypt --armor"
." --trust-model always --recipient root\@core"
." > $tmp") or die "Error running gpg > $tmp: $!\n";
print $O <<EOD;
username: $username
password: $epass
EOD
close $O or die "Error closing pipe to gpg: $!\n";
use File::Copy;
open ($O, "| sendmail root");
print $O <<EOD;
From: root
To: root
Subject: New password.

EOD
$O->flush;
copy $tmp, $O;
#print $O `cat $tmp`;
close $O or die "Error closing pipe to sendmail: $!\n";
print "
Your request was sent to Root.  PLEASE WAIT for email confirmation
that the change was completed.\n";
exit;
The following code block implements the ./inst pass command, used by
the administrator to update private/members.yml before running
Ansible.
my $O = new IO::File;
open ($O, "| sendmail $user\@$domain_priv")
or die "Could not pipe to sendmail: $!\n";
  print $O "From: <root>
To: <$user>
Subject: Password change.

Your new password has been distributed to the servers.

As always: please email root with any questions or concerns.\n";
close $O or die "pipe to sendmail failed: $!\n";
exit;
}
Nextcloud passwords are reset with the users:resetpassword command
using expect(1).
The following Ansible tasks install the less aggressive passwd
command, as well as a GnuPG configuration so that the email to root
can be encrypted.
The old command disables a member's account (and thus their clients).
The client command registers the public key of a client wishing to
connect to the institute's VPNs.  (WireGuard™ may get better support
in NetworkManager soon.)
die "$user: does not exist\n"
if !defined $member && $type ne "campus";
  my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
      = map { [ (split / /), "" ] } @{$yaml->{"clients"}};
my @member_peers = ();
for my $u (sort keys %$members) {
}
my $core_wg_addr = hostnum_to_ipaddr (2, $public_wg_net_cidr);
  my $extra_front_config = "
PostUp = resolvectl dns %i $core_addr
PostUp = resolvectl domain %i $domain_priv

# Core
[Peer]
PublicKey = $core_wg_pubkey
AllowedIPs = $core_wg_addr
AllowedIPs = $private_net_cidr
AllowedIPs = $wild_net_cidr
AllowedIPs = $campus_wg_net_cidr\n";
write_wg_server ("private/front-wg0.conf", \@member_peers,
hostnum_to_ipaddr_cidr (1, $public_wg_net_cidr),
my ($file, $peers, $addr_cidr, $port, $extra) = @_;
my $O = new IO::File;
open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
  print $O "[Interface]
Address = $addr_cidr
ListenPort = $port
PostUp = wg set %i private-key /etc/wireguard/private-key$extra";
for my $p (@$peers) {
my ($n, $h, $t, $k, $u) = @$p;
next if $k =~ /^-/;
my $ip = hostnum_to_ipaddr ($h, $addr_cidr);
    print $O "
# $n
[Peer]
PublicKey = $k
AllowedIPs = $ip\n";
}
close $O or die "Could not close $file.tmp: $!\n";
rename ("$file.tmp", $file)
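The per-peer stanza emitted by the loop above can be sketched as a
small shell function (the name, key, and address below are
illustrative; the real address is derived from the peer's hostnum):

```shell
# Sketch of the "# name / [Peer] / PublicKey / AllowedIPs" stanza that
# write_wg_server emits for each registered client.
wg_peer_stanza() {
  local name=$1 pubkey=$2 ip=$3
  printf '\n# %s\n[Peer]\nPublicKey = %s\nAllowedIPs = %s\n' \
         "$name" "$pubkey" "$ip"
}
```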
open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
  my $DNS = ($type eq "android"
             ? "
DNS = $core_addr
Domain = $domain_priv"
             : "
PostUp = wg set %i private-key /etc/wireguard/private-key
PostUp = resolvectl dns %i $core_addr
PostUp = resolvectl domain %i $domain_priv");
  my $WILD = ($file eq "public.conf"
              ? "
AllowedIPs = $wild_net_cidr"
              : "");
  print $O "[Interface]
Address = $addr$DNS

[Peer]
PublicKey = $pubkey
EndPoint = $endpt
AllowedIPs = $server_addr
AllowedIPs = $private_net_cidr$WILD
AllowedIPs = $public_wg_net_cidr
AllowedIPs = $campus_wg_net_cidr\n";
close $O or die "Could not close $file.tmp: $!\n";
rename ("$file.tmp", $file)
or die "Could not rename $file.tmp: $!\n";
{
my ($hostnum, $net_cidr) = @_;
  # Assume 24bit subnet, 8bit hostnum.
  # Find a Perl library for more generality?
  die "$hostnum: hostnum too large\n" if $hostnum > 255;
  my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
  die if !$prefix;
  return "$prefix.$hostnum";
}

sub hostnum_to_ipaddr_cidr ($$)
{
  my ($hostnum, $net_cidr) = @_;

  # Assume 24bit subnet, 8bit hostnum.
  # Find a Perl library for more generality?
  die "$hostnum: hostnum too large\n" if $hostnum > 255;
  my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
  die if !$prefix;
  return "$prefix.$hostnum/24";
}
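The same /24-only address arithmetic can be sketched as a shell helper
(hypothetical, mirroring the Perl above) for quick command-line checks:

```shell
# Mirror of the Perl hostnum_to_ipaddr: join a /24 network's first three
# octets with the host number.  Fails (non-zero) for hostnums over 255
# or networks that are not /24.
hostnum_to_ipaddr() {
  local hostnum=$1 net_cidr=$2
  [ "$hostnum" -le 255 ] || { echo "$hostnum: hostnum too large" >&2; return 1; }
  case $net_cidr in
    *.*.*.*/24) echo "${net_cidr%.*/24}.$hostnum" ;;
    *) return 1 ;;
  esac
}
```

For example, host 2 of 192.168.56.0/24 is Gate's private address,
192.168.56.2.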
This should be the last block tangled into the inst script.  It
defines the subroutines used above.
The example files in this document, ansible.cfg and hosts, as well as
the others, assume the test networks described here, where the machine
hosting the simulation is the VirtualBox host.
The next two sections list the steps taken to create the simulated
Core, Gate and Front machines, and connect them to their networks.
The process is similar to that described in The (Actual) Hardware, but
is covered in detail here where the VirtualBox hypervisor can be
assumed and exact command lines can be given (and copied during
re-testing).  The remaining sections describe the manual testing
process.  An HTML version of the latest revision of the VirtualBox
manual can be found on the official web site at
https://www.virtualbox.org/manual/UserManual.html.
The networks used in the test:
The networks are created by the following VBoxManage commands.
--network 192.168.15.0/24 \
--enable --dhcp on --ipv6 off
VBoxManage natnetwork start --netname premises
VBoxManage hostonlyif create                                # vboxnet0
VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
VBoxManage dhcpserver modify --interface=vboxnet0 --disable
VBoxManage hostonlyif create                                # vboxnet1
VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2
The premises NAT network simulates the Internet with the
192.168.15.0/24 network.
The virtual machines are created by VBoxManage command lines in the
following sub-sections.  They each start with a recent Debian release
(e.g. debian-12.5.0-amd64-netinst.iso) in their simulated DVD drives.
As in The Hardware preparation process being simulated, a few
additional software packages are installed.  Unlike in The Hardware
preparation, machines are moved to their final networks and then
remote access is authorized. (They are not accessible via ssh
on
the VirtualBox NAT network where they first boot.)
With privileged accounts on the virtual machines, they are prepared
for configuration by Ansible.
The following shell function contains most of the VBoxManage commands
used to create a test machine.  Appropriate responses to the Debian
installer's prompts are given in the list below.
The front machine is created with 512MiB of RAM and 4GiB of disk.
After Debian is installed (as detailed above) front is shut down and
its primary network interface moved to the simulated Internet, the NAT
network premises.  front also gets a second network interface, on the
host-only network vboxnet1, to make it directly accessible to the
administrator's notebook (as described in The Test Networks).
This suffices for front, which is never deployed on a frontier, always
in the cloud.  Additional Debian packages are assumed to be readily
available.  Thus Ansible installs them as necessary, but first the
administrator authorizes remote access by following the instructions
in the final section: Ansible Test Authorization.
The gate machine is created with the same amount of RAM and disk as
front, using the create_vm shell function.
After Debian is installed (as detailed in A Test Machine) and the
machine rebooted, the administrator logs in and installs several
additional software packages.
The Ethernet interface is temporarily configured with an IP address.
Finally, the administrator authorizes remote access by following the
instructions in the final section: Ansible Test Authorization.
The core machine is created with 1GiB of RAM and 6GiB of disk, using
the create_vm shell function.
After Debian is installed (as detailed in A Test Machine) and the
machine rebooted, the administrator logs in and installs several
additional software packages.
Finally, the administrator authorizes remote access by following the
instructions in the next section: Ansible Test Authorization.
To authorize Ansible's access to the three test machines, the
administrator copies an SSH public key to each test machine.
SRC=Secret/ssh_admin/id_rsa.pub
scp $SRC sysadm@192.168.57.3:admin_key    # Front
scp $SRC sysadm@192.168.56.2:admin_key    # Gate
scp $SRC sysadm@192.168.56.1:admin_key    # Core
At this point the three test machines core, gate, and front are ready
to be configured by Ansible.
At this point the test institute is just core, gate and front, so the
first tests check basic connectivity, forwarding (and NATing).  On
core (and gate):
ping -c 1 8.8.4.4        # dns.google
ping -c 1 192.168.15.5   # front_addr
Further tests involve Nextcloud account management.  Nextcloud is
installed on core as described in Configure Nextcloud.  Once
/Nextcloud/ is created, ./inst config core will validate or update its
configuration files.
A member must be enrolled so that a member's client machine can be
authorized with the ./inst client command.  Take note of Dick's
initial password.
A test member's notebook is created next, much like the servers,
except that it is connected to (or behind) the access point.
Debian is installed much as detailed in A Test Machine except that the
SSH server option is not needed and the GNOME desktop option is.  When
the machine reboots, the administrator logs into the desktop and
installs a couple additional software packages (which require several
more).
The ./inst client command is used to register the public key of a new
client, generating campus.conf and public.conf files.
The campus.conf WireGuard™ configuration file (generated in Test
Client Command) is transferred to dick, which is at the Wi-Fi access
point's IP address, host 2 on the wild Ethernet.
systemctl status
ping -c 1 8.8.8.8        # dns.google
ping -c 1 192.168.56.1   # core
host dns.google
host core.small.private
host www
Next, the administrator copies Backup/WWW/ (included in the
distribution) into place on core.
Modify /WWW/live/index.html on core and wait 15 minutes for it to be
updated on front.  Hack /home/www/index.html on front and observe the
result.
Nextcloud is typically installed and configured after the first
Ansible run, when core has Internet access via gate.
Until the installation directory /Nextcloud/nextcloud/ appears, the
Ansible code skips parts of the Nextcloud configuration.  The same
installation (or restoration) process used on Core is used on core to
create /Nextcloud/.  The process starts with Create /Nextcloud/,
involves Restore Nextcloud or Install Nextcloud, and runs ./inst
config core again.  When the ./inst config core command is happy with
the Nextcloud configuration on core, the administrator uses Dick's
notebook to test it, performing the following tests on dick's desktop.
With Evolution running on the member notebook dick, one-second email
delivery is expected.  Outgoing email is also tested.
At this point, dick can move abroad, from the campus Wi-Fi to the
simulated Internet.  Again, some basics are tested in a terminal.
ping -c 1 8.8.4.4        # dns.google
ping -c 1 192.168.56.1   # core
host dns.google
host core.small.private
host www
To test the ./inst pass command, the administrator logs in to core.
Finally, the administrator verifies that dick can login on core.
One more institute command is left for the administrator to exercise.
The small institute's network, as currently defined in this document,
is lacking in a number of respects.
The current network monitoring is rudimentary, and some of it is not
available on Front, yet.
The testing process described in the previous chapter is far from complete. Additional tests are needed.
The backup command has not been tested.  It needs an encrypted
partition with which to sync?  And then some way to compare that to
the originals.
The restore process has not been tested.  It might just copy Backup/
into place, restoring ownership and perhaps permissions too.  It could
also use an example.
Email access (IMAPS) on front is… difficult to test unless core's
fetchmails are disconnected, i.e. the whole campus is disconnected, so
that new email stays on front long enough to be seen.
Creating the private network from whole cloth (machines with recent
Debian installs but no institute configuration) requires temporary
addresses, routes, DNS settings, etc.: quite a bit of temporary,
manual localnet configuration just to get to the additional packages.
The strategy pursued in The Hardware is two phase: prepare the servers
on the Internet where additional packages are accessible, then connect
them to the campus facilities (the private Ethernet switch, Wi-Fi AP,
ISP), manually configure IP addresses (while the DHCP client silently
fails), and avoid names until BIND9 is configured.
The strategy of Starting With Gate concentrates on configuring Gate's
Internet connection first, ending with ansible-playbook -l core
site.yml.
A refinement of the current strategy might avoid the need to maintain
these manual steps.  It has not been done, and is left as a manual
exercise.
Front is accessible via Gate but routing from the host address
on vboxnet0
through Gate requires extensive interference with the
routes on Front and Gate, making the simulation less… similar.