: sysadm@ubuntu$ logout
: notebook$ ssh-keygen -f ~/.ssh/known_hosts -R 159.65.75.60
-The last command removes the old host key from the administrator's
-=known_hosts= file. The next SSH connection should ask to confirm the
-new host identity.
-
-The administrator then tested the password-less ssh login as well as
-the privilege escalation command.
+The last command removed the old host key from the administrator's
+=known_hosts= file. The next few commands served to test
+password-less login as well as the privilege escalation command
+~sudo~.
+
+The Droplet needed a couple additional software packages immediately.
+The ~wireguard~ package was needed to generate the Droplet's private
+key. The ~systemd-resolved~ package was installed so that the
+subsequent reboot gets ResolveD configured properly (else ~resolvectl~
+hangs, causing ~wg-quick@wg0~ to hang...). The rest are included just
+to speed up (re)testing of "prepared" test machines, e.g. prepared as
+described in [[* The Test Front Machine][The Test Front Machine]].
+
+# A similar list of packages is installed on "The Test Front Machine".
+# That list should be kept in sync with this list!
: notebook$ ssh sysadm@159.65.75.60
-: sysadm@ubuntu$ sudo head -1 /etc/shadow
-: [sudo] password for sysadm:
-: root:*:18355:0:99999:7:::
+: sysadm@ubuntu$ sudo apt install wireguard systemd-resolved \
+: unattended-upgrades postfix dovecot-imapd rsync apache2 kamailio
+
+With WireGuard™ installed, the following commands generated a new
+private key, and displayed its public key.
+
+: sysadm@ubuntu$ umask 077
+: sysadm@ubuntu$ wg genkey \
+: sysadm@ubuntu_ | sudo tee /etc/wireguard/private-key \
+: sysadm@ubuntu_ | wg pubkey
+: S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
-/After/ passing the above test, the administrator disabled root logins
-on the droplet. The last command below tested that root logins were
-indeed denied.
+The public key is copied and pasted into [[file:private/vars.yml][=private/vars.yml=]] as the
+value of ~front_wg_pubkey~ (as in the example [[pubkeys][here]]).
+
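+As a sanity check against transcription errors, the pasted string can
+be verified to decode to exactly 32 bytes (WireGuard™ keys are 32
+random bytes in base64). This check is not part of the original
+procedure; it assumes a ~base64~ that accepts ~-d~ (GNU coreutils).
+
+#+BEGIN_SRC sh
+# A WireGuard key is 44 base64 characters encoding 32 bytes.
+key="S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4="
+test "$(printf '%s' "$key" | base64 -d | wc -c)" -eq 32 && echo OK
+#+END_SRC
+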
+/After/ collecting Front's public key, the administrator disabled root
+logins on the Droplet. The last command below tested that root logins
+were indeed denied.
: sysadm@ubuntu$ sudo rm -r /root/.ssh
: sysadm@ubuntu$ logout
packages. The administrator temporarily plugged Core into a cable
modem and installed them as shown below.
-: $ sudo apt install systemd-resolved unattended-upgrades \
-: _ chrony isc-dhcp-server bind9 apache2 wireguard \
-: _ postfix dovecot-imapd fetchmail expect rsync \
-: _ gnupg openssh-server
+# A similar list of packages is installed on "The Test Core Machine".
+# That list should be kept in sync with this list!
+
+: $ sudo apt install wireguard systemd-resolved unattended-upgrades \
+: _ chrony isc-dhcp-server bind9 apache2 postfix \
+: _ dovecot-imapd fetchmail expect rsync gnupg
Manual installation of Postfix prompted for configuration type and
mail name. The answers given are listed here.
PHP modules. Installing them while Core was on a cable modem sped up
final configuration "in position" (on a frontier).
+# A similar list of packages is installed on "The Test Core Machine".
+# That list should be kept in sync with this list!
+
: $ sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
: _ php-{json,mysql,mbstring,intl,imagick,xml,zip} \
: _ libapache2-mod-php
Similarly, the NAGIOS configuration required a handful of packages
that were pre-loaded via cable modem (to test a frontier deployment).
+# A similar list of packages is installed on "The Test Core Machine".
+# That list should be kept in sync with this list!
+
: $ sudo apt install nagios4 monitoring-plugins-basic lm-sensors \
: _ nagios-nrpe-plugin
software packages. The administrator temporarily plugged Gate into a
cable modem and installed them as shown below.
+# A similar list of packages is installed on "The Test Gate Machine".
+# That list should be kept in sync with this list!
+
: $ sudo apt install systemd-resolved unattended-upgrades \
-: _ ufw postfix wireguard \
-: _ openssh-server
+: _ ufw postfix wireguard lm-sensors \
+: _ nagios-nrpe-server
The host then needed to be rebooted to get its name service working
again after ~systemd-resolved~ was installed. (Any help with this will
** Configure Core NetworkD
Core's network interface is statically configured using the
-~systemd-networkd~ configuration file =10-ether.network= installed in
-=/etc/systemd/network/=. That file provides Core's address on the
-private Ethernet, the campus name server and search domain, and the
-default route through Gate. A second route, through Core itself to
-Front, is advertised to other hosts, and is routed through a
-WireGuard™ interface connected to Front's public WireGuard™ VPN.
-
-The configuration needs the name of its main (only) Ethernet
-interface, an example of which is given here. (A clever way to
-extract that name from ~ansible_facts~ would be appreciated. The
-~ansible_default_ipv4~ fact was an empty hash at first boot on a
-simulated campus Ethernet.)
+~systemd-networkd~ configuration files =10-lan.link= and
+=10-lan.network= installed in =/etc/systemd/network/=. Those files
+statically assign Core's IP address (as well as the campus name server
+and search domain), and its default route through Gate. A second
+route, through Core itself to Front, is advertised to other hosts, and
+is routed through a WireGuard™ interface connected to Front's public
+WireGuard™ VPN.
+
+Note that the ~[Match]~ sections of the =.network= files should
+specify only a ~MACAddress~. Getting ~systemd-udevd~ to rename
+interfaces has thus far been futile (short of a reboot), so specifying
+a ~Name~ means the interface does not match, leaving it un-configured
+(until the next reboot).
+
+The configuration needs the MAC address of the primary (only) NIC, an
+example of which is given here. (A clever way to extract that address
+from ~ansible_facts~ would be appreciated. The ~ansible_default_ipv4~
+fact was an empty hash at first boot on a simulated campus Ethernet.)
#+CAPTION: [[file:private/vars.yml][=private/vars.yml=]]
#+BEGIN_SRC conf :tangle private/vars.yml
-core_ethernet: enp0s3
+core_lan_mac: 08:00:27:b3:e5:5f
#+END_SRC
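+
+The MAC address itself can be read on Core. The following sketch (not
+part of the tangled roles) prints each interface name with its
+address.
+
+#+BEGIN_SRC sh
+# Print "NAME MAC" for every network interface; the primary NIC's
+# address becomes the value of core_lan_mac.
+for dev in /sys/class/net/*; do
+    printf '%s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
+done
+#+END_SRC
+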
#+CAPTION: [[file:roles_t/core/tasks/main.yml][=roles_t/core/tasks/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
-- name: Install 10-ether.network.
+- name: Install 10-lan.link.
+ become: yes
+ copy:
+ content: |
+ [Match]
+ MACAddress={{ core_lan_mac }}
+
+ [Link]
+ Name=lan
+ dest: /etc/systemd/network/10-lan.link
+
+- name: Install 10-lan.network.
become: yes
copy:
content: |
[Match]
- Name={{ core_ethernet }}
+ MACAddress={{ core_lan_mac }}
[Network]
Address={{ core_addr_cidr }}
Gateway={{ gate_addr }}
DNS={{ core_addr }}
Domains={{ domain_priv }}
- dest: /etc/systemd/network/10-ether.network
+ dest: /etc/systemd/network/10-lan.network
+ notify: Reload networkd.
+#+END_SRC
+
+# A similar configuration is created by the test-core-prep script.
+# That script and the above configuration should be kept in sync!
+
+#+CAPTION: [[file:roles_t/core/handlers/main.yml][=roles_t/core/handlers/main.yml=]]
+#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
+
+- name: Reload networkd.
+ become: yes
+ command: networkctl reload
+ tags: actualizer
#+END_SRC
** Configure DHCP For the Private Ethernet
become: yes
lineinfile:
path: /etc/default/isc-dhcp-server
- line: INTERFACESv4="{{ core_ethernet }}"
+ line: INTERFACESv4="lan"
regexp: ^INTERFACESv4=
notify: Restart DHCP server.
#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
- name: Restart Chrony.
+ become: yes
systemd:
service: chrony
state: restarted
content: |
[Match]
MACAddress={{ gate_lan_mac }}
- Name=lan
[Network]
Address={{ gate_addr_cidr }}
Domains={{ domain_priv }}
[Route]
- Destination={{ public_vpn_net_cidr }}
+ Destination={{ public_wg_net_cidr }}
Gateway={{ core_addr }}
dest: /etc/systemd/network/10-lan.network
notify: Reload networkd.
#+END_SRC
+# A similar configuration is created by the test-gate-prep script.
+# That script and the above configuration should be kept in sync!
+
#+CAPTION: [[file:roles_t/gate/handlers/main.yml][=roles_t/gate/handlers/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/gate/handlers/main.yml :mkdirp yes
---
- name: Reload networkd.
become: yes
command: networkctl reload
+ tags: actualizer
#+END_SRC
*** Gate's ~wild~ Interface
ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
hosts:
front:
- ansible_host: 192.168.57.3
+ ansible_host: 192.168.58.3
ansible_become_password: "{{ become_front }}"
core:
ansible_host: 192.168.56.1
Most of these settings are already in =private/vars.yml=. The
following few provide the servers' public keys and ports.
+#+NAME: pubkeys
#+CAPTION: [[file:private/vars.yml][=private/vars.yml=]]
#+BEGIN_SRC conf :tangle private/vars.yml
front_wg_pubkey: S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
core_wg_pubkey: lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
#+END_SRC
-All of the private keys used in the example/test configuration are
-listed in the following table. The first three are copied to
-=/etc/wireguard/private-key= on each of the corresponding test
-machines: ~front~, ~gate~ and ~core~. The rest are installed on
-the test client to give it different personae.
-
-| Test Host | WireGuard™ Private Key |
-|---------------+----------------------------------------------|
-| ~front~ | AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms= |
-| ~gate~ | yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U= |
-| ~core~ | AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI= |
-| ~thing~ | KIwQT5eGOl9w1qOa5I+2xx5kJH3z4xdpmirS/eGdsXY= |
-| ~dick~ | WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs= |
-| ~dicks-phone~ | oG/Kou9HOBCBwHAZGypPA1cZWUL6nR6WoxBiXc/OQWQ= |
-| ~dicks-razr~ | IGNcF0VpkIBcJQAcLZ9jgRmk0SYyUr/WwSNXZoXXUWQ= |
-
** The CA Command
The next code block implements the ~CA~ sub-command, which creates a
machine at ~192.168.56.10~ pretending to be the administrator's
notebook.
-- ~vboxnet1~ :: Another Host-only network, simulating the untrusted
+- ~vboxnet1~ :: Another Host-only network, simulating the wild
Ethernet between Gate and the campus IoT (and Wi-Fi APs). It has no
- services, no DHCP, just the host at ~192.168.57.2~, simulating the
- NATed Wi-Fi network.
+ services, no DHCP, just the host at ~192.168.57.2~.
+
+- ~vboxnet2~ :: A third Host-only network, used only to directly
+ connect the host to ~front~.
In this simulation the IP address for ~front~ is not a public address
but a private address on the NAT network ~premises~. Thus ~front~ is
-not accessible to the administrator's notebook (the host). To work
-around this restriction, ~front~ gets a second network interface
-connected to the ~vboxnet1~ network and used only for ssh access from
-the host.[fn:4]
+not accessible by the host, by Ansible on the administrator's
+notebook. To work around this restriction, ~front~ gets a second
+network interface connected to the ~vboxnet2~ network. The address of
+this second interface is used by Ansible to access ~front~.[fn:4]
The networks described above are created and "started" with the
following ~VBoxManage~ commands.
VBoxManage natnetwork start --netname premises
VBoxManage hostonlyif create # vboxnet0
VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
-VBoxManage dhcpserver modify --interface=vboxnet0 --disable
VBoxManage hostonlyif create # vboxnet1
VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2
+VBoxManage hostonlyif create # vboxnet2
+VBoxManage hostonlyif ipconfig vboxnet2 --ip=192.168.58.1
#+END_SRC
-Note that the first host-only network, ~vboxnet0~, gets DHCP service
-by default, but that will interfere with the service being tested on
-~core~, so it must be explicitly disabled. Only the NAT network
-~premises~ should have a DHCP server enabled.
+Note that only the NAT network ~premises~ should have a DHCP server
+enabled.
Note also that actual ISPs and clouds will provide Gate and Front with
public network addresses. In this simulation "they" provide addresses
-on the private ~192.168.15.0/24~ network.
+on the private ~192.168.15.0/24~ NAT network.
** The Test Machines
The virtual machines are created by ~VBoxManage~ command lines in the
following sub-sections. They each start with a recent Debian release
(e.g. =debian-12.5.0-amd64-netinst.iso=) in their simulated DVD
-drives. As in [[*The Hardware][The Hardware]] preparation process being simulated, a few
-additional software packages are installed. Unlike in [[*The Hardware][The Hardware]]
-preparation, machines are moved to their final networks and /then/
-remote access is authorized. (They are not accessible via ~ssh~ on
-the VirtualBox NAT network where they first boot.)
+drives. Preparation of [[*The Hardware][The Hardware]] installed additional software
+packages and keys while the machines had Internet access. They were
+then moved to the new campus network where Ansible completed the
+configuration without Internet access.
+
+Preparation of the test machines is automated by "preparatory scripts"
+that install the same "additional software packages" and the same test
+keys given in the examples. The scripts are run on each VM while it
+is still attached to the host's NAT network and has Internet access.
+They prepare the machine to reboot on the simulated campus network
+without Internet access, ready for final configuration by Ansible and
+the first launch of services. The "move to campus" is simulated by
+shutting each VM down, executing a ~VBoxManage~ command line or two,
+and restarting.
+
+*** The Test WireGuard™ Keys <<privkeys>>
+
+All of the private keys used in the example/test configuration are
+listed here. The first three are copied to
+=/etc/wireguard/private-key= on the servers: ~front~, ~gate~ and
+~core~. The rest are installed on the test client to give it
+different personae. In actual use, private keys are generated on the
+servers and clients, and stay there. Only the public keys are
+collected (and registered with the ~./inst client~ command).
+
+| Test Host | WireGuard™ Private Key |
+|---------------+----------------------------------------------|
+| ~front~ | AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms= |
+| ~gate~ | yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U= |
+| ~core~ | AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI= |
+| ~thing~ | KIwQT5eGOl9w1qOa5I+2xx5kJH3z4xdpmirS/eGdsXY= |
+| ~dick~ | WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs= |
+| ~dicks-phone~ | oG/Kou9HOBCBwHAZGypPA1cZWUL6nR6WoxBiXc/OQWQ= |
+| ~dicks-razr~ | IGNcF0VpkIBcJQAcLZ9jgRmk0SYyUr/WwSNXZoXXUWQ= |
+
+*** Ansible Test Authorization
-Once the administrator's notebook is authorized to access the
-privileged accounts on the virtual machines, they are prepared for
-configuration by Ansible.
+Part of each machine's preparation is to authorize password-less SSH
+connections from Ansible, which will be using the public key in
+=Secret/ssh_admin/=. This is common to all machines and so is
+provided here named ~test-auth~ and used via noweb reference
+~<<test-auth>>~ in each machine's preparatory script.
+
+#+NAME: test-auth
+#+CAPTION: ~test-auth~
+#+BEGIN_SRC sh
+( cd
+ umask 077
+ if [ ! -d .ssh ]; then mkdir .ssh; fi
+ ( echo -n "ssh-rsa"
+ echo -n " AAAAB3NzaC1yc2EAAAADAQABAAABgQDXxXnqFaUq3WAmmW/P8OMm3cf"
+ echo -n "AGJoL1UC8yjbsRzt63RmusID2CvPTJfO/sbNAxDKHPBvYJqiwBY8Wh2V"
+ echo -n "BDXoO2lWAK9JOSvXMZZRmBh7Yk6+NsPSbeZ6H3DgzdmKubs4E5XEdkmO"
+ echo -n "iivyiGBWiwzDKAOqWvb60yWDDNEuHyGNznKjyL+nAOzul1hP5f23vX3e"
+ echo -n "VhTxV0zdClksvIppGsYY3EvhMxasnjvGOhECz1Pq/9PPxakY1kBKMFj8"
+ echo -n "yh75UfYJyRiUcFUVZD/dQyDMj7gtihv4ANiUAIgn94I4Gt9t8a2OiLyr"
+ echo -n "KhJAwTQrs4CA+suY+3uDcp2FuQAvuzpa2moUufNetQn9YYCpCQaio8I3"
+ echo -n "N9N5POqPGtNT/8Fv1wwWsl/T363NJma7lrtQXKgq52YYmaUNnHxPFqLP"
+ echo -n "/9ELaAKbKrXTel0ew/LyVEO6QJ6fU7lE3LYMF5DngleOpuOHyQdIJKvS"
+ echo -n "oCb7ilDuG8ekZd3ZEROhtyHlr7UcHrtmZMYjhlRc="
+ echo " A Small Institute Administrator" ) \
+ >>.ssh/authorized_keys )
+#+END_SRC
*** A Test Machine
create_vm
#+END_SRC
-Soon after starting, the machine console should show the installer's
-first prompt: to choose a system language. Installation on the small
-machines, ~front~ and ~gate~, may put the installation into "low
-memory mode", in which case the installation is textual, the system
-language is English, and the first prompt is for location. The
-appropriate responses to the prompts are given in the list below.
+Soon after starting, the machine console shows the Debian GNU/Linux
+installer menu and the default "Graphical Install" is chosen. On the
+machines with only 512MB of RAM, ~front~ and ~gate~, the installer
+switches to a text screen and warns it is using a "Low memory mode".
+The installation proceeds in English and its first prompt is for a
+location. The appropriate responses to this and subsequent prompts
+are given in the list below.
- Select a language (unless in low memory mode!)
+ Language: English - English
- Select your location
- + Country, territory or area: United States
+ + Continent or region: 9 (North America, if in low memory mode!)
+ + Country, territory or area: 4 (United States)
- Configure the keyboard
- + Keymap to use: American English
+ + Keymap to use: 1 (American English)
- Configure the network
+ Hostname: front (gate, core, etc.)
+ Domain name: small.example.org (small.private)
+ Username for your account: sysadm
+ Choose a password for the new user: fubar
- Configure the clock
- + Select your time zone: Eastern
+ + Select your time zone: 3 (Mountain)
- Partition disks
- + Partitioning method: Guided - use entire disk
- + Select disk to partition: SCSI3 (0,0,0) (sda) - ...
- + Partitioning scheme: All files in one partition
- + Finish partitioning and write changes to disk: Continue
- + Write the changes to disks? Yes
-- Install the base system
+ + Partitioning method: 1 (Guided - use entire disk)
+ + Select disk to partition: 1 (SCSI2 (0,0,0) (sda) - ...)
+ + Partitioning scheme: 1 (All files in one partition)
+ + 12 (Finish partitioning and write changes to disk ...)
+ + Write the changes to disks? 1 (Yes)
+- Installing the base system
- Configure the package manager
- + Scan extra installation media? No
- + Debian archive mirror country: United States
- + Debian archive mirror: deb.debian.org
- + HTTP proxy information (blank for none): <blank>
+ + Scan extra installation media? 2 (No)
+ + Debian archive mirror country: 62 (United States)
+ + Debian archive mirror: 1 (deb.debian.org)
+ + HTTP proxy information (blank for none): <localnet apt cache>
- Configure popularity-contest
+ Participate in the package usage survey? No
- Software selection
- + SSH server
- + standard system utilities
+ + Choose software to install: SSH server, standard system utilities
- Install the GRUB boot loader
+ Install the GRUB boot loader to your primary drive? Yes
+ Device for boot loader installation: /dev/sda (ata-VBOX...
-After the reboot, the machine's console should produce a ~login:~
-prompt. The administrator logs in here, with username ~sysadm~ and
-password ~fubar~, before continuing with the specific machine's
-preparation (below).
+After the reboot, the machine's console produces a ~login:~ prompt.
+The administrator logs in here, with username ~sysadm~ and password
+~fubar~, before continuing with the specific machine's preparation
+(below).
*** The Test Front Machine
Debian 12.5.0 (recently downloaded) in its CDROM drive. The exact
command lines were given in the previous section.
-After Debian is installed (as detailed above) ~front~ is shut down and
-its primary network interface moved to the simulated Internet, the NAT
-network ~premises~. ~front~ also gets a second network interface, on
-the host-only network ~vboxnet1~, to make it directly accessible to
-the administrator's notebook (as described in [[*The Test Networks][The Test Networks]]).
+After Debian is installed (as detailed above) and the machine
+rebooted, the administrator copies the following script to the machine
+and executes it.
-#+BEGIN_SRC sh
-VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 premises
-VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet1
+The script is copied through an intermediary: an account on the local
+network, and thus accessible both to the host and to guests on the
+host's NAT networks. If ~USER@SERVER~ is such an account, the script
+would be copied and executed as follows:
+
+: notebook$ scp private/test-front-prep USER@SERVER:
+: notebook$ scp -r Secret/ssh_front/ USER@SERVER:
+
+: sysadm@front$ scp USER@SERVER:test-front-prep ./
+: sysadm@front$ scp -r USER@SERVER:ssh_front/ ./
+: sysadm@front$ ./test-front-prep
+
+The script starts by installing additional software packages. The
+~wireguard~ package is installed so that =/etc/wireguard/= is created.
+The ~systemd-resolved~ package is installed because a reboot seems the
+only way to get name service working afterwards. As ~front~ will
+always have Internet access in the cloud, the rest of the packages are
+installed just to shorten Ansible's work later.
+
+#+CAPTION: [[file:private/test-front-prep][=private/test-front-prep=]]
+#+BEGIN_SRC sh :tangle private/test-front-prep :tangle-mode u=rwx,g=,o=
+#!/bin/bash -e
+
+sudo apt install wireguard systemd-resolved \
+ unattended-upgrades postfix dovecot-imapd rsync apache2 kamailio
#+END_SRC
-After Debian is installed and the machine rebooted, the administrator
-logs in and configures the "extra" network interface with a static IP
-address using a drop-in configuration file:
-=/etc/network/interfaces.d/eth1=.
+# A similar list of packages is installed on "The Front Machine".
+# That list should be kept in sync with this list!
-#+CAPTION: =eth1=
-#+BEGIN_SRC conf
+The Postfix installation prompts for a couple settings. The defaults,
+listed below, are fine.
+
+- General type of mail configuration: Internet Site
+- System mail name: small.example.org
+
+The script can now install the private WireGuard™ key, as well as
+Ansible's public SSH key.
+
+#+CAPTION: [[file:private/test-front-prep][=private/test-front-prep=]]
+#+BEGIN_SRC sh :tangle private/test-front-prep :noweb no-export
+
+( umask 377
+ echo "AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms=" \
+ | sudo tee /etc/wireguard/private-key >/dev/null )
+
+<<test-auth>>
+#+END_SRC
+
+Next, the "extra" network interface is configured with a static IP
+address.
+
+#+CAPTION: [[file:private/test-front-prep][=private/test-front-prep=]]
+#+BEGIN_SRC sh :tangle private/test-front-prep :tangle-mode u=rwx,g=,o=
+
+cat <<EOF | sudo tee /etc/network/interfaces.d/enp0s8 >/dev/null
auto enp0s8
iface enp0s8 inet static
- address 192.168.57.3/24
+ address 192.168.58.3/24
+EOF
#+END_SRC
-A ~sudo ifup enp0s8~ command then brings the interface up.
+Ansible expects ~front~ to use the SSH host keys in
+=Secret/ssh_front/=, so it is prepared with these keys in advance.
+(If Ansible installed them, ~front~ would change identities while
+Ansible was configuring it. Ansible would lose subsequent access
+until the administrator's =~/.ssh/known_hosts= was updated!)
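+
+(Should ~front~'s host identity change anyway, e.g. after
+re-installing Debian, the stale entry is removed just as it was for
+the Droplet. ~192.168.58.3~ is ~front~'s address on ~vboxnet2~.)
+
+: notebook$ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.58.3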
-Note that there is no pre-provisioning for ~front~, which is never
-deployed on a frontier, always in the cloud. Additional Debian
-packages are assumed to be readily available. Thus Ansible installs
-them as necessary, but first the administrator authorizes remote
-access by following the instructions in the final section: [[* Ansible Test Authorization][Ansible
-Test Authorization]].
+#+CAPTION: [[file:private/test-front-prep][=private/test-front-prep=]]
+#+BEGIN_SRC sh :tangle private/test-front-prep
+
+( cd ssh_front/etc/ssh/
+ chmod 600 ssh_host_*
+ chmod 644 ssh_host_*.pub
+ sudo cp -b ssh_host_* /etc/ssh/ )
+#+END_SRC
+
+With the preparatory script successfully executed, ~front~ is shut
+down and moved to the simulated cloud (from the default NAT network).
+
+The following ~VBoxManage~ commands effect the move, connecting the
+primary NIC to ~premises~ and a second NIC to the host-only network
+~vboxnet2~ (making it directly accessible to the administrator's
+notebook as described in [[*The Test Networks][The Test Networks]]).
+
+#+BEGIN_SRC sh
+VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 premises
+VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet2
+#+END_SRC
+
+~front~ is now prepared for configuration by Ansible.
*** The Test Gate Machine
The ~gate~ machine is created with the same amount of RAM and disk as
~front~. Assuming the ~RAM~, ~DISK~, and ~ISO~ shell variables have
-not changed, ~gate~ can be created with two commands.
+not changed, ~gate~ can be created with one command.
#+BEGIN_SRC sh
-NAME=gate
-create_vm
+NAME=gate create_vm
#+END_SRC
After Debian is installed (as detailed in [[*A Test Machine][A Test Machine]]) and the
-machine rebooted, the administrator logs in and installs several
-additional software packages.
+machine rebooted, the administrator copies the following script to the
+machine and executes it.
-# Similar lists are given in "The Gate Machine" and should be
-# kept up-to-date.
+: notebook$ scp private/test-gate-prep USER@SERVER:
-#+BEGIN_SRC sh
-sudo apt install systemd-resolved unattended-upgrades \
- ufw postfix wireguard
+: sysadm@gate$ scp USER@SERVER:test-gate-prep ./
+: sysadm@gate$ ./test-gate-prep
+
+The script starts by installing additional software packages.
+
+#+CAPTION: [[file:private/test-gate-prep][=private/test-gate-prep=]]
+#+BEGIN_SRC sh :tangle private/test-gate-prep :tangle-mode u=rwx,g=,o=
+#!/bin/bash -e
+
+sudo apt install wireguard systemd-resolved unattended-upgrades \
+ postfix ufw lm-sensors nagios-nrpe-server
#+END_SRC
-Again, the Postfix installation prompts for a couple settings. The
-defaults, listed below, are fine.
+# A similar list of packages is installed on "The Gate Machine".
+# That list should be kept in sync with this list!
+
+The Postfix installation prompts for a couple settings. The defaults,
+listed below, are fine.
- General type of mail configuration: Internet Site
- System mail name: gate.small.private
-~gate~ can then move to the campus. It is shut down before the
-following ~VBoxManage~ commands are executed. The commands disconnect
-the primary Ethernet interface from ~premises~ and connect it to
-~vboxnet0~. They also create two new interfaces, ~isp~ and ~wild~,
-connected to the simulated ISP and campus wireless access point.
+The script then installs the private WireGuard™ key, as well as
+Ansible's public SSH key.
+
+#+CAPTION: [[file:private/test-gate-prep][=private/test-gate-prep=]]
+#+BEGIN_SRC sh :tangle private/test-gate-prep :noweb no-export
+( umask 377
+ echo "yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U=" \
+ | sudo tee /etc/wireguard/private-key >/dev/null )
+
+<<test-auth>>
+#+END_SRC
+
+Next, the script configures the primary NIC with =10-lan.link= and
+=10-lan.network= files installed in =/etc/systemd/network/=. (This is
+sufficient to allow remote access by Ansible.)
+
+#+CAPTION: [[file:private/test-gate-prep][=private/test-gate-prep=]]
+#+BEGIN_SRC sh :tangle private/test-gate-prep
+
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.link >/dev/null
+[Match]
+MACAddress=08:00:27:f3:16:79
+
+[Link]
+Name=lan
+EOD
+
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.network >/dev/null
+[Match]
+MACAddress=08:00:27:f3:16:79
+
+[Network]
+Address=192.168.56.2/24
+DNS=192.168.56.1
+Domains=small.private
+EOD
+
+sudo systemctl --quiet enable systemd-networkd
+#+END_SRC
+
+With the preparatory script successfully executed, ~gate~ is shut down
+and moved to the campus network (from the default NAT network).
+
+The following ~VBoxManage~ commands effect the move, connecting the
+primary NIC to ~vboxnet0~ and creating two new interfaces, ~isp~ and
+~wild~. These are connected to the simulated ISP and the simulated
+wild Ethernet (e.g. campus wireless access points, IoT, whatnot).
#+BEGIN_SRC sh
VBoxManage modifyvm gate --mac-address1=080027f31679
| ~enp0s8~ | ~premises~ | campus ISP | ~gate_isp_mac~ |
| ~enp0s9~ | ~vboxnet1~ | campus IoT | ~gate_wild_mac~ |
-After ~gate~ boots up with its new network interfaces, the primary
-Ethernet interface is temporarily configured with an IP address.
-
-#+BEGIN_SRC sh
-sudo ip address add 192.168.56.2/24 dev enp0s3
-#+END_SRC
-
-Finally, the administrator authorizes remote access by following the
-instructions in the final section: [[* Ansible Test Authorization][Ansible Test Authorization]].
+~gate~ is now prepared for configuration by Ansible.
*** The Test Core Machine
The ~core~ machine is created with 2GiB of RAM and 6GiB of disk.
Assuming the ~ISO~ shell variable has not changed, ~core~ can be
-created with following commands.
+created with the following command.
#+BEGIN_SRC sh
-NAME=core
-RAM=2048
-DISK=6144
-create_vm
+NAME=core RAM=2048 DISK=6144 create_vm
#+END_SRC
After Debian is installed (as detailed in [[*A Test Machine][A Test Machine]]) and the
-machine rebooted, the administrator logs in and installs several
-additional software packages.
+machine rebooted, the administrator copies the following script to the
+machine and executes it.
-# Similar lists are given in "The Core Machine" and should be
-# kept up-to-date.
+: notebook$ scp private/test-core-prep USER@SERVER:
-#+BEGIN_SRC sh
-sudo apt install systemd-resolved unattended-upgrades \
- ntp isc-dhcp-server bind9 apache2 wireguard \
- postfix dovecot-imapd fetchmail expect rsync \
- gnupg
-sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
+: sysadm@core$ scp USER@SERVER:test-core-prep ./
+: sysadm@core$ ./test-core-prep
+
+The script starts by installing additional software packages.
+
+#+CAPTION: [[file:private/test-core-prep][=private/test-core-prep=]]
+#+BEGIN_SRC sh :tangle private/test-core-prep :tangle-mode u=rwx,g=,o=
+#!/bin/bash -e
+
+sudo apt install wireguard systemd-resolved unattended-upgrades \
+ chrony isc-dhcp-server bind9 apache2 postfix \
+ dovecot-imapd fetchmail expect rsync gnupg \
+ mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
php-{json,mysql,mbstring,intl,imagick,xml,zip} \
- libapache2-mod-php
-sudo apt install nagios4 monitoring-plugins-basic lm-sensors \
+ libapache2-mod-php \
+ nagios4 monitoring-plugins-basic lm-sensors \
nagios-nrpe-plugin
#+END_SRC
-Again the Postfix installation prompts for a couple settings. The
-defaults, listed below, are fine.
+# Similar lists of packages are installed on "The Core Machine".
+# Those lists should be kept in sync with this one!
+
+The Postfix installation prompts for a couple settings. The defaults,
+listed below, are fine.
- General type of mail configuration: Internet Site
- System mail name: core.small.private
-And domain name resolution may be broken after installing
-~systemd-resolved~. A reboot is often needed after the first ~apt
-install~ command above.
+The script can now install the private WireGuard™ key, as well as
+Ansible's public SSH key.
-Before shutting down, the name of the primary Ethernet interface
-should be compared to the example variable setting in
-[[file:private/vars.yml][=private/vars.yml=]]. The value assigned to ~core_ethernet~ should
-match the interface name.
+#+CAPTION: [[file:private/test-core-prep][=private/test-core-prep=]]
+#+BEGIN_SRC sh :tangle private/test-core-prep :noweb no-export
+( umask 377
+ echo "AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI=" \
+ | sudo tee /etc/wireguard/private-key >/dev/null )
-~core~ can now move to the campus. It is shut down before the
-following ~VBoxManage~ command is executed. The command connects the
-machine's NIC to ~vboxnet0~, which simulates the campus's private
-Ethernet.
-
-#+BEGIN_SRC sh
-VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
+<<test-auth>>
#+END_SRC
-After ~core~ boots up with its new network connection, its primary NIC
-is temporarily configured with an IP address.
+Next, the script configures the primary NIC with =10-lan.link= and
+=10-lan.network= files installed in =/etc/systemd/network/=.
-#+BEGIN_SRC sh
-sudo ip address add 192.168.56.1/24 dev enp0s3
-#+END_SRC
+#+CAPTION: [[file:private/test-core-prep][=private/test-core-prep=]]
+#+BEGIN_SRC sh :tangle private/test-core-prep
-Finally, the administrator authorizes remote access by following the
-instructions in the next section: [[* Ansible Test Authorization][Ansible Test Authorization]].
-
-*** Ansible Test Authorization
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.link >/dev/null
+[Match]
+MACAddress=08:00:27:b3:e5:5f
-To authorize Ansible's access to the three test machines, they must
-allow remote access to their ~sysadm~ accounts. In the following
-commands, the administrator must use IP addresses to copy the public
-key to each test machine.
+[Link]
+Name=lan
+EOD
-#+BEGIN_SRC sh
-SRC=Secret/ssh_admin/id_rsa.pub
-scp $SRC sysadm@192.168.57.3:admin_key # Front
-scp $SRC sysadm@192.168.56.2:admin_key # Gate
-scp $SRC sysadm@192.168.56.1:admin_key # Core
-#+END_SRC
+cat <<EOD | sudo tee /etc/systemd/network/10-lan.network >/dev/null
+[Match]
+MACAddress=08:00:27:b3:e5:5f
-Then the key must be installed on each machine with the following
-command line (entered at each console, or in an SSH session with
-each machine).
+[Network]
+Address=192.168.56.1/24
+Gateway=192.168.56.2
+DNS=192.168.56.1
+Domains=small.private
+EOD
-#+BEGIN_SRC sh
-( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
+sudo systemctl --quiet enable systemd-networkd
#+END_SRC
-The ~front~ machine needs a little additional preparation. Ansible
-will configure ~front~ with the host keys in =Secret/=. These should
-be installed there now so that ~front~ does not appear to change
-identities while Ansible is configuring.
+With the preparatory script successfully executed, ~core~ is shut down
+and moved to the campus network (from the default NAT network).
-First, the host keys are securely copied to ~front~ with the following
-command.
+The following ~VBoxManage~ commands effect the move, connecting the
+primary NIC to ~vboxnet0~.
#+BEGIN_SRC sh
-scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@192.168.57.3:
-#+END_SRC
-
-Then they are installed with these commands.
-
-#+BEGIN_SRC sh
-chmod 600 ssh_host_*
-chmod 644 ssh_host_*.pub
-sudo cp -b ssh_host_* /etc/ssh/
+VBoxManage modifyvm core --mac-address1=080027b3e55f
+VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
#+END_SRC
-Finally, the system administrator removes the old identity of ~front~.
-
-: ssh-keygen -f ~/.ssh/known_hosts -R 192.168.57.3
+~core~ is now prepared for configuration by Ansible.
** Configure Test Machines
ready to be configured by Ansible.
To configure the test machines, the ~./inst config~ command is
-executed and ~core~ restarted. Note that this first run should
-exercise all of the handlers, /and/ that subsequent runs probably /do
-not/.
+executed. Note that this first run should exercise all of the
+handlers, /and/ that subsequent runs probably /do not/.
** Test Basics