A Small Institute
The Ansible scripts herein configure a small institute's hosts according to their roles in the institute's network of public and private servers. The network topology allows the institute to present an expendable public face (easily wiped clean) while maintaining a secure and private campus that can function with or without the Internet.
1. Overview
This small institute has a public server on the Internet, Front, that handles the institute's email, web site, and cloud. Front is small, cheap, and expendable, contains only public information, and functions mostly as a VPN server relaying to a campus network.
The campus network is one or more machines physically connected via Ethernet (or a similarly secure medium) for private, un-encrypted communication in a core locality. One of the machines on this Ethernet is Core, the small institute's main server. Core provides a number of essential localnet services (DHCP, DNS, NTP), and a private, campus web site. It is also the home of the institute cloud and is where all of the institute's data actually reside. When the campus ISP (Internet Service Provider) is connected, a separate host, Gate, routes campus traffic to the ISP (via NAT). Through Gate, Core connects to Front making the institute email, cloud, etc. available to members off campus.
                =                                  _|||_
        =-The-Institute-=                        =-Front-=
      =  =  =  =  =  =  =                        =========
                                                     |
                              -----------------
                             (                 )
                             ( The Internet(s) )----(Hotel Wi-Fi)
                             (                 )         |
                              -----------------          +----Member's notebook off campus
                                     |
=============== | ==================================================
                |                                        Premises
          (Campus ISP)
                |
                +----Member's notebook on campus
                |
                +----(Campus Wi-Fi)
                |
============== Gate ================================================
                |                                        Private
                +----(Ethernet switch)
                       |
                       +----Core
                       +----Servers (NAS, DVR, etc.)
Members of the institute use commodity notebooks and open source desktops. When off campus, members access institute resources via the VPN on Front (via hotel Wi-Fi). When on campus, members can use the much faster and always available (despite Internet connectivity issues) VPN on Gate (via campus Wi-Fi). Members' Android phones and other devices can use the same Wi-Fi networks, VPNs (via the OpenVPN app) and services. On a desktop or by phone, at home or abroad, members can access their email and the institute's private web and cloud.
The institute email service reliably delivers messages in seconds, so it is the main mode of communication amongst the membership, which uses OpenPGP encryption to secure message content.
2. Caveats
This small institute prizes its privacy, so there is little or no accommodation for spyware (aka commercial software). The members of the institute are dedicated to refining good tools, making the best use of software that does not need nor want our hearts, our money, nor even our attention.
Unlike a commercial cloud service with redundant hardware and multiple ISPs, Gate is a real choke point. When Gate cannot reach the Internet, members abroad will not be able to reach Core, their email folders, nor the institute cloud. They can chat privately with other members abroad or consult the public web site on Front. Members on campus will have their email and cloud, but no Internet and thus no new email and no chat with members abroad. Keeping our data on campus means we can keep operating without the Internet if we are on campus.
Keeping your data secure on campus, not on the Internet, means when your campus goes up in smoke, so does your data, unless you made an off-site (or at least fire-safe!) backup copy.
Security and privacy are the focus of the network architecture and configuration, not anonymity. There is no support for Tor. The VPNs do not funnel all Internet traffic through anonymizing services. They do not try to defeat geo-fencing.
This is not a showcase of the latest technologies. It is not expected to change except slowly.
The services are intended for the SOHO (small office, home office, 4-H chapter, medical clinic, gun-running biker gang, etc.) with a small, fairly static membership. Front can be small and cheap (10USD per month) because of this assumption.
3. The Services
The small institute's network is designed to provide a number of services. Understanding how institute hosts co-operate is essential to understanding the configuration of specific hosts. This chapter covers institute services from a network wide perspective, and gets right down in its subsections to the Ansible code that enforces its policies. On first reading, those subsections should be skipped; they reference particulars first introduced in the following chapter.
3.1. The Name Service
The institute has a public domain, e.g. small.example.org
, and a
private domain, e.g. small.private
. The public has access only to
the former and, as currently configured, to only one address (A
record): Front's public IP address. Members connected to the campus,
via wire or VPN, use the campus name server which can resolve
institute private domain names like core.small.private
. If
small.private
is also used as a search domain, members can use short
names like core
.
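For example, a member on the campus Ethernet or VPN might confirm the name service (and the search domain) with a couple of ad-hoc commands. The transcript below is only a sketch; the addresses shown assume the example subnets chosen in Subnets below.

member@notebook$ host core.small.private
core.small.private has address 192.168.56.1
member@notebook$ host www
www.small.private has address 192.168.56.1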
3.2. The Email Service
Front provides the public SMTP (Simple Mail Transfer Protocol) service
that accepts email from the Internet, delivering messages addressed to
the institute's domain name, e.g. to postmaster@small.example.org
.
Its Postfix server accepts email for member accounts and any public
aliases (e.g. postmaster
). Messages are delivered to member
~/Maildir/
directories via Dovecot.
If the campus is connected to the Internet, the new messages are
quickly picked up by Core and stored in member ~/Maildir/
directories there. Securely stored on Core, members can decrypt and
sort their email using common, IMAP-based tools. (Most mail apps can
use IMAP, the Internet Message Access Protocol.)
Core transfers messages from Front using Fetchmail's --idle
option,
which instructs Fetchmail to maintain a connection to Front so that it
can (with good campus connectivity) get notifications to pick up new
email. Members of the institute typically employ email apps that work
similarly, alerting them to new email on Core. Thus members enjoy
email messages that arrive as fast as text messages (but with the
option of real, end-to-end encryption).
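The per-member Fetchmail jobs are configured by the Core role; the one-off command line below is only a sketch of the idea, with member standing in for an actual username.

core$ fetchmail --idle --ssl -p IMAP -u member small.example.org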
If the campus loses connectivity to the Internet, new email
accumulates in ~/Maildir/
directories on Front. If a member is
abroad, with Internet access, their new emails can be accessed via
Front's IMAPS (IMAP Secured [with SSL/TLS]) service, available at the
institute domain name. When the campus regains Internet connectivity,
Core will collect the new email.
Core is the campus mail hub, securely storing members' incoming emails, and relaying their outgoing emails. It is the "smarthost" for the campus. Campus machines send all outgoing email to Core, and Core's Postfix server accepts messages from any of the institute's networks.
Core delivers messages addressed to internal host names locally. For
example webmaster@test.small.private
is delivered to webmaster
on
Core. Core relays other messages to its smarthost, Front, which is
declared by the institute's SPF (Sender Policy Framework) DNS record
to be the only legitimate sender of institute emails. Thus the
Internet sees the institute's outgoing email coming from a server at
an address matching the domain's SPF record. The institute does not
sign outgoing emails per DKIM (Domain Keys Identified Mail) yet.
TXT v=spf1 ip4:159.65.75.60 -all
There are a number of configuration settings that, for
interoperability, should be in agreement on the Postfix servers and
the campus clients. Policy also requires certain settings on both
Postfix or both Dovecot servers. To ensure that the same settings are
applied on both, the shared settings are defined here and included via
noweb reference in the server configurations. For example the Postfix
setting for the maximum message size is given in a code block labeled
postfix-message-size
below and then included in both Postfix
configurations wherever <<postfix-message-size>>
appears.
3.2.1. The Postfix Configurations
The institute aims to accommodate encrypted email containing short videos, messages that can quickly exceed the default limit of 9.77MiB, so the institute uses a limit 10 times greater than the default, 100MiB. Front should always have several gigabytes free to spool a modest number (several 10s) of maximally sized messages. Furthermore a maxi-message's time in the spool is nominally a few seconds, after which it moves on to Core (the big disks). This Postfix setting should be the same throughout the institute, so that all hosts can handle maxi-messages.
postfix-message-size
- { p: message_size_limit, v: 104857600 }
Queue warning and bounce times were shortened at the institute. Email should be delivered in seconds. If it cannot be delivered in an hour, the recipient has been cut off, and a warning is appropriate. If it cannot be delivered in 4 hours, the information in the message is probably stale and further attempts to deliver it have limited and diminishing value. The sender should decide whether to continue by re-sending the bounce (or just grabbing the go-bag!).
postfix-queue-times
- { p: delay_warning_time, v: 1h }
- { p: maximal_queue_lifetime, v: 4h }
- { p: bounce_queue_lifetime, v: 4h }
The Debian default Postfix configuration enables SASL authenticated relaying and opportunistic TLS with a self-signed, "snake oil" certificate. The institute substitutes its own certificates and disables relaying (other than for the local networks).
postfix-relaying
- p: smtpd_relay_restrictions
  v: permit_mynetworks reject_unauth_destination
Dovecot is configured to store emails in each member's ~/Maildir/
.
The same instruction is given to Postfix for the belt-and-suspenders
effect.
postfix-maildir
- { p: home_mailbox, v: Maildir/ }
The complete Postfix configurations for Front and Core use these common settings as well as several host-specific settings as discussed in the respective roles below.
3.2.2. The Dovecot Configurations
The Dovecot settings on both Front and Core disable POP and require TLS.
The official documentation for Dovecot once was a Wiki but now is
https://doc.dovecot.org, yet the Wiki is still distributed in
/usr/share/doc/dovecot-core/wiki/
.
dovecot-tls
protocols = imap
ssl = required
Both servers should accept only IMAPS connections. The following
configuration keeps them from even listening at the IMAP port
(e.g. for STARTTLS
commands).
dovecot-ports
service imap-login {
  inet_listener imap {
    port = 0
  }
}
Both Dovecot servers store member email in members' local ~/Maildir/
directories.
dovecot-maildir
mail_location = maildir:~/Maildir
The complete Dovecot configurations for Front and Core use these
common settings with host specific settings for ssl_cert
and
ssl_key
.
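A quick way to check that a server honors these settings is to poke at both ports. The transcript below is hypothetical; it assumes the openssl and nc commands and the example domain name.

notebook$ openssl s_client -quiet -connect small.example.org:993
...
* OK [CAPABILITY IMAP4rev1 ...] Dovecot ready.
notebook$ nc -vz small.example.org 143
nc: connect to small.example.org port 143 (tcp) failed: Connection refused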
3.3. The Web Services
Front provides the public HTTP service that serves institute web pages
at e.g. https://small.example.org/
. The small institute initially
runs with a self-signed, "snake oil" server certificate, causing
browsers to warn of possible fraud, but this certificate is easily
replaced by one signed by a recognized authority, as discussed in The
Front Role.
The Apache2 server finds its web pages in the /home/www/
directory
tree. Pages can also come from member home directories. For
example the HTML for https://small.example.org/~member
would come
from the /home/member/Public/HTML/index.html
file.
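Thus a member with an account on Front could publish a personal page with a couple of commands like the following sketch (the file names are per the example above).

member@front$ mkdir -p ~/Public/HTML
member@front$ echo '<p>Hello from ~member.</p>' >~/Public/HTML/index.html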
The server does not run CGI scripts. This keeps Front's CPU requirements cheap. CGI scripts can be used on Core. Indeed Nextcloud on Core uses PHP and the whole LAMP (Linux, Apache, MySQL, PHP) stack.
Core provides a campus HTTP service with several virtual hosts.
These web sites can only be accessed via the campus Ethernet or an
institute VPN. In either situation Core's many private domain names
become available, e.g. www.small.private
. In many cases these
domain names can be shortened e.g. to www
. Thus the campus home
page is accessible in a dozen keystrokes: http://www/
(plus Enter).
Core's web sites:
http://www/
- is the small institute's campus web site. It serves files from the staff-writable /WWW/campus/ directory tree.

http://live/
- is a local copy of the institute's public web site. It serves the files in the /WWW/live/ directory tree, which is mirrored to Front.

http://test/
- is a test copy of the institute's public web site. It tests new web designs in the /WWW/test/ directory tree. Changes here are merged into the live tree, /WWW/live/, once they are complete and tested.

http://core/
- is the Debian default site. The institute does not munge this site, to avoid conflicts with Debian-packaged web services (e.g. Nextcloud, Zoneminder, MythTV's MythWeb).
Core runs a cron job under a system account named monkey
that
mirrors /WWW/live/
to Front's /home/www/
every 15 minutes.
Vandalism on Front should not be possible, but if it happens Monkey
will automatically wipe it within 15 minutes.
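The actual job is installed by the Core role; the crontab entry below is only a sketch of the idea (the exact rsync options and the use of Monkey's SSH key are covered there).

*/15 * * * * rsync -az --delete /WWW/live/ monkey@small.example.org:/home/www/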
3.4. The Cloud Service
Core runs Nextcloud to provide a private institute cloud at
http://core.small.private/nextcloud/
. It is managed manually per
The Nextcloud Server Administration Guide. The code and data,
including especially database dumps, are stored in /Nextcloud/
which
is included in Core's backup procedure as described in Backups. The
default Apache2 configuration expects to find the web scripts in
/var/www/nextcloud/
, so the institute symbolically links this to
/Nextcloud/nextcloud/
.
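Creating that link is part of the manual installation, e.g. (a sketch, assuming the paths above):

sysadm@core$ sudo ln -s /Nextcloud/nextcloud /var/www/nextcloud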
Note that authenticating to a non-HTTPS URL like
http://core.small.private/
is often called out as insecure, but the
domain name is private and the service is on a directly connected
private network.
3.5. The VPN Services
The institute's public and campus VPNs have many common configuration
options that are discussed here. These are included, with example
certificates and network addresses, in the complete server
configurations of The Front Role and The Gate Role, as well as the
matching client configurations in The Core Role and the .ovpn
files
generated by The Client Command. The configurations are based on the
documentation for OpenVPN v2.4: the openvpn(8)
manual page and this
web page.
3.5.1. The VPN Configuration Options
The institute VPNs use UDP on a subnet topology (rather than
point-to-point) with "split tunneling". The UDP support accommodates
real-time, connection-less protocols. The split tunneling is for
efficiency with frontier bandwidth. The subnet topology, with the
client-to-client
option, allows members to "talk" to each other on
the VPN subnets using any (experimental) protocol.
openvpn-dev-mode
dev-type tun
dev ovpn
topology subnet
client-to-client
A keepalive
option is included on the servers so that clients detect
an unreachable server and reset the TLS session. The option's default
is doubled to 2 minutes out of respect for frontier service
interruptions.
openvpn-keepalive
keepalive 10 120
As mentioned in The Name Service, the institute uses a campus name server. OpenVPN is instructed to push its address and the campus search domain.
openvpn-dns
push "dhcp-option DOMAIN {{ domain_priv }}" push "dhcp-option DNS {{ core_addr }}"
The institute does not put the OpenVPN server in a chroot
jail, but
it does drop privileges to run as user nobody:nobody
. The
persist-
options are needed because nobody
cannot open the tunnel
device nor the key files.
openvpn-drop-priv
user nobody
group nogroup
persist-key
persist-tun
The institute does a little additional hardening, sacrificing some
compatibility with out-of-date clients. Such clients are generally
frowned upon at the institute. Here cipher
is set to AES-256-GCM
,
the default for OpenVPN v2.4, and auth
is upped to SHA256
from
SHA1
.
openvpn-crypt
cipher AES-256-GCM
auth SHA256
Finally, a max-clients
limit was chosen to frustrate flooding while
accommodating a few members with a handful of devices each.
openvpn-max
max-clients 20
The institute's servers are lightly loaded so a few debugging options
are appropriate. To help recognize host addresses in the logs, and
support direct client-to-client communication, host IP addresses are
made "persistent" in the ipp.txt
file. The server's status is
periodically written to the openvpn-status.log
and verbosity is
raised from the default level 1 to level 3 (just short of a deluge).
openvpn-debug
ifconfig-pool-persist ipp.txt
status openvpn-status.log
verb 3
3.6. Accounts
A small institute has just a handful of members. For simplicity (and
thus security) static configuration files are preferred over complex
account management systems, LDAP, Active Directory, and the like. The
Ansible scripts configure the same set of user accounts on Core and
Front. The Institute Commands (e.g. ./inst new dick
) capture the
processes of enrolling, modifying and retiring members of the
institute. They update the administrator's membership roll, and run
Ansible to create (and disable) accounts on Core, Front, Nextcloud,
etc.
The small institute does not use disk quotas nor access control lists. It relies on Unix group membership and permissions. It is Debian based and thus uses "user groups" by default. Sharing is typically accomplished via the campus cloud and the resulting desktop files can all be private (readable and writable only by the owner) by default.
3.6.1. The Administration Accounts
The institute avoids the use of the root
account (uid 0
) because
it is exempt from the normal Unix permissions checking. The sudo
command is used to consciously (conscientiously!) run specific scripts
and programs as root
. When installation of a Debian OS leaves the
host with no user accounts, just the root
account, the next step is
to create a system administrator's account named sysadm
and to give
it permission to use the sudo
command (e.g. as described in The
Front Machine). When installation prompts for the name of an
initial, privileged user account the same name is given (e.g. as
described in The Core Machine). Installation may not prompt and
still create an initial user account with a distribution specific name
(e.g. pi
). Any name can be used as long as it is provided as the
value of ansible_user
in hosts
. Its password is specified by a
vault-encrypted variable in the Secret/become.yml
file. (The
hosts
and Secret/become.yml
files are described in The Ansible
Configuration.)
3.6.2. The Monkey Accounts
The institute's Core uses a special account named monkey
to run
background jobs with limited privileges. One of those jobs is to keep
the public web site mirror up-to-date, so a corresponding monkey
account is created on Front as well.
3.7. Keys
The institute keeps its "master secrets" in an encrypted
volume on an off-line hard drive, e.g. a LUKS (Linux Unified Key
Setup) format partition on a USB pen/stick. The Secret/
sub-directory is actually a symbolic link to this partition's
automatic mount point, e.g. /media/sysadm/ADE7-F866/
. Unless this
volume is mounted (unlocked) at Secret/
, none of the ./inst
commands will work.
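For example, the administrator might create the symbolic link once and check that the volume is mounted before running any ./inst commands (the mount point shown is just the example above).

notebook$ ln -s /media/sysadm/ADE7-F866 Secret
notebook$ mountpoint /media/sysadm/ADE7-F866
/media/sysadm/ADE7-F866 is a mountpoint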
Chief among the institute's master secrets is the SSH key authorized
to access privileged accounts on all of the institute servers. It
is stored in Secret/ssh_admin/id_rsa
. The complete list of the
institute's SSH keys:
Secret/ssh_admin/
- The SSH key pair for A Small Institute Administrator.
Secret/ssh_monkey/
- The key pair used by Monkey to update the website on Front (and other unprivileged tasks).
Secret/ssh_front/
- The host key pair used by Front to authenticate itself. The automatically generated key pair is not used. (Thus Core's configuration does not depend on Front's.)
The institute uses a number of X.509 certificates to authenticate VPN
clients and servers. They are created by the EasyRSA Certificate
Authority stored in Secret/CA/
.
Secret/CA/pki/ca.crt
- The institute CA certificate, used to sign the other certificates.
Secret/CA/pki/issued/small.example.org.crt
- The public Apache, Postfix, and OpenVPN servers on Front.
Secret/CA/pki/issued/gate.small.private.crt
- The campus OpenVPN server on Gate.
Secret/CA/pki/issued/core.small.private.crt
- The campus Apache (thus Nextcloud), and Dovecot-IMAPd servers.
Secret/CA/pki/issued/core.crt
- Core's client certificate, by which it authenticates to Front.
The ./inst client
command creates client certificates and keys, and
can generate OpenVPN configuration (.ovpn
) files for Android and
Debian. The command updates the institute membership roll, requiring
the member's username, keeping a list of the member's clients (in case
all authorizations need to be revoked quickly). The list of client
certificates that have been revoked is stored along with the
membership roll (in private/members.yml
as the value of revoked
).
Finally, the institute uses an OpenPGP key to secure sensitive emails (containing passwords or private keys) to Core.
Secret/root.gnupg/
- The "home directory" used to create the public/secret key pair.
Secret/root-pub.pem
- The ASCII armored OpenPGP public key for e.g. root@core.small.private.

Secret/root-sec.pem
- The ASCII armored OpenPGP secret key.
The institute administrator updates a couple encrypted copies of this drive after enrolling new members, changing a password, issuing VPN credentials, etc.
rsync -a Secret/ Secret2/
rsync -a Secret/ Secret3/
This is out of consideration for the fragility of USB drives, and the importance of a certain SSH private key, without which the administrator will have to login with a password, hopefully stored in the administrator's password keep, to install a new SSH key.
3.8. Backups
The small institute backs up its data, but not so much so that nothing
can be deleted. It actually mirrors user directories (/home/
), the
web sites (/WWW/
), Nextcloud (/Nextcloud/
), and any capitalized
root directory entry, to a large off-line disk. Where incremental
backups are desired, a CMS like git
is used.
Off-site backups are not a priority due to cost and trust issues, and the low return on the investment given the minuscule risk of a catastrophe big enough to obliterate all local copies. And the institute's public contributions are typically replicated in public code repositories like GitHub and GNU Savannah.
The following example /usr/local/sbin/backup
script pauses
Nextcloud, dumps its database, rsyncs /home/
, /WWW/
and
/Nextcloud/
to a /backup/
volume (mounting and unmounting
/backup/
if necessary), then continues Nextcloud. The script
assumes the backup volume is labeled Backup
and formatted per LUKS
version 2.
Given the -n
flag, the script does a "pre-sync" which does not pause
Nextcloud nor dump its DB. A pre-sync gets the big file (video)
copies done while Nextcloud continues to run. A follow-up sudo
backup
(without -n
) produces the complete copy (with all the
files mentioned in the Nextcloud database dump).
private/backup
#!/bin/bash -e
#
# DO NOT EDIT. Maintained (will be replaced) by Ansible.
#
# sudo backup [-n]

if [ `id -u` != "0" ]
then
    echo "This script must be run as root."
    exit 1
fi

if [ "$1" = "-n" ]
then
    presync=yes
    shift
fi

if [ "$#" != "0" ]
then
    echo "usage: $0 [-n]"
    exit 2
fi

function cleanup () {
    sleep 2
    finish
}

trap cleanup SIGHUP SIGINT SIGQUIT SIGPIPE SIGTERM

function start () {

    if ! mountpoint -q /backup/
    then
        echo "Mounting /backup/."
        cryptsetup luksOpen /dev/disk/by-partlabel/Backup backup
        mount /dev/mapper/backup /backup
        mounted=indeed
    else
        echo "Found /backup/ already mounted."
        mounted=
    fi

    if [ ! -d /backup/home ]
    then
        echo "The backup device should be mounted at /backup/"
        echo "yet there is no /backup/home/ directory."
        exit 2
    fi

    if [ ! $presync ]
    then
        echo "Putting nextcloud into maintenance mode."
        ( cd /Nextcloud/nextcloud/
          sudo -u www-data php occ maintenance:mode --on &>/dev/null )

        echo "Dumping nextcloud database."
        ( cd /Nextcloud/
          umask 07
          BAK=`date +"%Y%m%d"`-dbbackup.bak.gz
          CNF=/Nextcloud/dbbackup.cnf
          mysqldump --defaults-file=$CNF nextcloud | gzip > $BAK
          chmod 440 $BAK )
    fi
}

function finish () {

    if [ ! $presync ]
    then
        echo "Putting nextcloud back into service."
        ( cd /Nextcloud/nextcloud/
          sudo -u www-data php occ maintenance:mode --off &>/dev/null )
    fi

    if [ $mounted ]
    then
        echo "Unmounting /backup/."
        umount /backup
        cryptsetup luksClose backup
        mounted=
    fi

    echo "Done."
    echo "The backup device can be safely disconnected."
}

start

for D in /home /[A-Z]*; do
    echo "Updating /backup$D/."
    ionice --class Idle --ignore \
        rsync -av --delete --exclude=.NoBackups $D/ /backup$D/
done

finish
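A typical backup session on Core is then just two commands, the first (the pre-sync) optional, with output elided here:

sysadm@core$ sudo backup -n
...
sysadm@core$ sudo backup
...
Done.
The backup device can be safely disconnected.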
4. The Particulars
This chapter introduces Ansible variables intended to simplify
changes, like customization for another institute's particulars. The
variables are separated into public information (e.g. an institute's
name) or private information (e.g. a network interface address), and
stored in separate files: public/vars.yml
and private/vars.yml
.
The example settings in this document configure VirtualBox VMs as described in the Testing chapter. For more information about how a small institute turns the example Ansible code into a working Ansible configuration, see chapter The Ansible Configuration.
4.1. Generic Particulars
The small institute's domain name is used quite frequently in the
Ansible code. The example used here is small.example.org
. The
following line sets domain_name
to that value. (Ansible will then
replace {{ domain_name }}
in the code with small.example.org
.)
public/vars.yml
---
domain_name: small.example.org
The institute's private domain is treated as sensitive information,
and so is "tangled" into the example file private/vars.yml
rather
than public/vars.yml
. The example file is used for testing, and
serves as the template for an actual, private, private/vars.yml
file
that customizes this Ansible code for an actual, private, small
institute.
The institute's private domain name should end with one of the
top-level domains set aside for this purpose: .intranet
,
.internal
, .private
, .corp
, .home
or .lan
.1 It is
hoped that doing so will increase the chances that some abomination
like DNS-over-HTTPS will pass us by.
private/vars.yml
---
domain_priv: small.private
4.2. Subnets
The small institute uses a private Ethernet, two VPNs, and an
untrusted Ethernet (for the campus Wi-Fi access point). Each must
have a unique private network address. Hosts using the VPNs are also
using foreign private networks, e.g. a notebook on a hotel Wi-Fi. To
better the chances that all of these networks get unique addresses,
the small institute uses addresses in the IANA's (Internet Assigned
Numbers Authority's) private network address ranges except the
192.168
address range already in widespread use. This still leaves
69,632 8-bit networks (65,536 /24 subnets in 10.0.0.0/8 plus 4,096 in
172.16.0.0/12, each addressing up to 254 hosts) from which to
choose. The following table lists their CIDRs (subnet numbers in
Classless Inter-Domain Routing notation) in abbreviated form (eliding
69,624 rows).
Subnet CIDR | Host Addresses |
---|---|
10.0.0.0/24 | 10.0.0.1 – 10.0.0.254 |
10.0.1.0/24 | 10.0.1.1 – 10.0.1.254 |
10.0.2.0/24 | 10.0.2.1 – 10.0.2.254 |
… | … |
10.255.255.0/24 | 10.255.255.1 – 10.255.255.254 |
172.16.0.0/24 | 172.16.0.1 – 172.16.0.254 |
172.16.1.0/24 | 172.16.1.1 – 172.16.1.254 |
172.16.2.0/24 | 172.16.2.1 – 172.16.2.254 |
… | … |
172.31.255.0/24 | 172.31.255.1 – 172.31.255.254 |
The following Emacs Lisp randomly chooses one of these 8 bit subnets. The small institute used it to pick its four private subnets. An example result follows the code.
(let ((bytes
       (let ((i (random (+ 256 16))))
         (if (< i 256)
             (list 10 i (1+ (random 254)))
           (list 172 (+ 16 (- i 256)) (1+ (random 254)))))))
  (format "%d.%d.%d.0/24"
          (car bytes) (cadr bytes) (caddr bytes)))
=> 10.62.17.0/24
The four private networks are named and given example CIDRs in the
code block below. The small institute treats these addresses as
sensitive information so again the code block below "tangles" into
private/vars.yml
rather than public/vars.yml
. Two of the
addresses are in 192.168
subnets because they are part of a test
configuration using mostly-default VirtualBoxes (described here).
private/vars.yml
private_net_cidr: 192.168.56.0/24
public_vpn_net_cidr: 10.177.86.0/24
campus_vpn_net_cidr: 10.84.138.0/24
gate_wifi_net_cidr: 192.168.57.0/24
The network addresses are needed in several additional formats, e.g.
network address and subnet mask (10.84.138.0 255.255.255.0
). The
following boilerplate uses Ansible's ipaddr
filter to set several
corresponding variables, each with an appropriate suffix,
e.g. _net_and_mask
rather than _net_cidr
.
private/vars.yml
private_net: "{{ private_net_cidr | ansible.utils.ipaddr('network') }}" private_net_mask: "{{ private_net_cidr | ansible.utils.ipaddr('netmask') }}" private_net_and_mask: "{{ private_net }} {{ private_net_mask }}" public_vpn_net: "{{ public_vpn_net_cidr | ansible.utils.ipaddr('network') }}" public_vpn_net_mask: "{{ public_vpn_net_cidr | ansible.utils.ipaddr('netmask') }}" public_vpn_net_and_mask: "{{ public_vpn_net }} {{ public_vpn_net_mask }}" campus_vpn_net: "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('network') }}" campus_vpn_net_mask: "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('netmask') }}" campus_vpn_net_and_mask: "{{ campus_vpn_net }} {{ campus_vpn_net_mask }}" gate_wifi_net: "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('network') }}" gate_wifi_net_mask: "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('netmask') }}" gate_wifi_net_and_mask: "{{ gate_wifi_net }} {{ gate_wifi_net_mask }}" gate_wifi_broadcast: "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('broadcast') }}"
The institute prefers to configure its services with IP addresses rather than domain names, and one of the most important for secure and reliable operation is Front's public IP address known to the world by the institute's Internet domain name.
public/vars.yml
front_addr: 192.168.15.5
The example address is a private network address because the example
configuration is intended to run in a test jig made up of VirtualBox
virtual machines and networks, and the VirtualBox user manual uses
192.168.15.0
in its example configuration of a "NAT Network"
(simulating Front's ISP's network).
Finally, five host addresses are needed frequently in the Ansible
code. The first two are Core's and Gate's addresses on the private
Ethernet. The next two are Gate's and the campus Wi-Fi's addresses on
the Gate-WiFi subnet, the tiny Ethernet (gate_wifi_net
) between Gate
and the (untrusted) campus Wi-Fi access point. The last is Front's
address on the public VPN, perversely called front_private_addr
.
The following code block picks the obvious IP addresses for Core
(host 1) and Gate (host 2).
private/vars.yml
core_addr_cidr: "{{ private_net_cidr | ansible.utils.ipaddr('1') }}" gate_addr_cidr: "{{ private_net_cidr | ansible.utils.ipaddr('2') }}" gate_wifi_addr_cidr: "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('1') }}" wifi_wan_addr_cidr: "{{ gate_wifi_net_cidr | ansible.utils.ipaddr('2') }}" front_private_addr_cidr: "{{ public_vpn_net_cidr | ansible.utils.ipaddr('1') }}" core_addr: "{{ core_addr_cidr | ansible.utils.ipaddr('address') }}" gate_addr: "{{ gate_addr_cidr | ansible.utils.ipaddr('address') }}" gate_wifi_addr: "{{ gate_wifi_addr_cidr | ansible.utils.ipaddr('address') }}" wifi_wan_addr: "{{ wifi_wan_addr_cidr | ansible.utils.ipaddr('address') }}" front_private_addr: "{{ front_private_addr_cidr | ansible.utils.ipaddr('address') }}"
5. The Hardware
The small institute's network was built by its system administrator using Ansible on a trusted notebook. The Ansible configuration and scripts were generated by "tangling" the Ansible code included here. (The Ansible Configuration describes how to do this.) The following sections describe how Front, Gate and Core were prepared for Ansible.
5.1. The Front Machine
Front is the small institute's public facing server, a virtual machine on the Internets. It needs only as much disk as required by the institute's public web site. Often the cheapest offering (4GB RAM, 1 core, 20GB disk) is sufficient. The provider should make it easy and fast to (re)initialize the machine to a factory fresh Debian Server, and install additional Debian software packages. Indeed it should be possible to quickly re-provision a new Front machine from a frontier Internet café using just the administrator's notebook.
5.1.1. A Digital Ocean Droplet
The following example prepared a new front on a Digital Ocean droplet.
The institute administrator opened an account at Digital Ocean,
registered an ssh key, and used a Digital Ocean control panel to
create a new machine (again, one of the cheapest, smallest available)
with Ubuntu Server 20.04LTS installed. Once created, the machine and
its IP address (159.65.75.60
) appeared on the panel. Using that
address, the administrator logged into the new machine with ssh
.
On the administrator's notebook (in a terminal):
notebook$ ssh root@159.65.75.60
root@ubuntu#
The freshly created Digital Ocean droplet came with just one account,
root
, but the small institute avoids remote access to the "super
user" account (per the policy in The Administration Accounts), so the
administrator created a sysadm
account with the ability to request
escalated privileges via the sudo
command.
root@ubuntu# adduser sysadm
...
New password: givitysticangout
Retype new password: givitysticangout
...
Full Name []: System Administrator
...
Is the information correct? [Y/n]
root@ubuntu# adduser sysadm sudo
root@ubuntu# logout
notebook$
The password was generated by gpw
, saved in the administrator's
password keep, and later added to Secret/become.yml
as shown below.
(Producing a working Ansible configuration with Secret/become.yml
file is described in The Ansible Configuration.)
notebook$ gpw 1 16
givitysticangout
notebook$ echo -n "become_front: " >>Secret/become.yml
notebook$ ansible-vault encrypt_string givitysticangout \
notebook_ >>Secret/become.yml
After creating the sysadm
account on the droplet, the administrator
concatenated a personal public ssh key and the key found in
Secret/ssh_admin/
(created by The CA Command) into an admin_keys
file, copied it to the droplet, and installed it as the
authorized_keys
for sysadm
.
notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
notebook_ > admin_keys
notebook$ scp admin_keys sysadm@159.65.75.60:
The authenticity of host '159.65.75.60' can't be established.
....
Are you sure you want to continue connecting (...)? yes
...
sysadm@159.65.75.60's password: givitysticangout
notebook$ ssh sysadm@159.65.75.60
sysadm@159.65.75.60's password: givitysticangout
sysadm@ubuntu$ ( umask 077; mkdir .ssh; \
sysadm@ubuntu_   cp admin_keys .ssh/authorized_keys; \
sysadm@ubuntu_   rm admin_keys )
sysadm@ubuntu$ logout
notebook$ rm admin_keys
notebook$
The Ansible configuration expects certain host keys on the new front.
The administrator should install them now, and deal with the machine's
change of SSH identity. The following commands copied the host keys
in Secret/ssh_front/
to the droplet and restarted the SSH server.
notebook$ scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@159.65.75.60:
notebook$ ssh sysadm@159.65.75.60
sysadm@ubuntu$ chmod 600 ssh_host_*
sysadm@ubuntu$ chmod 644 ssh_host_*.pub
sysadm@ubuntu$ sudo cp -b ssh_host_* /etc/ssh/
sysadm@ubuntu$ sudo systemctl restart ssh
sysadm@ubuntu$ logout
notebook$ ssh-keygen -f ~/.ssh/known_hosts -R 159.65.75.60
The last command removes the old host key from the administrator's
known_hosts
file. The next SSH connection should ask to confirm the
new host identity.
The administrator then tested the password-less ssh login as well as the privilege escalation command.
notebook$ ssh sysadm@159.65.75.60
sysadm@ubuntu$ sudo head -1 /etc/shadow
[sudo] password for sysadm:
root:*:18355:0:99999:7:::
After passing the above test, the administrator disabled root logins on the droplet. The last command below tested that root logins were indeed denied.
sysadm@ubuntu$ sudo rm -r /root/.ssh
sysadm@ubuntu$ logout
notebook$ ssh root@159.65.75.60
root@159.65.75.60: Permission denied (publickey).
notebook$
At this point the droplet was ready for configuration by Ansible.
Later, provisioned with all of Front's services and tested, the
institute's domain name was changed, making 159.65.75.60
its new
address.
5.2. The Core Machine
Core is the small institute's private file, email, cloud and whatnot server. It should have some serious horsepower (RAM, cores, GHz) and storage (hundreds of gigabytes). An old desktop system might be sufficient and if later it proves it is not, moving Core to new hardware is "easy" and good practice. It is also straightforward to move the heaviest workloads (storage, cloud, internal web sites) to additional machines.
Core need not have a desktop, and will probably be more reliable if it is not also playing games. It will run continuously 24/7 and will benefit from a UPS (uninterruptible power supply). Its file system and services are critical.
The following example prepared a new core on a PC with Debian 11
freshly installed. During installation, the machine was named core
,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm
was created (per the policy in
The Administration Accounts).
New password: oingstramextedil
Retype new password: oingstramextedil
...
Full Name []: System Administrator
...
Is the information correct? [Y/n]
The password was generated by gpw
, saved in the administrator's
password keep, and later added to Secret/become.yml
as shown below.
(Producing a working Ansible configuration with Secret/become.yml
file is described in The Ansible Configuration.)
notebook$ gpw 1 16
oingstramextedil
notebook$ echo -n "become_core: " >>Secret/become.yml
notebook$ ansible-vault encrypt_string oingstramextedil \
notebook_ >>Secret/become.yml
With Debian freshly installed, Core needed several additional software packages. The administrator temporarily plugged Core into a cable modem and installed them as shown below.
$ sudo apt install netplan.io systemd-resolved unattended-upgrades \
_ ntp isc-dhcp-server bind9 apache2 openvpn \
_ postfix dovecot-imapd fetchmail expect rsync \
_ gnupg openssh-server
The Nextcloud configuration requires Apache2, MariaDB and a number of PHP modules. Installing them while Core was on a cable modem sped up final configuration "in position" (on a frontier).
$ sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp} \
_ php-{json,mysql,mbstring,intl,imagick,xml,zip} \
_ libapache2-mod-php
Similarly, the NAGIOS configuration requires a handful of packages that were pre-loaded via cable modem (to test a frontier deployment).
$ sudo apt install nagios4 monitoring-plugins-basic lm-sensors \
_ nagios-nrpe-plugin
Next, the administrator concatenated a personal public ssh key and the
key found in Secret/ssh_admin/
(created by The CA Command) into an
admin_keys
file, copied it to Core, and installed it as the
authorized_keys
for sysadm
.
notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
notebook_ > admin_keys
notebook$ scp admin_keys sysadm@core.lan:
The authenticity of host 'core.lan' can't be established.
....
Are you sure you want to continue connecting (...)? yes
...
sysadm@core.lan's password: oingstramextedil
notebook$ ssh sysadm@core.lan
sysadm@core.lan's password: oingstramextedil
sysadm@core$ ( umask 077; mkdir .ssh; \
sysadm@core_   cp admin_keys .ssh/authorized_keys )
sysadm@core$ rm admin_keys
sysadm@core$ logout
notebook$ rm admin_keys
notebook$
Note that the name core.lan
should be known to the cable modem's DNS
service. An IP address might be used instead, discovered with an ip
a
command on Core.
Now Core no longer needed the Internets so it was disconnected from the cable modem and connected to the campus Ethernet switch. Its primary Ethernet interface was temporarily (manually) configured with a new, private IP address and a default route.
In the example command lines below, the address 10.227.248.1
was
generated by the random subnet address picking procedure described in
Subnets, and is named core_addr
in the Ansible code. The second
address, 10.227.248.2
, is the corresponding address for Gate's
Ethernet interface, and is named gate_addr
in the Ansible
code.
sysadm@core$ sudo ip address add 10.227.248.1 dev enp82s0
sysadm@core$ sudo ip route add default via 10.227.248.2 dev enp82s0
At this point Core was ready for provisioning with Ansible.
5.3. The Gate Machine
Gate is the small institute's route to the Internet, and the campus Wi-Fi's route to the private Ethernet. It has three network interfaces.
lan
- is its main Ethernet interface, connected to the campus's private Ethernet switch.

wifi
- is its second Ethernet interface, connected to the campus Wi-Fi access point's WAN Ethernet interface (with a cross-over cable).

isp
- is its third network interface, connected to the campus ISP. This could be an Ethernet device connected to a cable modem. It could be a USB port tethered to a phone, a USB-Ethernet adapter, or a wireless adapter connected to a campground Wi-Fi access point, etc.
=============== | ==================================================
                |                                        Premises
          (Campus ISP)
                |
                +----Member's notebook on campus
                |
                +----(Campus Wi-Fi)
                |
============== Gate ================================================
                |                                        Private
                +----Ethernet switch
5.3.1. Alternate Gate Topology
While Gate and Core really need to be separate machines for security reasons, the campus Wi-Fi and the ISP's Wi-Fi can be the same machine. This avoids the need for a second Wi-Fi access point and leads to the following topology.
=============== | ==================================================
                |                                        Premises
          (House ISP)
       (House Wi-Fi)-----------Member's notebook on campus
    (House Ethernet)
                |
============== Gate ================================================
                |                                        Private
                +----Ethernet switch
In this case Gate has two interfaces and there is no Gate-WiFi subnet.
Support for this "alternate" topology is planned but not yet implemented. Like the original topology, it should require no changes to a standard cable modem's default configuration (assuming its Ethernet and Wi-Fi clients are allowed to communicate).
5.3.2. Original Gate Topology
The Ansible code in this document is somewhat dependent on the physical network shown in the Overview wherein Gate has three network interfaces.
The following example prepared a new gate on a PC with Debian 11
freshly installed. During installation, the machine was named gate
,
no desktop or server software was installed, no root password was set,
and a privileged account named sysadm
was created (per the policy in
The Administration Accounts).
New password: icismassssadestm
Retype new password: icismassssadestm
...
Full Name []: System Administrator
...
Is the information correct? [Y/n]
The password was generated by gpw
, saved in the administrator's
password keep, and later added to Secret/become.yml
as shown below.
(Producing a working Ansible configuration with Secret/become.yml
file is described in The Ansible Configuration.)
notebook$ gpw 1 16
icismassssadestm
notebook$ echo -n "become_gate: " >>Secret/become.yml
notebook$ ansible-vault encrypt_string icismassssadestm \
notebook_ >>Secret/become.yml
With Debian freshly installed, Gate needed a couple additional software packages. The administrator temporarily plugged Gate into a cable modem and installed them as shown below.
$ sudo apt install netplan.io systemd-resolved unattended-upgrades \
_ ufw isc-dhcp-server postfix openvpn \
_ openssh-server
Next, the administrator concatenated a personal public ssh key and the
key found in Secret/ssh_admin/
(created by The CA Command) into an
admin_keys
file, copied it to Gate, and installed it as the
authorized_keys
for sysadm
.
notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
notebook_ > admin_keys
notebook$ scp admin_keys sysadm@gate.lan:
The authenticity of host 'gate.lan' can't be established.
....
Are you sure you want to continue connecting (...)? yes
...
sysadm@gate.lan's password: icismassssadestm
notebook$ ssh sysadm@gate.lan
sysadm@gate.lan's password: icismassssadestm
sysadm@gate$ ( umask 077; mkdir .ssh; \
sysadm@gate_   cp admin_keys .ssh/authorized_keys )
sysadm@gate$ rm admin_keys
sysadm@gate$ logout
notebook$ rm admin_keys
notebook$
Note that the name gate.lan
should be known to the cable modem's DNS
service. An IP address might be used instead, discovered with an ip
a
command on Gate.
Now Gate no longer needed the Internets so it was disconnected from the cable modem and connected to the campus Ethernet switch. Its primary Ethernet interface was temporarily (manually) configured with a new, private IP address.
In the example command lines below, the address 10.227.248.2
was
generated by the random subnet address picking procedure described in
Subnets, and is named gate_addr
in the Ansible code.
$ sudo ip address add 10.227.248.2 dev eth0
Gate was also connected to the USB Ethernet dongles cabled to the
campus Wi-Fi access point and the campus ISP. The three network
adapters are known by their MAC addresses, the values of the variables
gate_lan_mac
, gate_wifi_mac
, and gate_isp_mac
. (For more
information, see the Gate role's Configure Netplan task.)
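The MAC addresses themselves can be read off a brief ip link listing on Gate; the interface names and addresses below are of course only hypothetical examples.

sysadm@gate$ ip -br link
lo               UNKNOWN  00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
enp82s0          UP       94:c6:91:67:4a:42 <BROADCAST,MULTICAST,UP,LOWER_UP>
enx0050b61e7861  UP       00:50:b6:1e:78:61 <BROADCAST,MULTICAST,UP,LOWER_UP>
enx0050b61e7862  UP       00:50:b6:1e:78:62 <BROADCAST,MULTICAST,UP,LOWER_UP>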
At this point Gate was ready for provisioning with Ansible.
6. The All Role
The all
role contains tasks that are executed on all of the
institute's servers. At the moment there are just a few.
6.1. Include Particulars
The all
role's tasks contain a reference to a common institute
particular, the institute's domain_name
, a variable found in the
public/vars.yml
file. Thus the first task of the all
role is to
include the variables defined in this file (described in The
Particulars). The code block below is the first to tangle into
roles/all/tasks/main.yml
.
roles/all/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml
  tags: accounts
6.2. Enable Systemd Resolved
The systemd-networkd
and systemd-resolved
service units are not
enabled by default in Debian, but are the default in Ubuntu. The
institute attempts to make use of their link-local name resolution, so
they are enabled on all institute hosts.
The /usr/share/doc/systemd/README.Debian.gz
file recommends both
services be enabled and /etc/resolv.conf
be replaced with a
symbolic link to /run/systemd/resolve/resolv.conf
. The institute
follows these recommendations (and not the suggestion to enable
"persistent logging", yet). In Debian 12 there is a
systemd-resolved
package that symbolically links /etc/resolv.conf
(and provides /lib/systemd/systemd-resolved
, formerly part of the
systemd
package).
roles_t/all/tasks/main.yml
- name: Install systemd-resolved.
  become: yes
  apt: pkg=systemd-resolved
  when:
  - ansible_distribution == 'Debian'
  - 11 < ansible_distribution_major_version|int

- name: Enable/Start systemd-networkd.
  become: yes
  systemd:
    service: systemd-networkd
    enabled: yes
    state: started

- name: Enable/Start systemd-resolved.
  become: yes
  systemd:
    service: systemd-resolved
    enabled: yes
    state: started

- name: Link /etc/resolv.conf.
  become: yes
  file:
    path: /etc/resolv.conf
    src: /run/systemd/resolve/resolv.conf
    state: link
    force: yes
  when:
  - ansible_distribution == 'Debian'
  - 12 > ansible_distribution_major_version|int
6.3. Trust Institute Certificate Authority
All servers should recognize the institute's Certificate Authority as trustworthy, so its certificate is added to the set of trusted CAs on each host. More information about how the small institute manages its X.509 certificates is available in Keys.
roles_t/all/tasks/main.yml
- name: Trust the institute CA.
  become: yes
  copy:
    src: ../Secret/CA/pki/ca.crt
    dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt
    mode: u=r,g=r,o=r
    owner: root
    group: root
  notify: Update CAs.
roles_t/all/handlers/main.yml
- name: Update CAs.
  become: yes
  command: update-ca-certificates
7. The Front Role
The front
role installs and configures the services expected on the
institute's publicly accessible "front door": email, web, VPN. The
virtual machine is prepared with an Ubuntu Server install and remote
access to a privileged, administrator's account. (For details, see
The Front Machine.)
Front initially presents the same self-signed, "snake oil" server
certificate for its HTTP, SMTP and IMAP services, created by the
institute's certificate authority but "snake oil" all the same
(assuming the small institute is not a well recognized CA). The HTTP,
SMTP and IMAP servers are configured to use the certificate (and
private key) in /etc/server.crt
(and /etc/server.key
), so
replacing the "snake oil" is as easy as replacing these two files,
perhaps with symbolic links to, for example,
/etc/letsencrypt/live/small.example.org/fullchain.pem
.
Note that the OpenVPN server does not use /etc/server.crt
. It
uses the institute's CA and server certificates, and expects client
certificates signed by the institute CA.
7.1. Include Particulars
The first task, as in The All Role, is to include the institute
particulars. The front
role refers to private variables and the
membership roll, so these are included as well.
roles_t/front/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml
  tags: accounts

- name: Include private variables.
  include_vars: ../private/vars.yml
  tags: accounts

- name: Include members.
  include_vars: "{{ lookup('first_found', membership_rolls) }}"
  tags: accounts
7.2. Configure Hostname
This task ensures that Front's /etc/hostname
and /etc/mailname
are
correct. The correct /etc/mailname
is essential to proper email
delivery.
roles_t/front/tasks/main.yml
- name: Configure hostname.
  become: yes
  copy:
    content: "{{ domain_name }}\n"
    dest: "{{ item }}"
  loop:
  - /etc/hostname
  - /etc/mailname
  notify: Update hostname.
roles_t/front/handlers/main.yml
---
- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
7.3. Add Administrator to System Groups
The administrator often needs to read (directories of) log files owned
by groups root
and adm
. Adding the administrator's account to
these groups speeds up debugging.
roles_t/front/tasks/main.yml
- name: Add {{ ansible_user }} to system groups.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: root,adm
7.4. Configure SSH
The SSH service on Front needs to be known to Monkey. The following
tasks ensure this by replacing the automatically generated keys with
those stored in Secret/ssh_front/etc/ssh/
and restarting the server.
roles_t/front/tasks/main.yml
- name: Install SSH host keys.
  become: yes
  copy:
    src: ../Secret/ssh_front/etc/ssh/{{ item.name }}
    dest: /etc/ssh/{{ item.name }}
    mode: "{{ item.mode }}"
  loop:
  - { name: ssh_host_ecdsa_key, mode: "u=rw,g=,o=" }
  - { name: ssh_host_ecdsa_key.pub, mode: "u=rw,g=r,o=r" }
  - { name: ssh_host_ed25519_key, mode: "u=rw,g=,o=" }
  - { name: ssh_host_ed25519_key.pub, mode: "u=rw,g=r,o=r" }
  - { name: ssh_host_rsa_key, mode: "u=rw,g=,o=" }
  - { name: ssh_host_rsa_key.pub, mode: "u=rw,g=r,o=r" }
  notify: Reload SSH server.
roles_t/front/handlers/main.yml
- name: Reload SSH server.
  become: yes
  systemd:
    service: ssh
    state: reloaded
7.5. Configure Monkey
The small institute runs cron jobs and web scripts that generate
reports and perform checks. The un-privileged jobs are run by a
system account named monkey
. One of Monkey's more important jobs on
Core is to run rsync
to update the public web site on Front. Monkey
on Core will login as monkey
on Front to synchronize the files (as
described in Configure Apache2). To do that without needing a
password, the monkey
account on Front should authorize Monkey's SSH
key on Core.
roles_t/front/tasks/main.yml
- name: Create monkey.
  become: yes
  user:
    name: monkey
    system: yes

- name: Authorize monkey@core.
  become: yes
  vars:
    pubkeyfile: ../Secret/ssh_monkey/id_rsa.pub
  authorized_key:
    user: monkey
    key: "{{ lookup('file', pubkeyfile) }}"
    manage_dir: yes

- name: Add {{ ansible_user }} to monkey group.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: monkey
7.6. Install Rsync
Monkey uses Rsync to keep the institute's public web site up-to-date.
roles_t/front/tasks/main.yml
- name: Install rsync.
  become: yes
  apt: pkg=rsync
7.7. Install Unattended Upgrades
The institute prefers to install security updates as soon as possible.
roles_t/front/tasks/main.yml
- name: Install basic software.
  become: yes
  apt: pkg=unattended-upgrades
7.8. Configure User Accounts
User accounts are created immediately so that Postfix and Dovecot can
start delivering email immediately, without returning "no such
recipient" replies. The Account Management chapter describes the
members
and usernames
variables used below.
roles_t/front/tasks/main.yml
- name: Create user accounts.
  become: yes
  user:
    name: "{{ item }}"
    password: "{{ members[item].password_front }}"
    update_password: always
    home: /home/{{ item }}
  loop: "{{ usernames }}"
  when: members[item].status == 'current'
  tags: accounts

- name: Disable former users.
  become: yes
  user:
    name: "{{ item }}"
    password: "!"
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts

- name: Revoke former user authorized_keys.
  become: yes
  file:
    path: /home/{{ item }}/.ssh/authorized_keys
    state: absent
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts
7.9. Install Server Certificate
The servers on Front use the same certificate (and key) to
authenticate themselves to institute clients. They share the
/etc/server.crt
and /etc/server.key
files, the latter only
readable by root
.
roles_t/front/tasks/main.yml
- name: Install server certificate/key.
  become: yes
  copy:
    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
    dest: /etc/server.{{ item.typ }}
    mode: "{{ item.mode }}"
    force: no
  loop:
  - { path: "issued/{{ domain_name }}", typ: crt, mode: "u=r,g=r,o=r" }
  - { path: "private/{{ domain_name }}", typ: key, mode: "u=r,g=,o=" }
  notify:
  - Restart Postfix.
  - Restart Dovecot.
7.10. Configure Postfix on Front
Front uses Postfix to provide the institute's public SMTP service, and uses the institute's domain name for its host name. The default Debian configuration (for an "Internet Site") is nearly sufficient. Manual installation may prompt for configuration type and mail name. The appropriate answers are listed here but will be checked (corrected) by Ansible tasks below.
- General type of mail configuration: Internet Site
- System mail name: small.example.org
As discussed in The Email Service above, Front's Postfix configuration includes site-wide support for larger message sizes, shorter queue times, the relaying configuration, and the common path to incoming emails. These and a few Front-specific Postfix configurations settings make up the complete configuration (below).
Front relays messages from the institute's public VPN via which Core relays messages from the campus.
postfix-front-networks
- p: mynetworks
  v: >-
     {{ public_vpn_net_cidr }}
     127.0.0.0/8
     [::ffff:127.0.0.0]/104
     [::1]/128
Front uses one recipient restriction to make things difficult for
spammers, with permit_mynetworks
at the start to not make things
difficult for internal hosts, who do not have (public) domain names.
postfix-front-restrictions
- p: smtpd_recipient_restrictions
  v: >-
     permit_mynetworks
     reject_unauth_pipelining
     reject_unauth_destination
     reject_unknown_sender_domain
Front uses Postfix header checks to strip Received
headers from
outgoing messages. These headers contain campus host and network
names and addresses in the clear (un-encrypted). Stripping them
improves network privacy and security. Front also strips User-Agent
headers just to make it harder to target the program(s) members use to
open their email. These headers should be stripped only from outgoing
messages; incoming messages are delivered locally, without
smtp_header_checks
.
postfix-header-checks
- p: smtp_header_checks
  v: regexp:/etc/postfix/header_checks.cf
postfix-header-checks-content
/^Received:/    IGNORE
/^User-Agent:/  IGNORE
The complete Postfix configuration for Front follows. In addition to
the options already discussed, it must override the loopback-only
Debian default for inet_interfaces
.
postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt }
- { p: smtpd_tls_key_file, v: /etc/server.key }
<<postfix-front-networks>>
<<postfix-front-restrictions>>
<<postfix-relaying>>
<<postfix-message-size>>
<<postfix-queue-times>>
<<postfix-maildir>>
<<postfix-header-checks>>
The following Ansible tasks install Postfix, modify
/etc/postfix/main.cf
according to the settings given above, and
start and enable the service.
roles_t/front/tasks/main.yml
- name: Install Postfix.
  become: yes
  apt: pkg=postfix

- name: Configure Postfix.
  become: yes
  lineinfile:
    path: /etc/postfix/main.cf
    regexp: "^ *{{ item.p }} *="
    line: "{{ item.p }} = {{ item.v }}"
  loop:
  <<postfix-front>>
  notify: Restart Postfix.

- name: Install Postfix header_checks.
  become: yes
  copy:
    content: |
      <<postfix-header-checks-content>>
    dest: /etc/postfix/header_checks.cf
  notify: Postmap header checks.

- name: Enable/Start Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
    state: started
roles_t/front/handlers/main.yml
- name: Restart Postfix.
  become: yes
  systemd:
    service: postfix
    state: restarted

- name: Postmap header checks.
  become: yes
  command:
    chdir: /etc/postfix/
    cmd: postmap header_checks.cf
  notify: Restart Postfix.
7.11. Configure Public Email Aliases
The institute's Front needs to deliver email addressed to a number of common aliases as well as those advertised on the web site. System daemons like cron(8) may also send email to system accounts like monkey. The following aliases make these customary mailboxes available. The aliases are installed in /etc/aliases in a block with a special marker so that additional blocks can be installed by other Ansible roles. Note that the postmaster alias forwards to root in the default Debian configuration, and the crucial root alias, which forwards to the administrator's account, is included in the block below (it could instead be provided by a separate block created by a more specialized role).
roles_t/front/tasks/main.yml
- name: Install institute email aliases.
become: yes
blockinfile:
block: |
abuse: root
webmaster: root
admin: root
monkey: monkey@{{ front_private_addr }}
root: {{ ansible_user }}
path: /etc/aliases
marker: "# {mark} INSTITUTE MANAGED BLOCK"
notify: New aliases.
roles_t/front/handlers/main.yml
- name: New aliases.
  become: yes
  command: newaliases
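Once the block is in place and newaliases has run, the alias database can be queried directly. A quick check, assuming the default /etc/aliases location:

postalias -q webmaster /etc/aliases     # prints: root
postalias -q monkey /etc/aliases        # prints the monkey@... forward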
7.12. Configure Dovecot IMAPd
Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to pick up messages. Front's Dovecot configuration is largely the Debian default with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core accesses Front via VPN, but helps to ensure privacy even when members must, in extremis, access recent email directly from their accounts on Front. For more information about Front's role in the institute's email services, see The Email Service.
The institute follows the recommendation in the package README.Debian (in /usr/share/dovecot-core/). Note that the default "snake oil" certificate can be replaced with one signed by a recognized authority (e.g. Let's Encrypt) so that email apps will not ask about trusting the self-signed certificate.
The following Ansible tasks install Dovecot's IMAP daemon and its /etc/dovecot/local.conf configuration file, then start the service and enable it to start at every reboot.
roles_t/front/tasks/main.yml
- name: Install Dovecot IMAPd.
  become: yes
  apt: pkg=dovecot-imapd

- name: Configure Dovecot IMAPd.
  become: yes
  copy:
    content: |
      <<dovecot-tls>>
      ssl_cert = </etc/server.crt
      ssl_key = </etc/server.key
      <<dovecot-ports>>
      <<dovecot-maildir>>
    dest: /etc/dovecot/local.conf
  notify: Restart Dovecot.

- name: Enable/Start Dovecot.
  become: yes
  systemd:
    service: dovecot
    enabled: yes
    state: started
roles_t/front/handlers/main.yml
- name: Restart Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: restarted
7.13. Configure Apache2
This is the small institute's public web site. It is simple, static, and thus (hopefully) difficult to subvert. There are no server-side scripts to run. The standard Debian install runs the server under the www-data account, which does not need any permissions. It will serve only world-readable files.
The server's document root, /home/www/, is separate from the Debian default /var/www/html/ and (presumably) on the largest disk partition. The directory tree, from the document root to the leaf HTML files, should be owned by monkey, and only writable by its owner. It should not be writable by the Apache2 server (running as www-data).
The institute uses several SSL directives to trim protocol and cipher suite compatibility down, eliminating old and insecure methods and providing for forward secrecy. Along with an up-to-date Let's Encrypt certificate, these settings win the institute's web site an A rating from Qualys SSL Labs (https://www.ssllabs.com/).
The apache-ciphers block below is included last in the Apache2 configuration, so that its SSLCipherSuite directive can override (narrow) any list of ciphers set earlier (e.g. by Let's Encrypt!). The protocols and cipher suites specified here were taken from https://www.ssllabs.com/projects/best-practices in 2022.
apache-ciphers
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder on
SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
                    'ECDHE-ECDSA-AES256-GCM-SHA384',
                    'ECDHE-ECDSA-AES128-SHA',
                    'ECDHE-ECDSA-AES256-SHA',
                    'ECDHE-ECDSA-AES128-SHA256',
                    'ECDHE-ECDSA-AES256-SHA384',
                    'ECDHE-RSA-AES128-GCM-SHA256',
                    'ECDHE-RSA-AES256-GCM-SHA384',
                    'ECDHE-RSA-AES128-SHA',
                    'ECDHE-RSA-AES256-SHA',
                    'ECDHE-RSA-AES128-SHA256',
                    'ECDHE-RSA-AES256-SHA384',
                    'DHE-RSA-AES128-GCM-SHA256',
                    'DHE-RSA-AES256-GCM-SHA384',
                    'DHE-RSA-AES128-SHA',
                    'DHE-RSA-AES256-SHA',
                    'DHE-RSA-AES128-SHA256',
                    'DHE-RSA-AES256-SHA256',
                    '!aNULL', '!eNULL', '!LOW', '!3DES', '!MD5',
                    '!EXP', '!PSK', '!SRP', '!DSS', '!RC4' ]
                  |join(":") }}
The institute supports public member (static) web pages. A member can put an index.html file in their ~/Public/HTML/ directory on Front and it will be served as https://small.example.org/~member/ (if the member's account name is member and the file is world readable).
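For example, a member with the (hypothetical) account name member might publish a page from a shell on Front like so; the member's home directory must also be world-searchable, as it is by default on Debian:

mkdir -p ~/Public/HTML
cat > ~/Public/HTML/index.html <<EOF
<html><body><h1>Hello from ~member</h1></body></html>
EOF
chmod -R o+rX ~/Public     # world readable (and directories searchable)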
On Front, a member's web pages are available only when they appear in /home/www-users/ (via a symbolic link), giving the administration more control over what appears on the public web site. The tasks below create or remove the symbolic links.
The following are the necessary Apache2 directives: a UserDir directive naming /home/www-users/ and a matching Directory block that includes the standard Require and AllowOverride directives used on all of the institute's web sites.
apache-userdir-front
UserDir /home/www-users
<Directory /home/www-users/>
        Require all granted
        AllowOverride None
</Directory>
The institute requires the use of HTTPS on Front, so its default HTTP virtual host permanently redirects requests to their corresponding HTTPS URLs.
apache-redirect-front
<VirtualHost *:80>
        Redirect permanent / https://{{ domain_name }}/
</VirtualHost>
The complete Apache2 configuration for Front is given below. It is installed in /etc/apache2/sites-available/{{ domain_name }}.conf (as expected by Let's Encrypt's Certbot). It includes the fragments described above and adds a VirtualHost block for the HTTPS service (also as expected by Certbot). The VirtualHost optionally includes an additional configuration file to allow other Ansible roles to specialize this configuration without disturbing the institute file.
The DocumentRoot directive is accompanied by a Directory block that authorizes access to the tree, and ensures .htaccess files within the tree are disabled for speed and security. This and most of Front's Apache2 directives (below) are intended for the top level, not the inside of a VirtualHost block. They should apply globally.
apache-front
ServerName {{ domain_name }}
ServerAdmin webmaster@{{ domain_name }}

DocumentRoot /home/www
<Directory /home/www/>
        Require all granted
        AllowOverride None
</Directory>

<<apache-userdir-front>>

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined

<<apache-redirect-front>>

<VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile /etc/server.crt
        SSLCertificateKeyFile /etc/server.key
        IncludeOptional \
            /etc/apache2/sites-available/{{ domain_name }}-vhost.conf
</VirtualHost>

<<apache-ciphers>>
Ansible installs the configuration above in e.g. /etc/apache2/sites-available/small.example.org.conf and runs a2ensite -q small.example.org to enable it.
roles_t/front/tasks/main.yml
- name: Install Apache2. become: yes apt: pkg=apache2 - name: Enable Apache2 modules. become: yes apache2_module: name: "{{ item }}" loop: [ ssl, userdir ] notify: Restart Apache2. - name: Create DocumentRoot. become: yes file: path: /home/www state: directory owner: monkey group: monkey - name: Configure web site. become: yes copy: content: | <<apache-front>> dest: /etc/apache2/sites-available/{{ domain_name }}.conf notify: Restart Apache2. - name: Enable web site. become: yes command: cmd: a2ensite -q {{ domain_name }} creates: /etc/apache2/sites-enabled/{{ domain_name }}.conf notify: Restart Apache2. - name: Enable/Start Apache2. become: yes systemd: service: apache2 enabled: yes state: started
roles_t/front/handlers/main.yml
- name: Restart Apache2.
  become: yes
  systemd:
    service: apache2
    state: restarted
Furthermore, the default web site and its HTTPS version are disabled so that they do not interfere with their replacement.
roles_t/front/tasks/main.yml
- name: Disable default vhosts.
  become: yes
  file:
    path: /etc/apache2/sites-enabled/{{ item }}
    state: absent
  loop: [ 000-default.conf, default-ssl.conf ]
  notify: Restart Apache2.
The redundant default other-vhosts-access-log configuration option is also disabled. There are no other virtual hosts, and it stores the same records as access.log.
roles_t/front/tasks/main.yml
- name: Disable other-vhosts-access-log option.
  become: yes
  file:
    path: /etc/apache2/conf-enabled/other-vhosts-access-log.conf
    state: absent
  notify: Restart Apache2.
Finally, the UserDir directory, /home/www-users/, is created and populated with symbolic links to the users' ~/Public/HTML/ directories.
roles_t/front/tasks/main.yml
- name: Create UserDir.
  become: yes
  file:
    path: /home/www-users/
    state: directory

- name: Create UserDir links.
  become: yes
  file:
    path: /home/www-users/{{ item }}
    src: /home/{{ item }}/Public/HTML
    state: link
    force: yes
  loop: "{{ usernames }}"
  when: members[item].status == 'current'
  tags: accounts

- name: Disable former UserDir links.
  become: yes
  file:
    path: /home/www-users/{{ item }}
    state: absent
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts
7.14. Configure OpenVPN
Front uses OpenVPN to provide the institute's public VPN service. The configuration is straightforward with one complication. OpenVPN needs to know how to route to the campus VPN, which is only accessible when Core is connected. OpenVPN supports these dynamic routes internally with client-specific configuration files. The small institute uses one of these, /etc/openvpn/ccd/core, so that OpenVPN will know to route packets for the campus networks to Core.
openvpn-ccd-core
iroute {{ private_net_and_mask }}
iroute {{ campus_vpn_net_and_mask }}
The VPN clients are not configured to route all of their traffic through the VPN, so Front pushes routes to the other institute networks. The clients thus know to route traffic for the private Ethernet or campus VPN to Front on the public VPN. (If the clients were configured to route all traffic through the VPN, the one default route is all that would be needed.) Front itself is in the same situation, outside the institute networks with a default route through some ISP, and thus needs the same routes as the clients.
openvpn-front-routes
route {{ private_net_and_mask }}
route {{ campus_vpn_net_and_mask }}
push "route {{ private_net_and_mask }}"
push "route {{ campus_vpn_net_and_mask }}"
The complete OpenVPN configuration for Front includes a server option, the client-config-dir option, the routes mentioned above, and the common options discussed in The VPN Service.
openvpn-front
server {{ public_vpn_net_and_mask }}
client-config-dir /etc/openvpn/ccd
<<openvpn-front-routes>>
<<openvpn-dev-mode>>
<<openvpn-keepalive>>
<<openvpn-dns>>
<<openvpn-drop-priv>>
<<openvpn-crypt>>
<<openvpn-max>>
<<openvpn-debug>>
ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
cert server.crt
key server.key
dh dh2048.pem
tls-auth ta.key 0
Finally, here are the tasks (and handler) required to install and configure the OpenVPN server on Front.
roles_t/front/tasks/main.yml
- name: Install OpenVPN. become: yes apt: pkg=openvpn - name: Enable IP forwarding. become: yes sysctl: name: net.ipv4.ip_forward value: "1" state: present - name: Create OpenVPN client configuration directory. become: yes file: path: /etc/openvpn/ccd state: directory notify: Restart OpenVPN. - name: Install OpenVPN client configuration for Core. become: yes copy: content: | <<openvpn-ccd-core>> dest: /etc/openvpn/ccd/core notify: Restart OpenVPN. - name: Disable former VPN clients. become: yes copy: content: "disable\n" dest: /etc/openvpn/ccd/{{ item }} loop: "{{ revoked }}" tags: accounts - name: Install OpenVPN server certificate/key. become: yes copy: src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} dest: /etc/openvpn/server.{{ item.typ }} mode: "{{ item.mode }}" loop: - { path: "issued/{{ domain_name }}", typ: crt, mode: "u=r,g=r,o=r" } - { path: "private/{{ domain_name }}", typ: key, mode: "u=r,g=,o=" } notify: Restart OpenVPN. - name: Install OpenVPN secrets. become: yes copy: src: ../Secret/{{ item.src }} dest: /etc/openvpn/{{ item.dest }} mode: u=r,g=,o= loop: - { src: front-dh2048.pem, dest: dh2048.pem } - { src: front-ta.key, dest: ta.key } notify: Restart OpenVPN. - name: Configure OpenVPN. become: yes copy: content: | <<openvpn-front>> dest: /etc/openvpn/server.conf mode: u=r,g=r,o= notify: Restart OpenVPN. - name: Enable/Start OpenVPN. become: yes systemd: service: openvpn@server enabled: yes state: started
roles_t/front/handlers/main.yml
- name: Restart OpenVPN.
  become: yes
  systemd:
    service: openvpn@server
    state: restarted
7.15. Configure Kamailio
Front uses Kamailio to provide a SIP service on the public VPN so that members abroad can chat privately. This is a connection-less UDP service that can be used with or without encryption. The VPN's encryption can be relied upon or an extra layer can be used when necessary. (Apps cannot tell if a network is secure and often assume the luser is an idiot, so they insist on doing some encryption.)
Kamailio listens on all network interfaces by default, but the institute expects its SIP traffic to be aggregated and encrypted via the public VPN. To enforce this expectation, Kamailio is instructed to listen only on Front's public VPN. The private name sip.small.private resolves to this address for the convenience of members configuring SIP clients. The server configuration specifies the actual IP, known here as front_private_addr.
kamailio
listen=udp:{{ front_private_addr }}:5060
The Ansible tasks that install and configure Kamailio follow, but before Kamailio is configured (and thus started), the service is tweaked by a Systemd configuration drop-in, which must be installed, and Systemd notified, before the service starts.
The first step is to install Kamailio.
roles_t/front/tasks/main.yml
- name: Install Kamailio.
become: yes
apt: pkg=kamailio
The configuration drop-in concerns the network device on which Kamailio will be listening: the tun device created by OpenVPN. The added configuration settings inform Systemd that Kamailio should not be started before the tun device has appeared.
roles_t/front/tasks/main.yml
- name: Create Kamailio/Systemd configuration drop.
  become: yes
  file:
    path: /etc/systemd/system/kamailio.service.d
    state: directory

- name: Create Kamailio dependence on OpenVPN server.
  become: yes
  copy:
    content: |
      [Unit]
      Requires=sys-devices-virtual-net-ovpn.device
      After=sys-devices-virtual-net-ovpn.device
    dest: /etc/systemd/system/kamailio.service.d/depend.conf
  notify: Reload Systemd.
roles_t/front/handlers/main.yml
- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes
Finally, Kamailio can be configured and started.
roles_t/front/tasks/main.yml
- name: Configure Kamailio.
  become: yes
  copy:
    content: |
      <<kamailio>>
    dest: /etc/kamailio/kamailio-local.cfg
  notify: Restart Kamailio.

- name: Enable/Start Kamailio.
  become: yes
  systemd:
    service: kamailio
    enabled: yes
    state: started
roles_t/front/handlers/main.yml
- name: Restart Kamailio.
  become: yes
  systemd:
    service: kamailio
    state: restarted
8. The Core Role
The core role configures many essential campus network services as well as the institute's private cloud, so the core machine has horsepower (CPUs and RAM) and large disks and is prepared with a Debian install and remote access to a privileged, administrator's account. (For details, see The Core Machine.)
8.1. Include Particulars
The first task, as in The Front Role, is to include the institute particulars and membership roll.
roles_t/core/tasks/main.yml
---
- name: Include public variables.
include_vars: ../public/vars.yml
tags: accounts
- name: Include private variables.
include_vars: ../private/vars.yml
tags: accounts
- name: Include members.
include_vars: "{{ lookup('first_found', membership_rolls) }}"
tags: accounts
8.2. Configure Hostname
This task ensures that Core's /etc/hostname and /etc/mailname are correct. Core accepts email addressed to the institute's public or private domain names, e.g. to dick@small.example.org as well as dick@small.private. The correct /etc/mailname is essential to proper email delivery.
roles_t/core/tasks/main.yml
- name: Configure hostname.
  become: yes
  copy:
    content: "{{ item.name }}\n"
    dest: "{{ item.file }}"
  loop:
  - { name: "core.{{ domain_priv }}", file: /etc/mailname }
  - { name: "{{ inventory_hostname }}", file: /etc/hostname }
  notify: Update hostname.
roles_t/core/handlers/main.yml
---
- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
8.3. Configure Systemd Resolved
Core runs the campus name server, so Resolved is configured to use it (or dns.google), to include the institute's domain in its search list, and to disable its cache and stub listener.
roles_t/core/tasks/main.yml
- name: Configure resolved.
  become: yes
  lineinfile:
    path: /etc/systemd/resolved.conf
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  loop:
  - { regexp: '^ *DNS *=', line: "DNS=127.0.0.1" }
  - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" }
  - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" }
  - { regexp: '^ *Cache *=', line: "Cache=no" }
  - { regexp: '^ *DNSStubListener *=', line: "DNSStubListener=no" }
  notify:
  - Reload Systemd.
  - Restart Systemd resolved.
roles_t/core/handlers/main.yml
- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes

- name: Restart Systemd resolved.
  become: yes
  systemd:
    service: systemd-resolved
    state: restarted
8.4. Configure Netplan
Core's network interface is statically configured using Netplan and an /etc/netplan/60-core.yaml file. That file provides Core's address on the private Ethernet, the campus name server and search domain, and the default route through Gate to the campus ISP. A second route, through Core itself to Front, is advertised to other hosts, but is not created here. It is created by OpenVPN when Core connects to Front's VPN.

Core's Netplan needs the name of its main (only) Ethernet interface, an example of which is given here. (A clever way to extract that name from ansible_facts would be appreciated. The ansible_default_ipv4 fact was an empty hash at first boot on a simulated campus Ethernet.)
private/vars.yml
core_ethernet: enp0s3
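One speculative way to avoid hard-coding the name, untested here and assuming facts have been gathered on a host whose only interfaces are lo and the single Ethernet device, would be a Jinja expression along these lines:

core_ethernet: "{{ ansible_facts['interfaces']
                   | reject('equalto', 'lo')
                   | sort | first }}"

Until something like that proves reliable, the explicit variable above is the safer choice.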
roles_t/core/tasks/main.yml
- name: Install netplan.
  become: yes
  apt: pkg=netplan.io

- name: Configure netplan.
  become: yes
  copy:
    content: |
      network:
        renderer: networkd
        ethernets:
          {{ core_ethernet }}:
            dhcp4: false
            addresses: [ {{ core_addr_cidr }} ]
            nameservers:
              search: [ {{ domain_priv }} ]
              addresses: [ {{ core_addr }} ]
            gateway4: {{ gate_addr }}
    dest: /etc/netplan/60-core.yaml
    mode: u=rw,g=r,o=
  notify: Apply netplan.
roles_t/core/handlers/main.yml
- name: Apply netplan.
  become: yes
  command: netplan apply
8.5. Configure DHCP For the Private Ethernet
Core speaks DHCP (Dynamic Host Configuration Protocol) using the Internet Software Consortium's DHCP server. The server assigns unique network addresses to hosts plugged into the private Ethernet as well as advertising local net services, especially the local Domain Name Service.
The example configuration file, private/core-dhcpd.conf, uses RFC3442's extension to encode a second (non-default) static route. The default route is through the campus ISP at Gate. A second route directs campus traffic to the Front VPN through Core. This is just an example file. The administrator adds and removes actual machines from the actual private/core-dhcpd.conf file.
private/core-dhcpd.conf
option domain-name "small.private"; option domain-name-servers 192.168.56.1; default-lease-time 3600; max-lease-time 7200; ddns-update-style none; authoritative; log-facility daemon; option rfc3442-routes code 121 = array of integer 8; subnet 192.168.56.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 192.168.56.255; option routers 192.168.56.2; option ntp-servers 192.168.56.1; option rfc3442-routes 24, 10,177,86, 192,168,56,1, 0, 192,168,56,2; } host core { hardware ethernet 08:00:27:45:3b:a2; fixed-address 192.168.56.1; } host gate { hardware ethernet 08:00:27:e0:79:ab; fixed-address 192.168.56.2; } host server { hardware ethernet 08:00:27:f3:41:66; fixed-address 192.168.56.3; }
The following tasks install the ISC's DHCP server and configure it with the real private/core-dhcpd.conf (not the example above).
roles_t/core/tasks/main.yml
- name: Install DHCP server.
  become: yes
  apt: pkg=isc-dhcp-server

- name: Configure DHCP interface.
  become: yes
  lineinfile:
    path: /etc/default/isc-dhcp-server
    line: INTERFACESv4="{{ core_ethernet }}"
    regexp: ^INTERFACESv4=
  notify: Restart DHCP server.

- name: Configure DHCP subnet.
  become: yes
  copy:
    src: ../private/core-dhcpd.conf
    dest: /etc/dhcp/dhcpd.conf
  notify: Restart DHCP server.

- name: Enable/Start DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    enabled: yes
    state: started
roles_t/core/handlers/main.yml
- name: Restart DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    state: restarted
8.6. Configure BIND9
Core uses BIND9 to provide name service for the institute as described in The Name Service. The configuration supports reverse name lookups, resolving many private network addresses to private domain names.
The following tasks install and configure BIND9 on Core.
roles_t/core/tasks/main.yml
- name: Install BIND9.
become: yes
apt: pkg=bind9
- name: Configure BIND9 with named.conf.options.
become: yes
copy:
content: |
<<bind-options>>
dest: /etc/bind/named.conf.options
notify: Reload BIND9.
- name: Configure BIND9 with named.conf.local.
become: yes
copy:
content: |
<<bind-local>>
dest: /etc/bind/named.conf.local
notify: Reload BIND9.
- name: Install BIND9 zonefiles.
become: yes
copy:
src: ../private/db.{{ item }}
dest: /etc/bind/db.{{ item }}
loop: [ domain, private, public_vpn, campus_vpn ]
notify: Reload BIND9.
- name: Enable/Start BIND9.
become: yes
systemd:
service: bind9
enabled: yes
state: started
roles_t/core/handlers/main.yml
- name: Reload BIND9.
  become: yes
  systemd:
    service: bind9
    state: reloaded
Examples of the necessary zone files, for the "Install BIND9 zonefiles." task above, are given below. If the campus ISP provided one or more IP addresses for stable name servers, those should probably be used as forwarders rather than Google.
bind-options
acl "trusted" { {{ private_net_cidr }}; {{ public_vpn_net_cidr }}; {{ campus_vpn_net_cidr }}; {{ gate_wifi_net_cidr }}; localhost; }; options { directory "/var/cache/bind"; forwarders { 8.8.4.4; 8.8.8.8; }; allow-query { any; }; allow-recursion { trusted; }; allow-query-cache { trusted; }; listen-on { {{ core_addr }}; localhost; }; };
bind-local
include "/etc/bind/zones.rfc1918"; zone "{{ domain_priv }}." { type master; file "/etc/bind/db.domain"; }; zone "{{ private_net_cidr | ansible.utils.ipaddr('revdns') | regex_replace('^0\.','') }}" { type master; file "/etc/bind/db.private"; }; zone "{{ public_vpn_net_cidr | ansible.utils.ipaddr('revdns') | regex_replace('^0\.','') }}" { type master; file "/etc/bind/db.public_vpn"; }; zone "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('revdns') | regex_replace('^0\.','') }}" { type master; file "/etc/bind/db.campus_vpn"; };
private/db.domain
;
; BIND data file for a small institute's PRIVATE domain names.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
mail    IN      CNAME   core.small.private.
smtp    IN      CNAME   core.small.private.
ns      IN      CNAME   core.small.private.
www     IN      CNAME   core.small.private.
test    IN      CNAME   core.small.private.
live    IN      CNAME   core.small.private.
ntp     IN      CNAME   core.small.private.
sip     IN      A       10.177.86.1
;
core    IN      A       192.168.56.1
gate    IN      A       192.168.56.2
private/db.private
;
; BIND reverse data file for a small institute's private Ethernet.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     core.small.private.
2       IN      PTR     gate.small.private.
private/db.public_vpn
;
; BIND reverse data file for a small institute's public VPN.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     front-p.small.private.
2       IN      PTR     core-p.small.private.
private/db.campus_vpn
;
; BIND reverse data file for a small institute's campus VPN.
;
$TTL    604800
@       IN      SOA     small.private. root.small.private. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      core.small.private.
$TTL    7200
1       IN      PTR     gate-c.small.private.
8.7. Add Administrator to System Groups
The administrator often needs to read (directories of) log files owned by groups root and adm. Adding the administrator's account to these groups speeds up debugging.
roles_t/core/tasks/main.yml
- name: Add {{ ansible_user }} to system groups.
become: yes
user:
name: "{{ ansible_user }}"
append: yes
groups: root,adm
8.8. Configure Monkey
The small institute runs cron jobs and web scripts that generate reports and perform checks. The un-privileged jobs are run by a system account named monkey. One of Monkey's more important jobs on Core is to run rsync to update the public web site on Front (as described in Configure Apache2 above).
roles_t/core/tasks/main.yml
- name: Create monkey. become: yes user: name: monkey system: yes append: yes groups: staff - name: Add {{ ansible_user }} to staff groups. become: yes user: name: "{{ ansible_user }}" append: yes groups: monkey,staff - name: Create /home/monkey/.ssh/. become: yes file: path: /home/monkey/.ssh state: directory mode: u=rwx,g=,o= owner: monkey group: monkey - name: Configure monkey@core. become: yes copy: src: ../Secret/ssh_monkey/{{ item.name }} dest: /home/monkey/.ssh/{{ item.name }} mode: "{{ item.mode }}" owner: monkey group: monkey loop: - { name: config, mode: "u=rw,g=r,o=" } - { name: id_rsa.pub, mode: "u=rw,g=r,o=r" } - { name: id_rsa, mode: "u=rw,g=,o=" } - name: Configure Monkey SSH known hosts. become: yes vars: pubkeypath: ../Secret/ssh_front/etc/ssh pubkeyfile: "{{ pubkeypath }}/ssh_host_ecdsa_key.pub" pubkey: "{{ lookup('file', pubkeyfile) }}" lineinfile: regexp: "^{{ domain_name }},{{ front_addr }} ecdsa-sha2-nistp256 " line: "{{ domain_name }},{{ front_addr }} {{ pubkey }}" path: /home/monkey/.ssh/known_hosts create: yes owner: monkey group: monkey mode: "u=rw,g=,o="
8.9. Install Unattended Upgrades
The institute prefers to install security updates as soon as possible.
roles_t/core/tasks/main.yml
- name: Install basic software.
become: yes
apt: pkg=unattended-upgrades
8.10. Install Expect
The expect program is used by The Institute Commands to interact with Nextcloud on the command line.
roles_t/core/tasks/main.yml
- name: Install expect.
become: yes
apt: pkg=expect
8.11. Configure User Accounts
User accounts are created immediately so that backups can begin restoring as soon as possible. The Account Management chapter describes the members and usernames variables.
roles_t/core/tasks/main.yml
- name: Create user accounts.
  become: yes
  user:
    name: "{{ item }}"
    password: "{{ members[item].password_core }}"
    update_password: always
    home: /home/{{ item }}
  loop: "{{ usernames }}"
  when: members[item].status == 'current'
  tags: accounts

- name: Disable former users.
  become: yes
  user:
    name: "{{ item }}"
    password: "!"
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts

- name: Revoke former user authorized_keys.
  become: yes
  file:
    path: /home/{{ item }}/.ssh/authorized_keys
    state: absent
  loop: "{{ usernames }}"
  when: members[item].status != 'current'
  tags: accounts
8.12. Install Server Certificate
The servers on Core use the same certificate (and key) to authenticate themselves to institute clients. They share the /etc/server.crt and /etc/server.key files, the latter only readable by root.
roles_t/core/tasks/main.yml
- name: Install server certificate/key.
  become: yes
  copy:
    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
    dest: /etc/server.{{ item.typ }}
    mode: "{{ item.mode }}"
  loop:
  - { path: "issued/core.{{ domain_priv }}", typ: crt, mode: "u=r,g=r,o=r" }
  - { path: "private/core.{{ domain_priv }}", typ: key, mode: "u=r,g=,o=" }
  notify:
  - Restart Postfix.
  - Restart Dovecot.
  - Restart OpenVPN.
8.13. Install NTP
Core uses NTP to provide a time synchronization service to the campus. The NTP daemon's default configuration is fine.
roles_t/core/tasks/main.yml
- name: Install NTP.
become: yes
apt: pkg=ntp
8.14. Configure Postfix on Core
Core uses Postfix to provide SMTP service to the campus. The default Debian configuration (for an "Internet Site") is nearly sufficient. Manual installation may prompt for configuration type and mail name. The appropriate answers are listed here but will be checked (corrected) by Ansible tasks below.
- General type of mail configuration: Internet Site
- System mail name: core.small.private
As discussed in The Email Service above, Core delivers email addressed to any internal domain name locally, and uses its smarthost Front to relay the rest. Core is reachable only on institute networks, so there is little benefit in enabling TLS, but it does need to handle larger messages and respect the institute's expectation of shortened queue times.
Core relays messages from any institute network.
postfix-core-networks
- p: mynetworks
  v: >-
     {{ private_net_cidr }}
     {{ public_vpn_net_cidr }}
     {{ campus_vpn_net_cidr }}
     127.0.0.0/8
     [::ffff:127.0.0.0]/104
     [::1]/128
Core uses Front to relay messages to the Internet.
postfix-core-relayhost
- { p: relayhost, v: "[{{ front_private_addr }}]" }
Core uses a Postfix transport file, /etc/postfix/transport, to specify local delivery for email addressed to any internal domain name. Note the leading dot at the beginning of each line in the file.
postfix-transport
.{{ domain_name }}      local:$myhostname
.{{ domain_priv }}      local:$myhostname
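After the Postmap transport handler (below) has built the map, it can be spot-checked on Core. A quick sanity check, assuming the institute's example private domain name:

postmap -q ".small.private" hash:/etc/postfix/transport
# prints: local:$myhostname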
The complete list of Core's Postfix settings for /etc/postfix/main.cf follows.
postfix-core
<<postfix-relaying>>
- { p: smtpd_tls_security_level, v: none }
- { p: smtp_tls_security_level, v: none }
<<postfix-message-size>>
<<postfix-queue-times>>
<<postfix-maildir>>
<<postfix-core-networks>>
<<postfix-core-relayhost>>
- { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" }
The following Ansible tasks install Postfix, modify /etc/postfix/main.cf, create /etc/postfix/transport, and start and enable the service. Whenever /etc/postfix/transport is changed, the postmap transport command must also be run.
roles_t/core/tasks/main.yml
- name: Install Postfix.
  become: yes
  apt: pkg=postfix

- name: Configure Postfix.
  become: yes
  lineinfile:
    path: /etc/postfix/main.cf
    regexp: "^ *{{ item.p }} *="
    line: "{{ item.p }} = {{ item.v }}"
  loop:
  <<postfix-core>>
  - { p: transport_maps, v: "hash:/etc/postfix/transport" }
  notify: Restart Postfix.

- name: Configure Postfix transport.
  become: yes
  copy:
    content: |
      <<postfix-transport>>
    dest: /etc/postfix/transport
  notify: Postmap transport.

- name: Enable/Start Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
    state: started
roles_t/core/handlers/main.yml
- name: Restart Postfix.
  become: yes
  systemd:
    service: postfix
    state: restarted

- name: Postmap transport.
  become: yes
  command:
    chdir: /etc/postfix/
    cmd: postmap transport
  notify: Restart Postfix.
8.15. Configure Private Email Aliases
The institute's Core needs to deliver email addressed to institute aliases including those advertised on the campus web site, in VPN certificates, etc. System daemons like cron(8) may also send email to e.g. monkey. The following aliases are installed in /etc/aliases with a special marker so that additional blocks can be installed by more specialized roles.
roles_t/core/tasks/main.yml
- name: Install institute email aliases.
become: yes
blockinfile:
block: |
webmaster: root
admin: root
www-data: root
monkey: root
root: {{ ansible_user }}
path: /etc/aliases
marker: "# {mark} INSTITUTE MANAGED BLOCK"
notify: New aliases.
roles_t/core/handlers/main.yml
- name: New aliases.
  become: yes
  command: newaliases
8.16. Configure Dovecot IMAPd
Core uses Dovecot's IMAPd to store and serve member emails. As on Front, Core's Dovecot configuration is largely the Debian default with POP and IMAP (without TLS) support disabled. This is a bit "over the top" given that Core is only accessed from private (encrypted) networks, but helps to ensure privacy even when members accidentally attempt connections from outside the private networks. For more information about Core's role in the institute's email services, see The Email Service.
The institute follows the recommendation in the package README.Debian (in /usr/share/dovecot-core/) but replaces the default "snake oil" certificate with another, signed by the institute. (For more information about the institute's X.509 certificates, see Keys.)
The following Ansible tasks install Dovecot's IMAP daemon and its /etc/dovecot/local.conf configuration file, then start the service and enable it to start at every reboot.
roles_t/core/tasks/main.yml
- name: Install Dovecot IMAPd.
  become: yes
  apt: pkg=dovecot-imapd

- name: Configure Dovecot IMAPd.
  become: yes
  copy:
    content: |
      <<dovecot-tls>>
      ssl_cert = </etc/server.crt
      ssl_key = </etc/server.key
      <<dovecot-maildir>>
    dest: /etc/dovecot/local.conf
  notify: Restart Dovecot.

- name: Enable/Start Dovecot.
  become: yes
  systemd:
    service: dovecot
    enabled: yes
    state: started
roles_t/core/handlers/main.yml
- name: Restart Dovecot.
  become: yes
  systemd:
    service: dovecot
    state: restarted
8.17. Configure Fetchmail
Core runs a fetchmail job for each member of the institute. Individual fetchmail jobs can run with the --idle option and thus can download new messages instantly. The jobs run as Systemd services and so are monitored and started at boot.
In the ~/.fetchmailrc template below, the item variable is a username, and members[item] is the membership record associated with the username. The template is only used when the record has a password_fetchmail key providing the member's plain-text password.
fetchmail-config
# Permissions on this file may be no greater than 0600.

set no bouncemail
set no spambounce
set no syslog
#set logfile /home/{{ item }}/.fetchmail.log

poll {{ front_private_addr }} protocol imap timeout 15
    username {{ item }}
    password "{{ members[item].password_fetchmail }}"
    fetchall
    ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }}
The Systemd service description.
fetchmail-service
[Unit]
Description=Fetchmail --idle task for {{ item }}.
AssertPathExists=/home/{{ item }}/.fetchmailrc
After=openvpn@front.service
Wants=sys-devices-virtual-net-ovpn.device

[Service]
User={{ item }}
ExecStart=/usr/bin/fetchmail --idle
Restart=always
RestartSec=1m
NoNewPrivileges=true

[Install]
WantedBy=default.target
The following tasks install fetchmail, a ~/.fetchmailrc and Systemd .service file for each current member, start the services, and enable them to start on boot. To accommodate any member of the institute who may wish to run their own fetchmail job on their notebook, only members with a password_fetchmail key will be provided the Core service.
roles_t/core/tasks/main.yml
- name: Install fetchmail. become: yes apt: pkg=fetchmail - name: Configure user fetchmails. become: yes copy: content: | <<fetchmail-config>> dest: /home/{{ item }}/.fetchmailrc owner: "{{ item }}" group: "{{ item }}" mode: u=rw,g=,o= loop: "{{ usernames }}" when: - members[item].status == 'current' - members[item].password_fetchmail is defined tags: accounts - name: Create user fetchmail services. become: yes copy: content: | <<fetchmail-service>> dest: /etc/systemd/system/fetchmail-{{ item }}.service loop: "{{ usernames }}" when: - members[item].status == 'current' - members[item].password_fetchmail is defined tags: accounts - name: Enable/Start user fetchmail services. become: yes systemd: service: fetchmail-{{ item }}.service enabled: yes state: started loop: "{{ usernames }}" when: - members[item].status == 'current' - members[item].password_fetchmail is defined tags: accounts
Finally, any former member's Fetchmail service on Core should be stopped, disabled from restarting at boot, and perhaps even deleted.
roles_t/core/tasks/main.yml
- name: Stop former user fetchmail services.
  become: yes
  systemd:
    service: fetchmail-{{ item }}
    state: stopped
    enabled: no
  loop: "{{ usernames }}"
  when:
  - members[item].status != 'current'
  - members[item].password_fetchmail is defined
  tags: accounts
If the .service file is deleted, then Ansible cannot use the systemd module to stop it, nor check that it is still stopped. Otherwise the following task might be appropriate.
- name: Delete former user fetchmail services.
  become: yes
  file:
    path: /etc/systemd/system/fetchmail-{{ item }}.service
    state: absent
  loop: "{{ usernames }}"
  when:
  - members[item].status != 'current'
  - members[item].password_fetchmail is defined
  tags: accounts
8.18. Configure Apache2
This is the small institute's campus web server. It hosts several web sites as described in The Web Services.
| URL          | Doc.Root     | Description             |
|--------------+--------------+-------------------------|
| http://live/ | /WWW/live/   | The live, public site.  |
| http://test/ | /WWW/test/   | The next public site.   |
| http://www/  | /WWW/campus/ | Campus home page.       |
| http://core/ | /var/www/    | whatnot, e.g. Nextcloud |
The live (and test) web site content (eventually) is intended to be copied to Front, so the live and test sites are configured as identically to Front's as possible. The directories and files are owned by monkey but are world readable, thus readable by www-data, the account running Apache2.
The campus web site is much more permissive. Its directories are owned by root but writable by the staff group. It runs CGI scripts found in any of its directories, any executable with a .cgi file name. It runs them as www-data, so CGI scripts that need access to private data must Set-UID to the appropriate account.
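For instance, a (hypothetical) staff-installed script named /WWW/campus/whoami.cgi might look like the following; any world-readable, executable *.cgi file under the campus document root is run the same way:

#!/bin/sh
# whoami.cgi -- a minimal CGI example, run by Apache2 as www-data.
echo "Content-Type: text/plain"
echo ""
echo "Served by $(hostname) as user $(id -un)."

It would be installed world readable and executable, e.g. with chmod 755 /WWW/campus/whoami.cgi.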
The UserDir directives for all of Core's web sites are the same, and punt the indirection through a /home/www-users/ directory, simply naming a sub-directory in the member's home directory on Core. The <Directory> block is the same as the one used on Front.
apache-userdir-core
UserDir Public/HTML
<Directory /home/*/Public/HTML/>
        Require all granted
        AllowOverride None
</Directory>
The virtual host for the live web site is given below. It should look like Front's top-level web configuration without the permanent redirect, the encryption ciphers and certificates.
apache-live
<VirtualHost *:80>
        ServerName live
        ServerAlias live.{{ domain_priv }}
        ServerAdmin webmaster@core.{{ domain_priv }}

        DocumentRoot /WWW/live
        <Directory /WWW/live/>
                Require all granted
                AllowOverride None
        </Directory>

        <<apache-userdir-core>>

        ErrorLog ${APACHE_LOG_DIR}/live-error.log
        CustomLog ${APACHE_LOG_DIR}/live-access.log combined

        IncludeOptional /etc/apache2/sites-available/live-vhost.conf
</VirtualHost>
The virtual host for the test web site is given below. It should look familiar.
apache-test
<VirtualHost *:80>
        ServerName test
        ServerAlias test.{{ domain_priv }}
        ServerAdmin webmaster@core.{{ domain_priv }}

        DocumentRoot /WWW/test
        <Directory /WWW/test/>
                Require all granted
                AllowOverride None
        </Directory>

        <<apache-userdir-core>>

        ErrorLog ${APACHE_LOG_DIR}/test-error.log
        CustomLog ${APACHE_LOG_DIR}/test-access.log combined

        IncludeOptional /etc/apache2/sites-available/test-vhost.conf
</VirtualHost>
The virtual host for the campus web site is given below. It too should look familiar, but with a notably loose Directory directive. It assumes /WWW/campus/ is secure, writable only by properly trained staffers, monitored by a revision control system, etc.
apache-campus
<VirtualHost *:80>
        ServerName www
        ServerAlias www.{{ domain_priv }}
        ServerAdmin webmaster@core.{{ domain_priv }}

        DocumentRoot /WWW/campus
        <Directory /WWW/campus/>
                Options Indexes FollowSymLinks MultiViews ExecCGI
                AddHandler cgi-script .cgi
                Require all granted
                AllowOverride None
        </Directory>

        <<apache-userdir-core>>

        ErrorLog ${APACHE_LOG_DIR}/campus-error.log
        CustomLog ${APACHE_LOG_DIR}/campus-access.log combined

        IncludeOptional /etc/apache2/sites-available/www-vhost.conf
</VirtualHost>
The tasks below install Apache2 and edit its default configuration.
roles_t/core/tasks/main.yml
- name: Install Apache2.
  become: yes
  apt: pkg=apache2

- name: Enable Apache2 modules.
  become: yes
  apache2_module:
    name: "{{ item }}"
  loop: [ userdir, cgid ]
  notify: Restart Apache2.
With Apache2 installed there is an /etc/apache2/sites-available/ directory into which the above site configurations can be installed. The a2ensite command enables them.
roles_t/core/tasks/main.yml
- name: Install live web site. become: yes copy: content: | <<apache-live>> dest: /etc/apache2/sites-available/live.conf mode: u=rw,g=r,o=r notify: Restart Apache2. - name: Install test web site. become: yes copy: content: | <<apache-test>> dest: /etc/apache2/sites-available/test.conf mode: u=rw,g=r,o=r notify: Restart Apache2. - name: Install campus web site. become: yes copy: content: | <<apache-campus>> dest: /etc/apache2/sites-available/www.conf mode: u=rw,g=r,o=r notify: Restart Apache2. - name: Enable web sites. become: yes command: cmd: a2ensite -q {{ item }} creates: /etc/apache2/sites-enabled/{{ item }}.conf loop: [ live, test, www ] notify: Restart Apache2. - name: Enable/Start Apache2. become: yes systemd: service: apache2 enabled: yes state: started
roles_t/core/handlers/main.yml
- name: Restart Apache2.
  become: yes
  systemd:
    service: apache2
    state: restarted
8.19. Configure Website Updates
Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a cron job. The example script mirrors /WWW/live/ on Core to /home/www/ on Front.
private/webupdate
#!/bin/bash -e
#
# DO NOT EDIT.  This file was tangled from institute.org.

cd /WWW/live/

rsync -avz --delete --chmod=g-w \
      --filter='exclude *~' \
      --filter='exclude .git*' \
      ./ {{ domain_name }}:/home/www/
The following tasks install the webupdate script from private/ and create Monkey's cron job. (The example webupdate script was given above.)
roles_t/core/tasks/main.yml
- name: "Install Monkey's webupdate script." become: yes copy: src: ../private/webupdate dest: /usr/local/sbin/webupdate mode: u=rx,g=rx,o= owner: monkey group: staff - name: "Create Monkey's webupdate job." become: yes cron: minute: "*/15" job: "[ -d /WWW/live ] && /usr/local/sbin/webupdate" name: webupdate user: monkey
8.20. Configure OpenVPN Connection to Front
Core connects to Front's public VPN to provide members abroad with a route to the campus networks. As described in the configuration of Front's OpenVPN service, Front expects Core to connect using a client certificate with Common Name Core.

Core's OpenVPN client configuration uses the Debian default Systemd service unit to keep Core connected to Front. The configuration is installed in /etc/openvpn/front.conf so the Systemd service is called openvpn@front.
openvpn-core
client
dev-type tun
dev ovpn
remote {{ front_addr }}
nobind
<<openvpn-drop-priv>>
<<openvpn-crypt>>
remote-cert-tls server
verify-x509-name {{ domain_name }} name
verb 3
ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
cert client.crt
key client.key
tls-auth ta.key 1
The tasks that install and configure the OpenVPN client configuration for Core.
roles_t/core/tasks/main.yml
- name: Install OpenVPN. become: yes apt: pkg=openvpn - name: Enable IP forwarding. become: yes sysctl: name: net.ipv4.ip_forward value: "1" state: present - name: Install OpenVPN secret. become: yes copy: src: ../Secret/front-ta.key dest: /etc/openvpn/ta.key mode: u=r,g=,o= notify: Restart OpenVPN. - name: Install OpenVPN client certificate/key. become: yes copy: src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} dest: /etc/openvpn/client.{{ item.typ }} mode: "{{ item.mode }}" loop: - { path: "issued/core", typ: crt, mode: "u=r,g=r,o=r" } - { path: "private/core", typ: key, mode: "u=r,g=,o=" } notify: Restart OpenVPN. - name: Configure OpenVPN. become: yes copy: content: | <<openvpn-core>> dest: /etc/openvpn/front.conf mode: u=r,g=r,o= notify: Restart OpenVPN. - name: Enable/Start OpenVPN. become: yes systemd: service: openvpn@front state: started enabled: yes
roles_t/core/handlers/main.yml
- name: Restart OpenVPN.
  become: yes
  systemd:
    service: openvpn@front
    state: restarted
8.21. Configure NAGIOS
Core runs a nagios4 server to monitor "services" on institute hosts. The following tasks install the necessary packages and configure the server. The last task installs the monitoring configuration in /etc/nagios4/conf.d/institute.cfg. This configuration file, nagios.cfg, is tangled from code blocks described in subsequent subsections.

The institute NAGIOS configuration includes a customized version of the check_sensors plugin named inst_sensors. Both versions rely on the sensors command (from the lm-sensors package). The custom version (below) is installed in /usr/local/sbin/inst_sensors on both Core and Campus (and thus Gate) machines.
roles_t/core/tasks/main.yml
- name: Install NAGIOS4. become: yes apt: pkg: [ nagios4, monitoring-plugins-basic, nagios-nrpe-plugin, lm-sensors ] - name: Install inst_sensors NAGIOS plugin. become: yes copy: src: inst_sensors dest: /usr/local/sbin/inst_sensors mode: u=rwx,g=rx,o=rx - name: Configure NAGIOS4. become: yes lineinfile: path: /etc/nagios4/nagios.cfg regexp: "{{ item.regexp }}" line: "{{ item.line }}" backrefs: yes loop: - { regexp: "^( *cfg_file *= *localhost.cfg)", line: "# \\1" } - { regexp: "^( *admin_email *= *)", line: "\\1{{ ansible_user }}@localhost" } notify: Reload NAGIOS4. - name: Configure NAGIOS4 contacts. become: yes lineinfile: path: /etc/nagios4/objects/contacts.cfg regexp: "^( *email +)" line: "\\1sysadm@localhost" backrefs: yes notify: Reload NAGIOS4. - name: Configure NAGIOS4 monitors. become: yes template: src: nagios.cfg dest: /etc/nagios4/conf.d/institute.cfg notify: Reload NAGIOS4. - name: Enable/Start NAGIOS4. become: yes systemd: service: nagios4 enabled: yes state: started
roles_t/core/handlers/main.yml
- name: Reload NAGIOS4.
  become: yes
  systemd:
    service: nagios4
    state: reloaded
8.21.1. Configure NAGIOS Monitors for Core
The first block in nagios.cfg specifies monitors for services on Core. The monitors are simple, local plugins, and the block is very similar to the default objects/localhost.cfg file. The commands used here may specify plugin arguments.
roles_t/core/templates/nagios.cfg
define host { use linux-server host_name core address 127.0.0.1 } define service { use local-service host_name core service_description Root Partition check_command check_local_disk!20%!10%!/ } define service { use local-service host_name core service_description Current Users check_command check_local_users!20!50 } define service { use local-service host_name core service_description Zombie Processes check_command check_local_procs!5!10!Z } define service { use local-service host_name core service_description Total Processes check_command check_local_procs!150!200!RSZDT } define service { use local-service host_name core service_description Current Load check_command check_local_load!5.0,4.0,3.0!10.0,6.0,4.0 } define service { use local-service host_name core service_description Swap Usage check_command check_local_swap!20%!10% } define service { use local-service host_name core service_description SSH check_command check_ssh } define service { use local-service host_name core service_description HTTP check_command check_http }
8.21.2. Custom NAGIOS Monitor inst_sensors
The check_sensors plugin is included in the package monitoring-plugins-basic, but it does not report any readings. The small institute substitutes a slightly modified version, inst_sensors, that reports core CPU temperatures.
roles_t/core/files/inst_sensors
#!/bin/sh PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" export PATH PROGNAME=`basename $0` REVISION="2.3.1" . /usr/lib/nagios/plugins/utils.sh print_usage() { echo "Usage: $PROGNAME" [--ignore-fault] } print_help() { print_revision $PROGNAME $REVISION echo "" print_usage echo "" echo "This plugin checks hardware status using the lm_sensors package." echo "" support exit $STATE_OK } brief_data() { echo "$1" | sed -n -E -e ' /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H } $ { x; s/\n//g; p }' } case "$1" in --help) print_help exit $STATE_OK ;; -h) print_help exit $STATE_OK ;; --version) print_revision $PROGNAME $REVISION exit $STATE_OK ;; -V) print_revision $PROGNAME $REVISION exit $STATE_OK ;; *) sensordata=`sensors 2>&1` status=$? if test ${status} -eq 127; then text="SENSORS UNKNOWN - command not found" text="$text (did you install lmsensors?)" exit=$STATE_UNKNOWN elif test ${status} -ne 0; then text="WARNING - sensors returned state $status" exit=$STATE_WARNING elif echo ${sensordata} | egrep ALARM > /dev/null; then text="SENSOR CRITICAL -`brief_data "${sensordata}"`" exit=$STATE_CRITICAL elif echo ${sensordata} | egrep FAULT > /dev/null \ && test "$1" != "-i" -a "$1" != "--ignore-fault"; then text="SENSOR UNKNOWN - Sensor reported fault" exit=$STATE_UNKNOWN else text="SENSORS OK -`brief_data "${sensordata}"`" exit=$STATE_OK fi echo "$text" if test "$1" = "-v" -o "$1" = "--verbose"; then echo ${sensordata} fi exit $exit ;; esac
The following block defines the command and monitors it (locally) on Core.
roles_t/core/templates/nagios.cfg
define command {
    command_name            inst_sensors
    command_line            /usr/local/sbin/inst_sensors
}

define service {
    use                     local-service
    host_name               core
    service_description     Temperature Sensors
    check_command           inst_sensors
}
8.21.3. Configure NAGIOS Monitors for Remote Hosts
The following sections contain code blocks specifying monitors for services on other campus hosts. The NAGIOS server on Core will contact the NAGIOS Remote Plugin Executor (NRPE) servers on the other campus hosts and request the results of several commands. For security reasons, the NRPE servers do not accept command arguments.
The institute defines several NRPE commands, using an inst_ prefix to distinguish their names. The commands take no arguments but execute a plugin with pre-defined arguments appropriate for the institute. The commands are defined in code blocks interleaved with the blocks that monitor them. The command blocks are appended to nrpe.cfg and the monitoring blocks to nagios.cfg. The nrpe.cfg file is installed on each campus host by the campus role's Configure NRPE tasks.
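When debugging, an NRPE command can be exercised from Core just as NAGIOS would run it, using the check_nrpe plugin installed by the nagios-nrpe-plugin package. A sketch, assuming Gate's private name resolves and its NRPE server allows Core:

/usr/lib/nagios/plugins/check_nrpe -H gate.small.private -c inst_root
# e.g.: DISK OK - free space: / 5120 MiB (52% inode=81%); ...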
8.21.4. Configure NAGIOS Monitors for Gate
Define the monitored host, gate. Monitor its response to network pings.
roles_t/core/templates/nagios.cfg
define host {
use linux-server
host_name gate
address {{ gate_addr }}
}
For all campus NRPE servers: an inst_root command to check the free space on the root partition.
roles_t/campus/files/nrpe.cfg
command[inst_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
Monitor inst_root on Gate.
roles_t/core/templates/nagios.cfg
define service {
use generic-service
host_name gate
service_description Root Partition
check_command check_nrpe!inst_root
}
Monitor check_load on Gate.
roles_t/core/templates/nagios.cfg
define service {
use generic-service
host_name gate
service_description Current Load
check_command check_nrpe!check_load
}
Monitor check_zombie_procs and check_total_procs on Gate.
roles_t/core/templates/nagios.cfg
define service {
    use                     generic-service
    host_name               gate
    service_description     Zombie Processes
    check_command           check_nrpe!check_zombie_procs
}

define service {
    use                     generic-service
    host_name               gate
    service_description     Total Processes
    check_command           check_nrpe!check_total_procs
}
For all campus NRPE servers: an inst_swap command to check the swap usage.
roles_t/campus/files/nrpe.cfg
command[inst_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10%
Monitor inst_swap on Gate.
roles_t/core/templates/nagios.cfg
define service {
use generic-service
host_name gate
service_description Swap Usage
check_command check_nrpe!inst_swap
}
For all campus NRPE servers: an inst_sensors command to report core CPU temperatures.
roles_t/campus/files/nrpe.cfg
command[inst_sensors]=/usr/local/sbin/inst_sensors
Monitor inst_sensors on Gate.
roles_t/core/templates/nagios.cfg
define service {
use generic-service
host_name gate
service_description Temperature Sensors
check_command check_nrpe!inst_sensors
}
8.22. Configure Backups
The following task installs the backup script from private/. An example script is provided here.
roles_t/core/tasks/main.yml
- name: Install backup script.
become: yes
copy:
src: ../private/backup
dest: /usr/local/sbin/backup
mode: u=rx,g=r,o=
8.23. Configure Nextcloud
Core runs Nextcloud to provide a private institute cloud, as described in The Cloud Service. Installing, restoring (from backup), and upgrading Nextcloud are manual processes documented in The Nextcloud Admin Manual, Maintenance. However Ansible can help prepare Core before an install or restore, and perform basic security checks afterwards.
8.23.1. Prepare Core For Nextcloud
The Ansible code contained herein prepares Core to run Nextcloud by installing required software packages, configuring the web server, and installing a cron job.
roles_t/core/tasks/main.yml
- name: Install packages required by Nextcloud.
  become: yes
  apt:
    pkg: [ apache2, mariadb-server, php, php-apcu, php-bcmath,
           php-curl, php-gd, php-gmp, php-json, php-mysql,
           php-mbstring, php-intl, php-imagick, php-xml, php-zip,
           libapache2-mod-php ]
Next, a number of Apache2 modules are enabled.
roles_t/core/tasks/main.yml
- name: Enable Apache2 modules for Nextcloud.
become: yes
apache2_module:
name: "{{ item }}"
loop: [ rewrite, headers, env, dir, mime ]
The Apache2 configuration is then extended with the following /etc/apache2/sites-available/nextcloud.conf file, which is installed and enabled with a2ensite. The same configuration lines are given in the "Installation on Linux" section of the Nextcloud Server Administration Guide (sub-section "Apache Web server configuration").
roles_t/core/files/nextcloud.conf
Alias /nextcloud "/var/www/nextcloud/"
<Directory /var/www/nextcloud/>
Require all granted
AllowOverride All
Options FollowSymlinks MultiViews
<IfModule mod_dav.c>
Dav off
</IfModule>
</Directory>
roles_t/core/tasks/main.yml
- name: Install Nextcloud web configuration.
  become: yes
  copy:
    src: nextcloud.conf
    dest: /etc/apache2/sites-available/nextcloud.conf
  notify: Restart Apache2.

- name: Enable Nextcloud web configuration.
  become: yes
  command:
    cmd: a2ensite nextcloud
    creates: /etc/apache2/sites-enabled/nextcloud.conf
  notify: Restart Apache2.
The institute supports "Service discovery" as recommended at the end
of the "Apache Web server configuration" subsection. The prescribed
rewrite rules are included in a Directory
block for the default
virtual host's document root.
roles_t/core/files/nextcloud.conf
<Directory /var/www/html/>
        <IfModule mod_rewrite.c>
          RewriteEngine on
          # LogLevel alert rewrite:trace3
          RewriteRule ^\.well-known/carddav \
                      /nextcloud/remote.php/dav [R=301,L]
          RewriteRule ^\.well-known/caldav \
                      /nextcloud/remote.php/dav [R=301,L]
          RewriteRule ^\.well-known/webfinger \
                      /nextcloud/index.php/.well-known/webfinger [R=301,L]
          RewriteRule ^\.well-known/nodeinfo \
                      /nextcloud/index.php/.well-known/nodeinfo [R=301,L]
        </IfModule>
</Directory>
The institute also includes additional Apache2 configuration recommended by Nextcloud 20's Settings > Administration > Overview web page. The following portion of nextcloud.conf sets a Strict-Transport-Security header with a max-age of 6 months.
roles_t/core/files/nextcloud.conf
<IfModule mod_headers.c>
Header always set \
Strict-Transport-Security "max-age=15552000; includeSubDomains"
</IfModule>
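Whether the header is actually being sent can be checked from any campus host after Apache2 restarts. A quick check, assuming the server-level Header directive is inherited by the responding virtual host:

curl -sI http://core.small.private/nextcloud/ | grep -i strict-transport
# Strict-Transport-Security: max-age=15552000; includeSubDomains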
Nextcloud's directories and files are typically readable only by the web server's user www-data and the www-data group. The administrator is added to this group to ease (speed) the debugging of cloud FUBARs.
roles_t/core/tasks/main.yml
- name: Add {{ ansible_user }} to web server group.
become: yes
user:
name: "{{ ansible_user }}"
append: yes
groups: www-data
Nextcloud is configured with a cron job to run periodic background jobs.
roles_t/core/tasks/main.yml
- name: Create Nextcloud cron job.
become: yes
cron:
minute: 11,26,41,56
job: >-
[ -r /var/www/nextcloud/cron.php ]
&& /usr/bin/php -f /var/www/nextcloud/cron.php
name: Nextcloud
user: www-data
Nextcloud's MariaDB database (and user) are created by the following tasks. The user's password is taken from the nextcloud_dbpass variable, kept in private/vars.yml, and generated e.g. with the apg -n 1 -x 12 -m 12 command.
private/vars.yml
nextcloud_dbpass: ippAgmaygyob
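For example, a hedged one-liner like the following (assuming apg is installed, as above) prints a line ready to paste into private/vars.yml.

echo "nextcloud_dbpass: $(apg -n 1 -x 12 -m 12)"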
When the mysql_db Ansible module supports check_implicit_admin, the following task can create Nextcloud's DB.
- name: Create Nextcloud DB.
  become: yes
  mysql_db:
    check_implicit_admin: yes
    name: nextcloud
    collation: utf8mb4_general_ci
    encoding: utf8mb4
Unfortunately it does not support it currently, and the institute prefers the more secure Unix socket authentication method. Rather than create such a user, the nextcloud database and nextclouduser user are created manually.
The following task would work (mysql_user supports check_implicit_admin) but the nextcloud database was not created above. Thus both database and user are created manually, with the SQL given in subsection 8.23.5 below, before occ maintenance:install can run.
- name: Create Nextcloud DB user.
  become: yes
  mysql_user:
    check_implicit_admin: yes
    name: nextclouduser
    password: "{{ nextcloud_dbpass }}"
    update_password: always
    priv: 'nextcloud.*:all'
Finally, a symbolic link positions /Nextcloud/nextcloud/ at /var/www/nextcloud/ as expected by the Apache2 configuration above. Nextcloud itself should always believe that /var/www/nextcloud/ is its document root.
roles_t/core/tasks/main.yml
- name: Link /var/www/nextcloud.
  become: yes
  file:
    path: /var/www/nextcloud
    src: /Nextcloud/nextcloud
    state: link
    force: yes
    follow: no
8.23.2. Configure PHP
The following tasks set a number of PHP parameters for better performance, as recommended by Nextcloud.
roles_t/core/tasks/main.yml
- name: Set PHP memory_limit for Nextcloud.
  become: yes
  lineinfile:
    path: /etc/php/8.2/apache2/php.ini
    regexp: memory_limit *=
    line: memory_limit = 768M

- name: Include PHP parameters for Nextcloud.
  become: yes
  copy:
    content: |
      ; priority=20
      apc.enable_cli=1
      opcache.enable=1
      opcache.enable_cli=1
      opcache.interned_strings_buffer=12
      opcache.max_accelerated_files=10000
      opcache.memory_consumption=128
      opcache.save_comments=1
      opcache.revalidate_freq=1
    dest: /etc/php/8.2/mods-available/nextcloud.ini
  notify: Restart Apache2.

- name: Enable Nextcloud PHP modules.
  become: yes
  command:
    cmd: phpenmod {{ item }}
    creates: /etc/php/8.2/apache2/conf.d/20-{{ item }}.ini
  loop: [ nextcloud, apcu ]
  notify: Restart Apache2.
8.23.3. Create /Nextcloud/
The Ansible tasks up to this point have completed Core's LAMP stack and made Core ready to run Nextcloud, but they have not installed Nextcloud. Nextcloud must be manually installed or restored from a backup copy. Until then, attempts to access the institute cloud will just produce errors.
Installing or restoring Nextcloud starts by creating the /Nextcloud/ directory. It may be a separate disk or just a new directory on an existing partition. The commands involved will vary greatly depending on circumstances, but the following examples might be helpful.
The following command lines create /Nextcloud/ in the root partition. This is appropriate for one-partition machines like the test machines.
sudo mkdir /Nextcloud
sudo chmod 775 /Nextcloud
The following command lines create /Nextcloud/ on an existing, large, separate (from the root) partition. A popular choice for such a second partition is one mounted at /home/.
sudo mkdir /home/nextcloud
sudo chmod 775 /home/nextcloud
sudo ln -s /home/nextcloud /Nextcloud
These commands create /Nextcloud/ on an entire (without partitioning) second hard drive, /dev/sdb.
sudo mkfs -t ext4 /dev/sdb
sudo mkdir /Nextcloud
echo "/dev/sdb /Nextcloud ext4 errors=remount-ro 0 2" \
| sudo tee -a /etc/fstab >/dev/null
sudo mount /Nextcloud
8.23.4. Restore Nextcloud
Restoring Nextcloud in the newly created /Nextcloud/ presumably starts with plugging in the portable backup drive and unlocking it so that it is automounted at /media/sysadm/Backup per its drive label: Backup. Assuming this, the following command restores /Nextcloud/ from the backup (and can be repeated as many times as necessary to get a successful, complete copy).
rsync -a /media/sysadm/Backup/Nextcloud/ /Nextcloud/
Mirroring a backup onto a new server may cause UID/GID mismatches. All of the files in /Nextcloud/nextcloud/ must be owned by user www-data and group www-data. If not, the following command will make it so.
sudo chown -R www-data.www-data /Nextcloud/nextcloud/
The database is restored with the following commands, which assume the last dump was made on February 20th 2022 and thus was saved in /Nextcloud/20220220.bak. The database will need to be created first, as when installing Nextcloud. The appropriate SQL is given in Install Nextcloud below.
cd /Nextcloud/
sudo mysql --defaults-file=dbbackup.cnf nextcloud < 20220220.bak
cd nextcloud/
sudo -u www-data php occ maintenance:data-fingerprint
Finally the administrator surfs to http://core/nextcloud/, authenticates, and addresses any warnings on the Administration > Overview web page.
8.23.5. Install Nextcloud
Installing Nextcloud in the newly created /Nextcloud/ starts with downloading and verifying a recent release tarball. The following example command lines unpacked Nextcloud 23 in nextcloud/ in /Nextcloud/ and set the ownerships and permissions of the new directories and files.
cd /Nextcloud/
tar xjf ~/Downloads/nextcloud-23.0.0.tar.bz2
sudo chown -R www-data.www-data nextcloud
sudo find nextcloud -type d -exec chmod 750 {} \;
sudo find nextcloud -type f -exec chmod 640 {} \;
According to the latest installation instructions in version 24's administration guide, after unpacking and setting file permissions, the following occ command takes care of everything. This command currently expects Nextcloud's database and user to exist. The following SQL commands create the database and user (entered at the SQL prompt of the sudo mysql command). The shell command then runs occ.
create database nextcloud
    character set utf8mb4
    collate utf8mb4_general_ci;
grant all on nextcloud.*
    to 'nextclouduser'@'localhost'
    identified by 'ippAgmaygyobwyt5';
flush privileges;
cd /var/www/nextcloud/
sudo -u www-data php occ maintenance:install \
     --data-dir=/var/www/nextcloud/data \
     --database=mysql --database-name=nextcloud \
     --database-user=nextclouduser \
     --database-pass=ippAgmaygyobwyt5 \
     --admin-user=sysadm --admin-pass=PASSWORD
The nextcloud/config/config.php is created by the above command, but gets the trusted_domains and overwrite.cli.url settings wrong, using localhost where core.small.private is wanted. The only way the institute cloud should be accessed is by that name, so adjusting the config.php file is straightforward. The settings should be corrected by hand for immediate testing, but the "Afterwards" tasks (below) will check (or update) these settings when Core is next checked (or updated) e.g. with ./inst config -n core.
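For immediate testing, the settings can also be adjusted with occ rather than a text editor. The following is only a sketch; it assumes the private domain is small.private and that occ config:system:set behaves as documented for recent Nextcloud releases.

cd /var/www/nextcloud/
sudo -u www-data php occ config:system:set trusted_domains 0 \
     --value=core.small.private
sudo -u www-data php occ config:system:set overwrite.cli.url \
     --value=http://core.small.private/nextcloud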
Before calling Nextcloud "configured", the administrator runs ./inst config core, surfs to http://core.small.private/nextcloud/, logs in as sysadm, and follows any reasonable instructions (reasonable for a small organization) on the Administration > Overview page.
8.23.6. Afterwards
Whether Nextcloud was restored or installed, there are a few things Ansible can do to bolster reliability and security (aka privacy). These Nextcloud "Afterwards" tasks would fail if they executed before Nextcloud was installed, so the first "afterwards" task probes for /Nextcloud/nextcloud and registers the file status with the nextcloud variable. The nextcloud.stat.exists condition on the afterwards tasks causes them to skip rather than fail.
roles_t/core/tasks/main.yml
- name: Test for /Nextcloud/nextcloud/.
stat:
path: /Nextcloud/nextcloud
register: nextcloud
- debug:
msg: "/Nextcloud/ does not yet exist"
when: not nextcloud.stat.exists
The institute installed Nextcloud with the occ maintenance:install command, which produced a simple nextcloud/config/config.php with incorrect trusted_domains and overwrite.cli.url settings. These are fixed during installation, but the institute may also have restored Nextcloud, including the config.php file. (This file is edited by the web scripts and so is saved/restored in the backup copy.) The restored settings may be different from those Ansible used to create the database user.
The following task checks (or updates) the trusted_domains and dbpassword settings, to ensure they are consistent with the Ansible variables domain_priv and nextcloud_dbpass. The overwrite.cli.url setting is fixed by the tasks that implement Pretty URLs (below).
roles_t/core/tasks/main.yml
- name: Configure Nextcloud trusted domains.
  become: yes
  replace:
    path: /var/www/nextcloud/config/config.php
    regexp: "^( *)'trusted_domains' *=>[^)]*[)],$"
    replace: |-
      \1'trusted_domains' =>
      \1array (
      \1  0 => 'core.{{ domain_priv }}',
      \1),
  when: nextcloud.stat.exists

- name: Configure Nextcloud dbpasswd.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "^ *'dbpassword' *=> *'.*', *$"
    line: "  'dbpassword' => '{{ nextcloud_dbpass }}',"
    insertbefore: "^[)];"
    firstmatch: yes
  when: nextcloud.stat.exists
The institute uses the php-apcu package to provide Nextcloud with a local memory cache. The following memcache.local Nextcloud setting enables it.
roles_t/core/tasks/main.yml
- name: Configure Nextcloud memcache.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "^ *'memcache.local' *=> *'.*', *$"
    line: "  'memcache.local' => '\\\\OC\\\\Memcache\\\\APCu',"
    insertbefore: "^[)];"
    firstmatch: yes
  when: nextcloud.stat.exists
The institute implements Pretty URLs as described in the Pretty URLs subsection of the "Installation on Linux" section of the "Installation and server configuration" chapter in the Nextcloud 22 Server Administration Guide. Two settings are updated: overwrite.cli.url and htaccess.RewriteBase.
roles_t/core/tasks/main.yml
- name: Configure Nextcloud for Pretty URLs.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    insertbefore: "^[)];"
    firstmatch: yes
  vars:
    url: http://core.{{ domain_priv }}/nextcloud
  loop:
  - regexp: "^ *'overwrite.cli.url' *=>"
    line: "  'overwrite.cli.url' => '{{ url }}',"
  - regexp: "^ *'htaccess.RewriteBase' *=>"
    line: "  'htaccess.RewriteBase' => '/nextcloud',"
  when: nextcloud.stat.exists
The institute sets Nextcloud's default_phone_region mainly to avoid a complaint on the Settings > Administration > Overview web page.
private/vars.yml
nextcloud_region: US
roles_t/core/tasks/main.yml
- name: Configure Nextcloud phone region.
  become: yes
  lineinfile:
    path: /var/www/nextcloud/config/config.php
    regexp: "^ *'default_phone_region' *=> *'.*', *$"
    line: "  'default_phone_region' => '{{ nextcloud_region }}',"
    insertbefore: "^[)];"
    firstmatch: yes
  when: nextcloud.stat.exists
The next two tasks create /Nextcloud/dbbackup.cnf if it does not exist, and check the password setting in it when it does. The file should never be world readable (and probably shouldn't be group readable). It is needed by the institute's backup command, so ./inst config, and in particular these next two tasks, needs to run before the next backup.
roles_t/core/tasks/main.yml
- name: Create /Nextcloud/dbbackup.cnf.
  no_log: yes
  become: yes
  copy:
    content: |
      [mysqldump]
      no-tablespaces
      single-transaction
      host=localhost
      user=nextclouduser
      password={{ nextcloud_dbpass }}
    dest: /Nextcloud/dbbackup.cnf
    mode: g=,o=
    force: no
  when: nextcloud.stat.exists

- name: Update /Nextcloud/dbbackup.cnf password.
  become: yes
  lineinfile:
    path: /Nextcloud/dbbackup.cnf
    regexp: password=
    line: password={{ nextcloud_dbpass }}
  when: nextcloud.stat.exists
9. The Gate Role
The gate role configures the services expected at the campus gate: a VPN into the campus network via a campus Wi-Fi access point, and Internet access via NAT. The gate machine uses three network interfaces (see The Gate Machine) configured with persistent names used in its firewall rules.
lan
- The campus Ethernet.
wifi
- The campus Wi-Fi AP.
isp
- The campus ISP.
Requiring a VPN to access the campus network from the campus Wi-Fi bolsters the native Wi-Fi encryption and frustrates non-RYF (Respects Your Freedom) wireless equipment.
Gate is also a campus machine, so the more generic campus role is applied first, by which Gate gets a campus machine's DNS and Postfix configurations, etc.
9.1. Include Particulars
The following should be familiar boilerplate by now.
roles_t/gate/tasks/main.yml
---
- name: Include public variables.
include_vars: ../public/vars.yml
tags: accounts
- name: Include private variables.
include_vars: ../private/vars.yml
tags: accounts
- name: Include members.
include_vars: "{{ lookup('first_found', membership_rolls) }}"
tags: accounts
9.2. Configure Netplan
Gate's network interfaces are configured using Netplan and two files. /etc/netplan/60-gate.yaml describes the static interfaces, to the campus Ethernet and Wi-Fi. /etc/netplan/60-isp.yaml is expected to be revised more frequently as the campus ISP changes.

Netplan is configured to identify the interfaces by their MAC addresses, which must be provided in private/vars.yml, as in the example code here.
private/vars.yml
gate_lan_mac: 08:00:27:f3:16:79
gate_isp_mac: 08:00:27:3d:42:e5
gate_wifi_mac: 08:00:27:4a:de:d2
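The MAC addresses can be read off the machine itself, for example with the following command (assuming the stock iproute2 tools are installed, as on any Debian system), which lists each interface with its address in brief form.

ip -br link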
The following tasks install the two configuration files and apply the new network plan.
roles_t/gate/tasks/main.yml
- name: Install netplan (gate).
  become: yes
  apt: pkg=netplan.io

- name: Configure netplan (gate).
  become: yes
  copy:
    content: |
      network:
        ethernets:
          lan:
            match:
              macaddress: {{ gate_lan_mac }}
            addresses: [ {{ gate_addr_cidr }} ]
            set-name: lan
            dhcp4: false
            nameservers:
              addresses: [ {{ core_addr }} ]
              search: [ {{ domain_priv }} ]
            routes:
              - to: {{ public_vpn_net_cidr }}
                via: {{ core_addr }}
          wifi:
            match:
              macaddress: {{ gate_wifi_mac }}
            addresses: [ {{ gate_wifi_addr_cidr }} ]
            set-name: wifi
            dhcp4: false
    dest: /etc/netplan/60-gate.yaml
    mode: u=rw,g=r,o=
  notify: Apply netplan.

- name: Install netplan (ISP).
  become: yes
  copy:
    content: |
      network:
        ethernets:
          isp:
            match:
              macaddress: {{ gate_isp_mac }}
            set-name: isp
            dhcp4: true
            dhcp4-overrides:
              use-dns: false
    dest: /etc/netplan/60-isp.yaml
    mode: u=rw,g=r,o=
    force: no
  notify: Apply netplan.
roles_t/gate/handlers/main.yml
---
- name: Apply netplan.
  become: yes
  command: netplan apply
Note that the 60-isp.yaml file is only updated (created) if it does not already exist, so that it can be easily modified to debug a new campus ISP without interference from Ansible.
9.3. UFW Rules
Gate uses the Uncomplicated FireWall (UFW) to install its packet
filters at boot-time. The institute does not use a firewall except to
configure Network Address Translation (NAT) and forwarding. Members
expect to be able to exercise experimental services on random ports.
The default policy settings in /etc/default/ufw are ACCEPT and ACCEPT for input and output, and DROP for forwarded packets.

Forwarding was enabled in the kernel previously (when configuring OpenVPN) using Ansible's sysctl module. It does not need to be set in /etc/ufw/sysctl.conf.
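A quick check on Gate confirms that forwarding is already on; the sysctl key shown is the one set by the OpenVPN tasks below.

sysctl net.ipv4.ip_forward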
NAT is enabled per the ufw-framework(8) manual page, by introducing nat table rules in a block at the end of /etc/ufw/before.rules. They translate packets going to the ISP. These can come from the private Ethernet or campus Wi-Fi. Hosts on the other institute networks (the two VPNs) should not be routing their Internet traffic through their VPN.
ufw-nat
-A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE
-A POSTROUTING -s {{ gate_wifi_net_cidr }} -o isp -j MASQUERADE
Forwarding rules are also needed. The nat table is a post-routing rule set, so the default forward policy (DROP) would drop packets before NAT can translate them. The following rules are added to allow packets to be forwarded from the campus Ethernet or Gate-WiFi subnet to an ISP on the isp interface, and back (if related to an outgoing packet).
ufw-forward-nat
-A FORWARD -i lan -o isp -j ACCEPT
-A FORWARD -i wifi -o isp -j ACCEPT
-A FORWARD -i isp -o lan {{ ACCEPT_RELATED }}
-A FORWARD -i isp -o wifi {{ ACCEPT_RELATED }}
To keep the above code lines short, the template references an ACCEPT_RELATED variable, provided by the task, whose value includes the following iptables(8) rule specification parameters.
-m state --state ESTABLISHED,RELATED -j ACCEPT
If "the standard iptables-restore
syntax" as it is described in the
ufw-framework
manual page, allows continuation lines, please let us
know!
Forwarding rules are also needed to route packets from the campus VPN (the ovpn tunnel device) to the institute's LAN and back. The public VPN on Front will also be included since its packets arrive at Gate's lan interface, coming from Core. Thus forwarding between public and campus VPNs is also allowed.
ufw-forward-private
-A FORWARD -i lan -o ovpn -j ACCEPT
-A FORWARD -i ovpn -o lan -j ACCEPT
Note that there are no forwarding rules to allow packets to pass from the wifi device to the lan device, just the ovpn device.
9.4. Install UFW
The following tasks install the Uncomplicated Firewall (UFW), set its policy in /etc/default/ufw, and install the above rules in /etc/ufw/before.rules. When Gate is configured by ./abbey config gate as in the example bootstrap, enabling the firewall should not be a problem. But when configuring a new gate with ./abbey config new-gate, enabling the firewall could break Ansible's current and future SSH sessions. For this reason, Ansible does not enable the firewall. The administrator must log in and execute the following command after Gate is configured or the new gate is "in position" (connected to old Gate's wifi and isp networks).
sudo ufw enable
roles_t/gate/tasks/main.yml
- name: Install UFW.
  become: yes
  apt: pkg=ufw

- name: Configure UFW policy.
  become: yes
  lineinfile:
    path: /etc/default/ufw
    line: "{{ item.line }}"
    regexp: "{{ item.regexp }}"
  loop:
  - { line: "DEFAULT_INPUT_POLICY=\"ACCEPT\"",
      regexp: "^DEFAULT_INPUT_POLICY=" }
  - { line: "DEFAULT_OUTPUT_POLICY=\"ACCEPT\"",
      regexp: "^DEFAULT_OUTPUT_POLICY=" }
  - { line: "DEFAULT_FORWARD_POLICY=\"DROP\"",
      regexp: "^DEFAULT_FORWARD_POLICY=" }

- name: Configure UFW rules.
  become: yes
  vars:
    ACCEPT_RELATED: -m state --state ESTABLISHED,RELATED -j ACCEPT
  blockinfile:
    path: /etc/ufw/before.rules
    block: |
      *nat
      :POSTROUTING ACCEPT [0:0]
      <<ufw-nat>>
      COMMIT

      *filter
      <<ufw-forward-nat>>
      <<ufw-forward-private>>
      COMMIT
    insertafter: EOF
9.5. Configure DHCP For The Gate-WiFi Ethernet
To accommodate commodity Wi-Fi access points without re-configuring them, the institute attempts to look like an up-link, an ISP, e.g. a cable modem. Thus it expects the wireless AP to route non-local traffic out its WAN Ethernet port, and to get an IP address for the WAN port using DHCP. Thus Gate runs ISC's DHCP daemon configured to listen on one network interface, recognize exactly one client host, and provide that one client with an IP address and customary network parameters (default route, time server, etc.).
Two Ansible variables are needed to configure Gate's DHCP service, specifically the sole subnet host: wifi_wan_name is any word appropriate for identifying the Wi-Fi AP, and wifi_wan_mac is the AP's MAC address.
private/vars.yml
wifi_wan_mac: 94:83:c4:19:7d:57
wifi_wan_name: campus-wifi-ap
If Gate is configured with ./abbey config gate and then connected to actual networks (i.e. not rebooted), the following command is executed. If a new gate was configured with ./abbey config new-gate and not rebooted, the following command would also be executed.
sudo systemctl start isc-dhcp-server
If physically moved or rebooted for some other reason, the above command would not be necessary.
Installation and configuration of the DHCP daemon follows. Note that the daemon listens only on the Gate-WiFi network interface. Also note the drop-in Requires dependency, without which the DHCP server intermittently fails, finding the wifi interface has no IPv4 addresses (or perhaps finding no wifi interface at all?).
roles_t/gate/tasks/main.yml
- name: Install DHCP server.
  become: yes
  apt: pkg=isc-dhcp-server

- name: Configure DHCP interface.
  become: yes
  lineinfile:
    path: /etc/default/isc-dhcp-server
    line: INTERFACESv4="wifi"
    regexp: ^INTERFACESv4=
  notify: Restart DHCP server.

- name: Configure DHCP server dependence on interface.
  become: yes
  copy:
    content: |
      [Unit]
      Requires=network-online.target
    dest: /etc/systemd/system/isc-dhcp-server.service.d/depend.conf
  notify: Reload Systemd.

- name: Configure DHCP for WiFiAP service.
  become: yes
  copy:
    content: |
      default-lease-time 3600;
      max-lease-time 7200;
      ddns-update-style none;
      authoritative;
      log-facility daemon;
      subnet {{ gate_wifi_net }} netmask {{ gate_wifi_net_mask }} {
        option subnet-mask {{ gate_wifi_net_mask }};
        option broadcast-address {{ gate_wifi_broadcast }};
        option routers {{ gate_wifi_addr }};
      }
      host {{ wifi_wan_name }} {
        hardware ethernet {{ wifi_wan_mac }};
        fixed-address {{ wifi_wan_addr }};
      }
    dest: /etc/dhcp/dhcpd.conf
  notify: Restart DHCP server.

- name: Enable DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    enabled: yes
roles_t/gate/handlers/main.yml
- name: Restart DHCP server.
  become: yes
  systemd:
    service: isc-dhcp-server
    state: restarted

- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes
9.6. Install Server Certificate
The (OpenVPN) server on Gate uses an institute certificate (and key) to authenticate itself to its clients. It uses the /etc/server.crt and /etc/server.key files just because the other servers (on Core and Front) do.
roles_t/gate/tasks/main.yml
- name: Install server certificate/key.
  become: yes
  copy:
    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
    dest: /etc/server.{{ item.typ }}
    mode: "{{ item.mode }}"
  loop:
  - { path: "issued/gate.{{ domain_priv }}", typ: crt,
      mode: "u=r,g=r,o=r" }
  - { path: "private/gate.{{ domain_priv }}", typ: key,
      mode: "u=r,g=,o=" }
  notify: Restart OpenVPN.
9.7. Configure OpenVPN
Gate uses OpenVPN to provide the institute's campus VPN service. Its clients are not configured to route all of their traffic through the VPN, so Gate pushes routes to the other institute networks. Gate itself is on the private Ethernet and thereby learns about the route to Front.
openvpn-gate-routes
push "route {{ private_net_and_mask }}" push "route {{ public_vpn_net_and_mask }}"
The complete OpenVPN configuration for Gate includes a server option, the pushed routes mentioned above, and the common options discussed in The VPN Services.
openvpn-gate
server {{ campus_vpn_net_and_mask }}
client-config-dir /etc/openvpn/ccd
<<openvpn-gate-routes>>
<<openvpn-dev-mode>>
<<openvpn-keepalive>>
<<openvpn-dns>>
<<openvpn-drop-priv>>
<<openvpn-crypt>>
<<openvpn-max>>
<<openvpn-debug>>
ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
cert /etc/server.crt
key /etc/server.key
dh dh2048.pem
tls-auth ta.key 0
Finally, here are the tasks (and handler) required to install and configure the OpenVPN server on Gate.
roles_t/gate/tasks/main.yml
- name: Install OpenVPN.
  become: yes
  apt: pkg=openvpn

- name: Enable IP forwarding.
  become: yes
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

- name: Create OpenVPN client configuration directory.
  become: yes
  file:
    path: /etc/openvpn/ccd
    state: directory
  notify: Restart OpenVPN.

- name: Disable former VPN clients.
  become: yes
  copy:
    content: "disable\n"
    dest: /etc/openvpn/ccd/{{ item }}
  loop: "{{ revoked }}"
  notify: Restart OpenVPN.
  tags: accounts

- name: Install OpenVPN secrets.
  become: yes
  copy:
    src: ../Secret/{{ item.src }}
    dest: /etc/openvpn/{{ item.dest }}
    mode: u=r,g=,o=
  loop:
  - { src: gate-dh2048.pem, dest: dh2048.pem }
  - { src: gate-ta.key, dest: ta.key }
  notify: Restart OpenVPN.

- name: Configure OpenVPN.
  become: yes
  copy:
    content: |
      <<openvpn-gate>>
    dest: /etc/openvpn/server.conf
    mode: u=r,g=r,o=
  notify: Restart OpenVPN.
roles_t/gate/handlers/main.yml
- name: Restart OpenVPN.
  become: yes
  systemd:
    service: openvpn@server
    state: restarted
10. The Campus Role
The campus role configures generic campus server machines: network NAS, DVRs, wireless sensors, etc. These are simple Debian machines administered remotely via Ansible. They should use the campus name server, sync with the campus time server, trust the institute certificate authority, and deliver email addressed to root to the system administrator's account on Core.

Wireless campus devices can get a key to the campus VPN from the ./inst client campus command, but their OpenVPN client must be configured manually.
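On a Debian device, the manual configuration might be as simple as the following sketch. This is a hypothetical example: it assumes the generated campus.ovpn file has already been copied to the device and that the stock openvpn-client systemd template (which reads /etc/openvpn/client/) is used.

sudo cp campus.ovpn /etc/openvpn/client/campus.conf
sudo systemctl enable --now openvpn-client@campus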
10.1. Include Particulars
The following should be familiar boilerplate by now.
roles_t/campus/tasks/main.yml
---
- name: Include public variables.
  include_vars: ../public/vars.yml

- name: Include private variables.
  include_vars: ../private/vars.yml
10.2. Configure Hostname
Clients should be using the expected host name.
roles_t/campus/tasks/main.yml
- name: Configure hostname.
  become: yes
  copy:
    content: "{{ item.content }}"
    dest: "{{ item.file }}"
  loop:
  - { file: /etc/hostname,
      content: "{{ inventory_hostname }}\n" }
  - { file: /etc/mailname,
      content: "{{ inventory_hostname }}.{{ domain_priv }}\n" }

- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
  when: inventory_hostname != ansible_hostname
10.3. Configure Systemd Resolved
Campus machines use the campus name server on Core (or dns.google), and include the institute's private domain in their search lists.
roles_t/campus/tasks/main.yml
- name: Configure resolved.
  become: yes
  lineinfile:
    path: /etc/systemd/resolved.conf
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  loop:
  - { regexp: '^ *DNS *=', line: "DNS={{ core_addr }}" }
  - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" }
  - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" }
  notify:
  - Reload Systemd.
  - Restart Systemd resolved.
roles_t/campus/handlers/main.yml
---
- name: Reload Systemd.
  become: yes
  systemd:
    daemon-reload: yes

- name: Restart Systemd resolved.
  become: yes
  systemd:
    service: systemd-resolved
    state: restarted
10.4. Configure Systemd Timesyncd
The institute uses a common time reference throughout the campus. This is essential to campus security, improving the accuracy of log and file timestamps.
roles_t/campus/tasks/main.yml
- name: Configure timesyncd.
become: yes
lineinfile:
path: /etc/systemd/timesyncd.conf
line: NTP=ntp.{{ domain_priv }}
notify: Restart systemd-timesyncd.
roles_t/campus/handlers/main.yml
- name: Restart systemd-timesyncd.
  become: yes
  systemd:
    service: systemd-timesyncd
    state: restarted
10.5. Add Administrator to System Groups
The administrator often needs to read (directories of) log files owned
by groups root
and adm
. Adding the administrator's account to
these groups speeds up debugging.
roles_t/campus/tasks/main.yml
- name: Add {{ ansible_user }} to system groups.
become: yes
user:
name: "{{ ansible_user }}"
append: yes
groups: root,adm
10.6. Install Unattended Upgrades
The institute prefers to install security updates as soon as possible.
roles_t/campus/tasks/main.yml
- name: Install basic software.
become: yes
apt: pkg=unattended-upgrades
10.7. Configure Postfix on Campus
The Postfix settings used by the campus include message size, queue
times, and the relayhost
Core. The default Debian configuration
(for an "Internet Site") is otherwise sufficient. Manual installation
may prompt for configuration type and mail name. The appropriate
answers are listed here but will be checked (corrected) by Ansible
tasks below.
- General type of mail configuration: Internet Site
- System mail name: new.small.private
roles_t/campus/tasks/main.yml
- name: Install Postfix.
  become: yes
  apt: pkg=postfix

- name: Configure Postfix.
  become: yes
  lineinfile:
    path: /etc/postfix/main.cf
    regexp: "^ *{{ item.p }} *="
    line: "{{ item.p }} = {{ item.v }}"
  loop:
  <<postfix-relaying>>
  <<postfix-message-size>>
  <<postfix-queue-times>>
  <<postfix-maildir>>
  - { p: myhostname,
      v: "{{ inventory_hostname }}.{{ domain_priv }}" }
  - { p: mydestination,
      v: "{{ postfix_mydestination | default('') }}" }
  - { p: relayhost, v: "[smtp.{{ domain_priv }}]" }
  - { p: inet_interfaces, v: loopback-only }
  notify: Restart Postfix.

- name: Enable/Start Postfix.
  become: yes
  systemd:
    service: postfix
    enabled: yes
    state: started
roles_t/campus/handlers/main.yml
- name: Restart Postfix.
  become: yes
  systemd:
    service: postfix
    state: restarted
10.8. Set Domain Name
The host's fully qualified (private) domain name (FQDN) is set by an
alias in its /etc/hosts
file, as is customary on Debian. (See "The
"recommended method of setting the FQDN" in the hostname(1)
manpage.)
roles_t/campus/tasks/main.yml
- name: Set domain name.
  become: yes
  vars:
    name: "{{ inventory_hostname }}"
  lineinfile:
    path: /etc/hosts
    regexp: "^127.0.1.1[ ].*"
    line: "127.0.1.1 {{ name }}.{{ domain_priv }} {{ name }}"
10.9. Configure NRPE
Each campus host runs an NRPE (a NAGIOS Remote Plugin Executor) server so that the NAGIOS4 server on Core can collect statistics. The NAGIOS service is discussed in the Configure NRPE section of The Core Role.
roles_t/campus/tasks/main.yml
- name: Install NRPE.
  become: yes
  apt:
    pkg: [ nagios-nrpe-server, lm-sensors ]

- name: Install inst_sensors NAGIOS plugin.
  become: yes
  copy:
    src: ../core/files/inst_sensors
    dest: /usr/local/sbin/inst_sensors
    mode: u=rwx,g=rx,o=rx

- name: Configure NRPE server.
  become: yes
  copy:
    content: |
      allowed_hosts=127.0.0.1,::1,{{ core_addr }}
    dest: /etc/nagios/nrpe_local.cfg
  notify: Reload NRPE server.

- name: Configure NRPE commands.
  become: yes
  copy:
    src: nrpe.cfg
    dest: /etc/nagios/nrpe.d/institute.cfg
  notify: Reload NRPE server.

- name: Enable/Start NRPE server.
  become: yes
  systemd:
    service: nagios-nrpe-server
    enabled: yes
    state: started
roles_t/campus/handlers/main.yml
- name: Reload NRPE server.
  become: yes
  systemd:
    service: nagios-nrpe-server
    state: reloaded
11. The Ansible Configuration
The small institute uses Ansible to maintain the configuration of its servers. The administrator keeps an Ansible inventory in hosts, and runs the site.yml playbook to apply the appropriate institutional role(s) to each host. Examples of these files are included here, and are used to test the roles. The example configuration applies the institutional roles to VirtualBox machines prepared according to chapter Testing.
The actual Ansible configuration is kept in a Git "superproject" containing replacements for the example hosts inventory and site.yml playbook, as well as the public/ and private/ particulars. Thus changes to this document and its tangle are easily merged with git pull --recurse-submodules or git submodule update, while changes to the institute's particulars are committed to a separate revision history.
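For example, an update in the superproject might look like the following sketch. The directory and submodule names are those of the example layout described below; adjust to taste.

cd ~/net/
git pull --recurse-submodules
git submodule update --init --recursive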
11.1. ansible.cfg
The Ansible configuration file ansible.cfg contains just a handful of settings, some included just to create a test jig as described in Testing.
- interpreter_python is set to suppress a warning from Ansible's "automatic interpreter discovery" (described here). It declares that Python 3 can be expected on all institute hosts.
- vault_password_file is set to suppress prompts for the vault password. The institute keeps its vault password in Secret/ (as described in Keys) and thus sets this parameter to Secret/vault-password.
- inventory is set to avoid specifying it on the command line.
- roles_path is set to the recently tangled roles files in roles_t/, which are preferred in the test configuration.
ansible.cfg
[defaults]
interpreter_python=/usr/bin/python3
vault_password_file=Secret/vault-password
inventory=hosts
roles_path=roles_t
11.2. hosts
The Ansible inventory file hosts describes all of the institute's machines, starting with the main servers Front, Core and Gate. It provides the IP addresses, administrator account names and passwords for each machine. The IP addresses are all private, campus network addresses except Front's public IP. The following example host file describes three test servers named front, core and gate.
hosts
all:
  vars:
    ansible_user: sysadm
    ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
  hosts:
    front:
      ansible_host: 192.168.57.3
      ansible_become_password: "{{ become_front }}"
    core:
      ansible_host: 192.168.56.1
      ansible_become_password: "{{ become_core }}"
    gate:
      ansible_host: 192.168.56.2
      ansible_become_password: "{{ become_gate }}"
  children:
    campus:
      hosts:
        gate:
The values of the ansible_become_password key are references to variables defined in Secret/become.yml, which is loaded as "extra" variables by a -e option on the ansible-playbook command line.
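For example, the site playbook is run with the same -e flag the ./inst script uses below.

ansible-playbook -e @Secret/become.yml playbooks/site.yml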
Secret/become.yml
become_front: !vault | $ANSIBLE_VAULT;1.1;AES256 3563626131333733666466393166323135383838666338666131336335326 3656437663032653333623461633866653462636664623938356563306264 3438660a35396630353065383430643039383239623730623861363961373 3376663366566326137386566623164313635303532393335363063333632 363163316436380a336562323739306231653561613837313435383230313 1653565653431356362 become_core: !vault | $ANSIBLE_VAULT;1.1;AES256 3464643665363937393937633432323039653530326465346238656530303 8633066663935316365376438353439333034666366363739616130643261 3232380a66356462303034636332356330373465623337393938616161386 4653864653934373766656265613636343334356361396537343135393663 313562613133380a373334393963623635653264663538656163613433383 5353439633234666134 become_gate: !vault | $ANSIBLE_VAULT;1.1;AES256 3138306434313739626461303736666236336666316535356561343566643 6613733353434333962393034613863353330623761623664333632303839 3838350a37396462343738303331356134373634306238633030303831623 0636537633139366333373933396637633034383132373064393939363231 636264323132370a393135666335303361326330623438613630333638393 1303632663738306634
The passwords are individually encrypted just to make it difficult to acquire a list of all institute privileged account passwords in one glance. The multi-line values are generated by the ansible-vault encrypt_string command, which uses the ansible.cfg file and thus the Secret/vault-password file.
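A value can be (re)generated with a command line like the following sketch; the plaintext is read from the terminal rather than the command line, and the --stdin-name argument is whichever become_* variable is being set.

ansible-vault encrypt_string --stdin-name become_front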
11.3. playbooks/site.yml
The example playbooks/site.yml playbook (below) applies the appropriate institutional role(s) to the hosts and groups defined in the example inventory: hosts.
playbooks/site.yml
---
- name: Configure All
  hosts: all
  roles: [ all ]

- name: Configure Front
  hosts: front
  roles: [ front ]

- name: Configure Gate
  hosts: gate
  roles: [ gate ]

- name: Configure Core
  hosts: core
  roles: [ core ]

- name: Configure Campus
  hosts: campus
  roles: [ campus ]
11.4. Secret/vault-password
As already mentioned, the small institute keeps its Ansible vault password, a "master secret", on the encrypted partition mounted at Secret/ in a file named vault-password. The administrator generated a 16 character pronounceable password with gpw 1 16 and saved it like so: gpw 1 16 >Secret/vault-password. The following example password matches the example encryptions above.
Secret/vault-password
alitysortstagess
11.5. Creating A Working Ansible Configuration
A working Ansible configuration can be "tangled" from this document to produce the test configuration described in the Testing chapter. The tangling is done by Emacs's org-babel-tangle function and has already been performed, with the resulting tangle included in the distribution with this document.
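If re-tangling is ever needed, it can be done interactively with M-x org-babel-tangle or in batch with something like the following sketch; the file name institute.org is assumed (it is the name mentioned in the tangled inst script's header comment).

emacs --batch -l ob-tangle \
      --eval '(org-babel-tangle-file "institute.org")'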
An institution using the Ansible configuration herein can include this document and its tangle as a Git submodule, e.g. in institute/, and thus safely merge updates while keeping public and private particulars separate, in sibling subdirectories public/ and private/.
The following example commands create a new Git repo in ~/net/ and add an Institute/ submodule.
cd
mkdir network
cd network
git init
git submodule add git://birchwood-abbey.net/~puck/Institute
git add Institute
An institute administrator would then need to add several more files.
- A top-level Ansible configuration file, ansible.cfg, would be created by copying Institute/ansible.cfg and changing the roles_path to roles:Institute/roles.
- A host inventory, hosts, would be created, perhaps by copying Institute/hosts and changing its IP addresses.
- A site playbook, site.yml, would be created in a new playbooks/ subdirectory by copying Institute/playbooks/site.yml with appropriate changes.
- All of the files in Institute/public/ and Institute/private/ would be copied, with appropriate changes, into new subdirectories public/ and private/.
- ~/net/Secret would be a symbolic link to the (auto-mounted?) location of the administrator's encrypted USB drive, as described in section Keys.
The files in Institute/roles_t/ were "tangled" from this document and must be copied to Institute/roles/ for reasons discussed in the next section. This document does not "tangle" directly into roles/ to avoid clobbering changes to a working (debugged!) configuration.
The playbooks/ directory must include the institutional playbooks, which find their settings and templates relative to this directory, e.g. in ../private/vars.yml. Running institutional playbooks from ~/net/playbooks/ means they will use ~/net/private/ rather than the example ~/net/Institute/private/.
cp -r Institute/roles_t Institute/roles
( cd playbooks; ln -s ../Institute/playbooks/* . )
Given these preparations, the inst script should work in the super-project's directory.
./Institute/inst config -n
11.6. Maintaining A Working Ansible Configuration
The Ansible roles currently tangle into the roles_t/ directory to ensure that debugged Ansible code in roles/ is not clobbered by code tangled from this document. Comparing roles_t/ with roles/ will reveal any changes made to roles/ during debugging that need to be reconciled with this document, as well as any policy changes in this document that require changes to the current roles/.
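A recursive diff makes the comparison easy; for example, run in the Institute/ submodule:

diff -ru roles_t roles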
When debugging literate programs becomes A Thing, then this document can tangle directly into roles/, and literate debuggers can find their way back to the code block in this document.
12. The Institute Commands
The institute's administrator uses a convenience script to reliably execute standard procedures. The script is run with the command name ./inst because it is intended to run "in" the same directory as the Ansible configuration. The Ansible commands it executes are expected to get their defaults from ./ansible.cfg.
12.1. Sub-command Blocks
The code blocks in this chapter tangle into the inst script. Each block examines the script's command line arguments to determine whether its sub-command was intended to run, and exits with an appropriate code when it is done.

The first code block is the header of the ./inst script.
inst
#!/usr/bin/perl -w
#
# DO NOT EDIT.  This file was tangled from an institute.org file.

use strict;
use IO::File;
12.2. Sanity Check
The next code block does not implement a sub-command; it implements part of all ./inst sub-commands. It performs a "sanity check" on the current directory, warning of missing files or directories, and especially checking that all files in private/ have appropriate permissions. It probes past the Secret/ mount point (probing for Secret/become.yml) to ensure the volume is mounted.
inst
sub note_missing_file_p ($);
sub note_missing_directory_p ($);

{
  my $missing = 0;
  if (note_missing_file_p "ansible.cfg") { $missing += 1; }
  if (note_missing_file_p "hosts") { $missing += 1; }
  if (note_missing_directory_p "Secret") { $missing += 1; }
  if (note_missing_file_p "Secret/become.yml") { $missing += 1; }
  if (note_missing_directory_p "playbooks") { $missing += 1; }
  if (note_missing_file_p "playbooks/site.yml") { $missing += 1; }
  if (note_missing_directory_p "roles") { $missing += 1; }
  if (note_missing_directory_p "public") { $missing += 1; }
  if (note_missing_directory_p "private") { $missing += 1; }
  for my $filename (glob "private/*") {
    my $perm = (stat $filename)[2];
    if ($perm & 077) {
      print "$filename: not private\n";
    }
  }
  die "$missing missing files\n" if $missing != 0;
}

sub note_missing_file_p ($) {
  my ($filename) = @_;
  if (! -f $filename) {
    print "$filename: missing\n";
    return 1;
  } else {
    return 0;
  }
}

sub note_missing_directory_p ($) {
  my ($dirname) = @_;
  if (! -d $dirname) {
    print "$dirname: missing\n";
    return 1;
  } else {
    return 0;
  }
}
12.3. Importing Ansible Variables
To ensure that Ansible and ./inst are simpatico vis-à-vis certain variable values (esp. private values like network addresses), a check-inst-vars.yml playbook is used to update the Perl syntax file private/vars.pl before ./inst loads it. The Perl code in inst declares the necessary global variables and private/vars.pl sets them.
inst
sub mysystem (@) {
  my $line = join (" ", @_);
  print "$line\n";
  my $status = system $line;
  die "status: $status\nCould not run $line: $!\n" if $status != 0;
}

mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null";

our ($domain_name, $domain_priv, $front_addr, $gate_wifi_addr);
do "./private/vars.pl";
The playbook that updates private/vars.pl:
playbooks/check-inst-vars.yml
- hosts: localhost
  gather_facts: no
  tasks:
  - include_vars: ../public/vars.yml
  - include_vars: ../private/vars.yml
  - copy:
      content: |
        $domain_name = "{{ domain_name }}";
        $domain_priv = "{{ domain_priv }}";
        $front_addr = "{{ front_addr }}";
        $gate_wifi_addr = "{{ gate_wifi_addr }}";
      dest: ../private/vars.pl
      mode: u=rw,g=,o=
12.4. The CA Command
The next code block implements the CA sub-command, which creates a new CA (certificate authority) in Secret/CA/ as well as SSH and PGP keys for the administrator, Monkey, Front and root, also in sub-directories of Secret/. The CA is created with the "common name" provided by the full_name variable. An example is given here.
public/vars.yml
full_name: Small Institute LLC
The Secret/ directory is on an off-line, encrypted volume plugged in just for the duration of ./inst commands, so Secret/ is actually a symbolic link to a volume's automount location.
ln -s /media/sysadm/ADE7-F866/ Secret
The Secret/CA/ directory is prepared using Easy RSA's make-cadir command. The Secret/CA/vars file thus created is edited to contain the appropriate names (or just to set EASYRSA_DN to cn_only).
sudo apt install easy-rsa
( cd Secret/; make-cadir CA )
./inst CA
Running ./inst CA creates the new CA and keys. The command prompts for the Common Name (or several levels of Organizational names) of the certificate authority. The full_name is given: Small Institute LLC. The CA is used to issue certificates for front, gate and core, which are installed on the servers during the next ./inst config.
inst
if (defined $ARGV[0] && $ARGV[0] eq "CA") { die "usage: $0 CA" if @ARGV != 1; die "Secret/CA/easyrsa: not an executable\n" if ! -x "Secret/CA/easyrsa"; die "Secret/CA/pki/: already exists\n" if -e "Secret/CA/pki"; umask 077; mysystem "cd Secret/CA; ./easyrsa init-pki"; mysystem "cd Secret/CA; ./easyrsa build-ca nopass"; # Common Name: small.example.org my $dom = $domain_name; my $pvt = $domain_priv; mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass"; mysystem "cd Secret/CA; ./easyrsa build-server-full gate.$pvt nopass"; mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass"; mysystem "cd Secret/CA; ./easyrsa build-client-full core nopass"; umask 077; mysystem "openvpn --genkey secret Secret/front-ta.key"; mysystem "openvpn --genkey secret Secret/gate-ta.key"; mysystem "openssl dhparam -out Secret/front-dh2048.pem 2048"; mysystem "openssl dhparam -out Secret/gate-dh2048.pem 2048"; mysystem "mkdir --mode=700 Secret/root.gnupg"; mysystem ("gpg --homedir Secret/root.gnupg", " --batch --quick-generate-key --passphrase ''", " root\@core.$pvt"); mysystem ("gpg --homedir Secret/root.gnupg", " --export --armor --output Secret/root-pub.pem", " root\@core.$pvt"); chmod 0440, "root-pub.pem"; mysystem ("gpg --homedir Secret/root.gnupg", " --export-secret-key --armor --output Secret/root-sec.pem", " root\@core.$pvt"); chmod 0400, "root-sec.pem"; mysystem "mkdir Secret/ssh_admin"; chmod 0700, "Secret/ssh_admin"; mysystem ("ssh-keygen -q -t rsa" ." -C A\\ Small\\ Institute\\ Administrator", " -N '' -f Secret/ssh_admin/id_rsa"); mysystem "mkdir Secret/ssh_monkey"; chmod 0700, "Secret/ssh_monkey"; mysystem "echo 'HashKnownHosts no' >Secret/ssh_monkey/config"; mysystem ("ssh-keygen -q -t rsa -C monkey\@core", " -N '' -f Secret/ssh_monkey/id_rsa"); mysystem "mkdir Secret/ssh_front"; chmod 0700, "Secret/ssh_front"; mysystem "ssh-keygen -A -f Secret/ssh_front -C $dom"; exit; }
12.5. The Config Command
The next code block implements the config sub-command, which provisions network services by running the site.yml playbook described in playbooks/site.yml. It recognizes an optional -n flag indicating that the service configurations should just be checked. Given an optional host name, it provisions (or checks) just the named host.
Example command lines:
./inst config
./inst config -n
./inst config HOST
./inst config -n HOST
inst
if (defined $ARGV[0] && $ARGV[0] eq "config") { die "Secret/CA/easyrsa: not executable\n" if ! -x "Secret/CA/easyrsa"; shift; my $cmd = "ansible-playbook -e \@Secret/become.yml"; if (defined $ARGV[0] && $ARGV[0] eq "-n") { shift; $cmd .= " --check --diff" } if (@ARGV == 0) { ; } elsif (defined $ARGV[0]) { my $hosts = lc $ARGV[0]; die "$hosts: contains illegal characters" if $hosts !~ /^!?[a-z][-a-z0-9,!]+$/; $cmd .= " -l $hosts"; } else { die "usage: $0 config [-n] [HOSTS]\n"; } $cmd .= " playbooks/site.yml"; mysystem $cmd; exit; }
12.6. Account Management
For general information about members and their Unix accounts, see Accounts. The account management sub-commands maintain a mapping associating member "usernames" (Unix account names) with their records. The mapping is stored, among other things, in private/members.yml as the value associated with the key members.

A new member's record in the members mapping will have the status key value current. That key gets value former when the member leaves.3 Access by former members is revoked by invalidating the Unix account passwords, removing any authorized SSH keys from Front and Core, and disabling their VPN certificates.
The example file (below) contains a membership roll with one membership record, for an account named dick, which was issued client certificates for devices named dick-note, dick-phone and dick-razr. dick-phone appears to be lost because its certificate was revoked. Dick's membership record includes a vault-encrypted password (for Fetchmail) and the two password hashes installed on Front and Core. (The example hashes are truncated versions.)
private/members.yml
---
members:
  dick:
    status: current
    clients:
    - dick-note
    - dick-phone
    - dick-razr
    password_front: $6$17h49U76$c7TsH6eMVmoKElNANJU1F1LrRrqzYVDreNu.QarpCoSt9u0gTHgiQ
    password_core: $6$E9se3BoSilq$T.W8IUb/uSlhrVEWUQsAVBweiWB4xb3ebQ0tguVxJaeUkqzVmZ
    password_fetchmail: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      38323138396431323564366136343431346562633965323864633938613363336
      4333334333966363136613264636365383031376466393432623039653230390a
      39366232633563646361616632346238333863376335633639383162356661326
      4363936393530633631616630653032343465383032623734653461323331310a
      6535633263656434393030333032343533626235653332626330666166613833
usernames:
- dick
revoked:
- dick-phone
The test campus starts with the empty membership roll found in private/members-empty.yml and saved in private/members.yml (which is not tangled from this document, thus not over-written during testing). If members.yml is not found, members-empty.yml is used instead.
private/members-empty.yml
---
members:
usernames: []
revoked: []
Both locations go on the membership_rolls variable used by the include_vars tasks.
private/vars.yml
membership_rolls:
- "../private/members.yml"
- "../private/members-empty.yml"
Using the standard Perl library YAML::XS, the subroutine for reading the membership roll is simple, returning the top-level hash read from the file. The dump subroutine is another story (below).
inst
use YAML::XS qw(LoadFile DumpFile); sub read_members_yaml () { my $path; $path = "private/members.yml"; if (-e $path) { return LoadFile ($path); } $path = "private/members-empty.yml"; if (-e $path) { return LoadFile ($path); } die "private/members.yml: not found\n"; } sub write_members_yaml ($) { my ($yaml) = @_; my $old_umask = umask 077; my $path = "private/members.yml"; print "$path: "; STDOUT->flush; eval { #DumpFile ("$path.tmp", $yaml); dump_members_yaml ("$path.tmp", $yaml); rename ("$path.tmp", $path) or die "Could not rename $path.tmp: $!\n"; }; my $err = $@; umask $old_umask; if ($err) { print "ERROR\n"; } else { print "updated\n"; } die $err if $err; } sub dump_members_yaml ($$) { my ($pathname, $yaml) = @_; my $O = new IO::File; open ($O, ">$pathname") or die "Could not open $pathname: $!\n"; print $O "---\n"; if (keys %{$yaml->{"members"}}) { print $O "members:\n"; for my $user (sort keys %{$yaml->{"members"}}) { print_member ($O, $yaml->{"members"}->{$user}); } print $O "usernames:\n"; for my $user (sort keys %{$yaml->{"members"}}) { print $O "- $user\n"; } } else { print $O "members:\n"; print $O "usernames: []\n"; } if (@{$yaml->{"revoked"}}) { print $O "revoked:\n"; for my $name (@{$yaml->{"revoked"}}) { print $O "- $name\n"; } } else { print $O "revoked: []\n"; } close $O or die "Could not close $pathname: $!\n"; }
The first implementation using YAML::Tiny balked at the !vault data type. The current version using YAML::XS (Simonov's libyaml) does not support local data types either, but does not abort. It just produces a multi-line string. Luckily the structure of members.yml is relatively simple and fixed, so a purpose-built printer can add back the !vault data types at appropriate points. YAML::XS thus provides only a borked parser. Also luckily, the YAML produced by the for-the-purpose printer makes the resulting membership roll easier to read, with the username and status at the top of each record.
inst
sub print_member ($$) { my ($out, $member) = @_; print $out " ", $member->{"username"}, ":\n"; print $out " username: ", $member->{"username"}, "\n"; print $out " status: ", $member->{"status"}, "\n"; if (@{$member->{"clients"} || []}) { print $out " clients:\n"; for my $name (@{$member->{"clients"} || []}) { print $out " - ", $name, "\n"; } } else { print $out " clients: []\n"; } print $out " password_front: ", $member->{"password_front"}, "\n"; print $out " password_core: ", $member->{"password_core"}, "\n"; if (defined $member->{"password_fetchmail"}) { print $out " password_fetchmail: !vault |\n"; for my $line (split /\n/, $member->{"password_fetchmail"}) { print $out " $line\n"; } } my @standard_keys = ( "username", "status", "clients", "password_front", "password_core", "password_fetchmail" ); my @other_keys = (sort grep { my $k = $_; ! grep { $_ eq $k } @standard_keys } keys %$member); for my $key (@other_keys) { print $out " $key: ", $member->{$key}, "\n"; } }
12.7. The New Command
The next code block implements the new
sub-command. It adds a new
member to the institute's membership roll. It runs an Ansible
playbook to create the member's Nextcloud user, updates
private/members.yml
, and runs the site.yml
playbook. The site
playbook (re)creates the member's accounts on Core and Front,
(re)installs the member's personal homepage on Front, and the member's
Fetchmail service on Core. All services are configured with an
initial, generated password.
inst
sub valid_username (@); sub shell_escape ($); sub strip_vault ($); if (defined $ARGV[0] && $ARGV[0] eq "new") { my $user = valid_username (@ARGV); my $yaml = read_members_yaml (); my $members = $yaml->{"members"}; die "$user: already exists\n" if defined $members->{$user}; my $pass = `apg -n 1 -x 12 -m 12`; chomp $pass; print "Initial password: $pass\n"; my $epass = shell_escape $pass; my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front; my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core; my $vault = strip_vault `ansible-vault encrypt_string "$epass"`; mysystem ("ansible-playbook -e \@Secret/become.yml", " playbooks/nextcloud-new.yml", " -e user=$user", " -e pass=\"$epass\""); $members->{$user} = { "username" => $user, "status" => "current", "password_front" => $front, "password_core" => $core, "password_fetchmail" => $vault }; write_members_yaml { "members" => $members, "revoked" => $yaml->{"revoked"} }; mysystem ("ansible-playbook -e \@Secret/become.yml", " -t accounts -l core,front playbooks/site.yml"); exit; } sub valid_username (@) { my $sub = $_[0]; die "usage: $0 $sub USER\n" if @_ != 2; my $username = lc $_[1]; die "$username: does not begin with an alphabetic character\n" if $username !~ /^[a-z]/; die "$username: contains non-alphanumeric character(s)\n" if $username !~ /^[a-z0-9]+$/; return $username; } sub shell_escape ($) { my ($string) = @_; my $result = "$string"; $result =~ s/([\$`"\\ ])/\\$1/g; return ($result); } sub strip_vault ($) { my ($string) = @_; die "Unexpected result from ansible-vault: $string\n" if $string !~ /^ *!vault [|]/; my @lines = split /^ */m, $string; return (join "", @lines[1..$#lines]); }
playbooks/nextcloud-new.yml
- hosts: core
  no_log: yes
  tasks:
  - name: Run occ user:add.
    shell: |
      spawn sudo -u www-data /usr/bin/php occ user:add {{ user }}
      expect {
        "Enter password:" {}
        timeout { exit 1 }
      }
      send "{{ pass|quote }}\n";
      expect {
        "Confirm password:" {}
        timeout { exit 2 }
      }
      send "{{ pass|quote }}\n";
      expect {
        "The user \"{{ user }}\" was created successfully" {}
        timeout { exit 3 }
      }
    args:
      chdir: /var/www/nextcloud/
      executable: /usr/bin/expect
12.8. The Pass Command
The institute's passwd command on Core securely emails root with a member's desired password (hashed). The command may update the servers immediately or let the administrator do that using the ./inst pass command. In either case, the administrator needs to update the membership roll, and so receives an encrypted email, which gets piped into ./inst pass. This command decrypts the message, parses the (YAML) content, updates private/members.yml, and runs the full Ansible site.yml playbook to update the servers. If all goes well a message is sent to member@core.
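For example, after saving the encrypted message to a file (message.txt here is just a placeholder name), the administrator pipes it into the command.

./inst pass <message.txt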
12.8.1. Less Aggressive passwd.
The next code block implements the less aggressive passwd command. It is less aggressive because it just emails root. It does not update the servers, so it does not need an SSH key and password to root (any privileged account) on Front, nor a set-UID root script (nor equivalent) on Core. It is a set-UID shadow script so it can read /etc/shadow. The member will need to wait for confirmation from the administrator, but all keys to root at the institute stay in Secret/.
roles_t/core/templates/passwd
#!/bin/perl -wT use strict; $ENV{PATH} = "/usr/sbin:/usr/bin:/bin"; my ($username) = getpwuid $<; if ($username ne "{{ ansible_user }}") { { exec ("sudo", "-u", "{{ ansible_user }}", "/usr/local/bin/passwd", $username) }; print STDERR "Could not exec sudo: $!\n"; exit 1; } $username = $ARGV[0]; my $passwd; { my $SHADOW = new IO::File; open $SHADOW, "</etc/shadow" or die "Cannot read /etc/shadow: $!\n"; my ($line) = grep /^$username:/, <$SHADOW>; close $SHADOW; die "No /etc/shadow record found: $username\n" if ! defined $line; (undef, $passwd) = split ":", $line; } system "stty -echo"; END { system "stty echo"; } print "Current password: "; my $pass = <STDIN>; chomp $pass; print "\n"; my $hash = crypt($pass, $passwd); die "Sorry...\n" if $hash ne $passwd; print "New password: "; $pass = <STDIN>; chomp($pass); die "Passwords must be at least 10 characters long.\n" if length $pass < 10; print "\nRetype password: "; my $pass2 = <STDIN>; chomp($pass2); print "\n"; die "New passwords do not match!\n" if $pass2 ne $pass; use MIME::Base64; my $epass = encode_base64 $pass; use File::Temp qw(tempfile); my ($TMP, $tmp) = tempfile; close $TMP; my $O = new IO::File; open $O, ("| gpg --encrypt --armor" ." --trust-model always --recipient root\@core" ." > $tmp") or die "Error running gpg > $tmp: $!\n"; print $O <<EOD; username: $username password: $epass EOD close $O or die "Error closing pipe to gpg: $!\n"; use File::Copy; open ($O, "| sendmail root"); print $O <<EOD; From: root To: root Subject: New password. EOD $O->flush; copy $tmp, $O; #print $O `cat $tmp`; close $O or die "Error closing pipe to sendmail: $!\n"; print " Your request was sent to Root. PLEASE WAIT for email confirmation that the change was completed.\n"; exit;
12.8.2. Less Aggressive Pass Command
The following code block implements the ./inst pass command, used by the administrator to update private/members.yml before running playbooks/site.yml and emailing the concerned member.
inst
use MIME::Base64; if (defined $ARGV[0] && $ARGV[0] eq "pass") { my $I = new IO::File; open $I, "gpg --homedir Secret/root.gnupg --quiet --decrypt |" or die "Error running gpg: $!\n"; my $msg_yaml = LoadFile ($I); close $I or die "Error closing pipe from gpg: $!\n"; my $user = $msg_yaml->{"username"}; die "Could not find a username in the decrypted input.\n" if ! defined $user; my $pass64 = $msg_yaml->{"password"}; die "Could not find a password in the decrypted input.\n" if ! defined $pass64; my $mem_yaml = read_members_yaml (); my $members = $mem_yaml->{"members"}; my $member = $members->{$user}; die "No such member: $user\n" if ! defined $member; my $pass = decode_base64 $pass64; my $epass = shell_escape $pass; my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front; my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core; my $vault = strip_vault `ansible-vault encrypt_string "$epass"`; $member->{"password_front"} = $front; $member->{"password_core"} = $core; $member->{"password_fetchmail"} = $vault; mysystem ("ansible-playbook -e \@Secret/become.yml", "playbooks/nextcloud-pass.yml", "-e user=$user", "-e \"pass=$epass\""); write_members_yaml $mem_yaml; mysystem ("ansible-playbook -e \@Secret/become.yml", "-t accounts playbooks/site.yml"); my $O = new IO::File; open ($O, "| sendmail $user\@$domain_priv") or die "Could not pipe to sendmail: $!\n"; print $O "From: <root> To: <$user> Subject: Password change. Your new password has been distributed to the servers. As always: please email root with any questions or concerns.\n"; close $O or die "pipe to sendmail failed: $!\n"; exit; }
And here is the playbook that interacts with Nextcloud's occ
users:resetpassword
command using expect(1)
.
playbooks/nextcloud-pass.yml
- hosts: core
  no_log: yes
  tasks:
  - name: Run occ user:resetpassword.
    shell: |
      spawn sudo -u www-data \
          /usr/bin/php occ user:resetpassword {{ user }}
      expect {
        "Enter a new password:" {}
        timeout { exit 1 }
      }
      send "{{ pass|quote }}\n"
      expect {
        "Confirm the new password:" {}
        timeout { exit 2 }
      }
      send "{{ pass|quote }}\n"
      expect {
        "Successfully reset password for {{ user }}" {}
        "Please choose a different password." { exit 3 }
        timeout { exit 4 }
      }
    args:
      chdir: /var/www/nextcloud/
      executable: /usr/bin/expect
12.8.3. Installing the Less Aggressive passwd
The following Ansible tasks install the less aggressive passwd
script in /usr/local/bin/passwd
on Core, and a sudo
policy file
declaring that any user can run the script as the admin user. The
admin user is added to the shadow group so that the script can read
/etc/shadow
and verify a member's current password. The public PGP
key for root@core
is also imported into the admin user's GnuPG
configuration so that the email to root can be encrypted.
roles_t/core/tasks/main.yml
- name: Install institute passwd command.
  become: yes
  template:
    src: passwd
    dest: /usr/local/bin/passwd
    mode: u=rwx,g=rx,o=rx

- name: Authorize institute passwd command as {{ ansible_user }}.
  become: yes
  copy:
    content: |
      ALL ALL=({{ ansible_user }}) NOPASSWD: /usr/local/bin/passwd
    dest: /etc/sudoers.d/01passwd
    mode: u=r,g=r,o=
    owner: root
    group: root

- name: Authorize {{ ansible_user }} to read /etc/shadow.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: shadow

- name: Authorize {{ ansible_user }} to run /usr/bin/php as www-data.
  become: yes
  copy:
    content: |
      {{ ansible_user }} ALL=(www-data) NOPASSWD: /usr/bin/php
    dest: /etc/sudoers.d/01www-data-php
    mode: u=r,g=r,o=
    owner: root
    group: root

- name: Install root PGP key file.
  become: no
  copy:
    src: ../Secret/root-pub.pem
    dest: ~/.gnupg-root-pub.pem
    mode: u=r,g=r,o=r
  notify: Import root PGP key.
roles_t/core/handlers/main.yml
- name: Import root PGP key.
  become: no
  command: gpg --import ~/.gnupg-root-pub.pem
12.9. The Old Command
The old
command disables a member's accounts and clients.
inst
if (defined $ARGV[0] && $ARGV[0] eq "old") {
  my $user = valid_username (@ARGV);
  my $yaml = read_members_yaml ();
  my $members = $yaml->{"members"};
  my $member = $members->{$user};
  die "$user: does not exist\n" if ! defined $member;

  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "playbooks/nextcloud-old.yml -e user=$user");
  $member->{"status"} = "former";
  write_members_yaml { "members" => $members,
                       "revoked" => [ sort @{$member->{"clients"}},
                                           @{$yaml->{"revoked"}} ] };
  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "-t accounts playbooks/site.yml");
  exit;
}
playbooks/nextcloud-old.yml
- hosts: core
  tasks:
  - name: Run occ user:disable.
    shell: |
      spawn sudo -u www-data /usr/bin/php occ user:disable {{ user }}
      expect {
        "The specified user is disabled" {}
        timeout { exit 1 }
      }
    args:
      chdir: /var/www/nextcloud/
      executable: /usr/bin/expect
12.10. The Client Command
The client
command creates an OpenVPN configuration (.ovpn
) file
authorizing wireless devices to connect to the institute's VPNs. The
command uses the EasyRSA CA in Secret/
. The generated configuration
is slightly different depending on the type of host, given as the
first argument to the command.
./inst client android NAME USER
  - An android host runs OpenVPN for Android or work-alike.  Two
    files are generated.  campus.ovpn configures a campus VPN
    connection, and public.ovpn configures a connection to the
    institute's public VPN.
./inst client debian NAME USER
  - A debian host runs a Debian desktop with Network Manager.  Again
    two files are generated, for the campus and public VPNs.
./inst client campus NAME
  - A campus host is a Debian host (with or without desktop) that is
    used by the institute generally, is not the property of a member,
    never roams off campus, and so is remotely administered with
    Ansible.  One file is generated, campus.ovpn.
The administrator uses encrypted email to send .ovpn
files to new
members. New members install the network-manager-openvpn-gnome
and
openvpn-systemd-resolved
packages, and import the .ovpn
files into
Network Manager on their desktops. The .ovpn
files for an
Android device are transferred by USB stick and should automatically
install when "opened". On campus hosts, the system administrator
copies the campus.ovpn
file to /etc/openvpn/campus.conf
.
The OpenVPN configurations generated for Debian hosts specify an up
script, update-systemd-resolved
, installed in /etc/openvpn/
by the
openvpn-systemd-resolved
package. The following configuration lines
instruct the OpenVPN clients to run this script whenever the
connection is restarted.
openvpn-up
script-security 2
up /etc/openvpn/update-systemd-resolved
up-restart
inst
sub write_template ($$$$$$$$$);
sub read_file ($);
sub add_client ($$$);

if (defined $ARGV[0] && $ARGV[0] eq "client") {
  die "Secret/CA/easyrsa: not found\n" if ! -x "Secret/CA/easyrsa";
  my $type = $ARGV[1]||"";
  my $name = $ARGV[2]||"";
  my $user = $ARGV[3]||"";
  if ($type eq "campus") {
    die "usage: $0 client campus NAME\n" if @ARGV != 3;
    die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
  } elsif ($type eq "android" || $type eq "debian") {
    die "usage: $0 client $type NAME USER\n" if @ARGV != 4;
    die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
  } else {
    die "usage: $0 client [debian|android|campus]\n" if @ARGV != 4;
  }
  my $yaml;
  my $member;
  if ($type ne "campus") {
    $yaml = read_members_yaml;
    my $members = $yaml->{"members"};
    if (@ARGV == 4) {
      $member = $members->{$user};
      die "$user: does not exist\n" if ! defined $member;
    }
    if (defined $member) {
      my ($owner) = grep { grep { $_ eq $name } @{$_->{"clients"}} }
                         values %{$members};
      die "$name: owned by $owner->{username}\n"
        if defined $owner && $owner->{username} ne $member->{username};
    }
  }

  die "Secret/CA: no certificate authority found"
    if ! -d "Secret/CA/pki/issued";

  if (! -f "Secret/CA/pki/issued/$name.crt") {
    mysystem "cd Secret/CA; ./easyrsa build-client-full $name nopass";
  } else {
    print "Using existing key/cert...\n";
  }

  if ($type ne "campus") {
    my $clients = $member->{"clients"};
    if (! grep { $_ eq $name } @$clients) {
      $member->{"clients"} = [ $name, @$clients ];
      write_members_yaml $yaml;
    }
  }

  umask 077;
  my $DEV = $type eq "android" ? "tun" : "ovpn";
  my $CA = read_file "Secret/CA/pki/ca.crt";
  my $CRT = read_file "Secret/CA/pki/issued/$name.crt";
  my $KEY = read_file "Secret/CA/pki/private/$name.key";
  my $UP = $type eq "android" ? "" : "
<<openvpn-up>>";

  if ($type ne "campus") {
    my $TA = read_file "Secret/front-ta.key";
    write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $front_addr,
                    $domain_name, "public.ovpn");
    print "Wrote public VPN configuration to public.ovpn.\n";
  }
  my $TA = read_file "Secret/gate-ta.key";
  write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $gate_wifi_addr,
                  "gate.$domain_priv", "campus.ovpn");
  print "Wrote campus VPN configuration to campus.ovpn.\n";

  exit;
}

sub write_template ($$$$$$$$$) {
  my ($DEV,$UP,$CA,$CRT,$KEY,$TA,$ADDR,$NAME,$FILE) = @_;
  my $O = new IO::File;
  open ($O, ">$FILE.tmp") or die "Could not open $FILE.tmp: $!\n";
  print $O "client
dev-type tun
dev $DEV
remote $ADDR
nobind
<<openvpn-drop-priv>>
remote-cert-tls server
verify-x509-name $NAME name
<<openvpn-crypt>>$UP
verb 3
key-direction 1
<ca>\n$CA</ca>
<cert>\n$CRT</cert>
<key>\n$KEY</key>
<tls-auth>\n$TA</tls-auth>\n";
  close $O or die "Could not close $FILE.tmp: $!\n";
  rename ("$FILE.tmp", $FILE)
    or die "Could not rename $FILE.tmp: $!\n";
}

sub read_file ($) {
  my ($path) = @_;
  my $I = new IO::File;
  open ($I, "<$path") or die "$path: could not read: $!\n";
  local $/;
  my $c = <$I>;
  close $I or die "$path: could not close: $!\n";
  return $c;
}
13. Testing
The example files in this document, ansible.cfg
and hosts
as well
as those in public/
and private/
, along with the matching EasyRSA
certificate authority and GnuPG key-ring in Secret/
(included in the
distribution), can be used to configure three VirtualBox VMs
simulating Core, Gate and Front in test networks simulating a campus
Ethernet, Wi-Fi, ISP, and a commercial cloud. With the test networks
up and running, a simulated member's notebook can be created and
alternately attached to the simulated campus Wi-Fi or the Internet (as
though abroad). The administrator's notebook in this simulation is
the VirtualBox host.
The next two sections list the steps taken to create the simulated Core, Gate and Front machines, and connect them to their networks. The process is similar to that described in The (Actual) Hardware, but is covered in detail here where the VirtualBox hypervisor can be assumed and exact command lines can be given (and copied during re-testing). The remaining sections describe the manual testing process, simulating an administrator adding and removing member accounts and devices, a member's desktop sending and receiving email, etc.
For more information on the VirtualBox Hypervisor, the User Manual can be found off-line in file:///usr/share/doc/virtualbox/UserManual.pdf. An HTML version of the latest revision can be found on the official web site at https://www.virtualbox.org/manual/UserManual.html.
13.1. The Test Networks
The networks used in the test:
premises
  - A NAT Network, simulating the cloud provider's and campus ISP's
    networks.  This is the only network with DHCP and DNS services
    provided by the hypervisor.  It is not the default NAT network
    because gate and front need to communicate.
vboxnet0
  - A Host-only network, simulating the institute's private Ethernet
    switch.  It has no services, no DHCP, just the host machine at
    192.168.56.10 pretending to be the administrator's notebook.
vboxnet1
  - Another Host-only network, simulating the tiny Ethernet between
    Gate and the campus Wi-Fi access point.  It has no services, no
    DHCP, just the host at 192.168.57.2, simulating the NATed Wi-Fi
    network.
In this simulation the IP address for front
is not a public address
but a private address on the NAT network premises
. Thus front
is
not accessible to the administrator's notebook (the host). To work
around this restriction, front
gets a second network interface
connected to the vboxnet1
network and used only for ssh access from
the host.4
The networks described above are created and "started" with the
following VBoxManage
commands.
VBoxManage natnetwork add --netname premises \
    --network 192.168.15.0/24 \
    --enable --dhcp on --ipv6 off
VBoxManage natnetwork start --netname premises
VBoxManage hostonlyif create   # vboxnet0
VBoxManage hostonlyif ipconfig vboxnet0 --ip=192.168.56.10
VBoxManage dhcpserver modify --interface=vboxnet0 --disable
VBoxManage hostonlyif create   # vboxnet1
VBoxManage hostonlyif ipconfig vboxnet1 --ip=192.168.57.2
Note that the first host-only network, vboxnet0
, gets DHCP service
by default, but that will interfere with the service being tested on
core
, so it must be explicitly disabled. Only the NAT network
premises
should have a DHCP server enabled.
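The hypervisor's view of these networks and DHCP servers can be
double-checked with the following query commands.  The exact output
format varies with the VirtualBox version, but premises should be the
only entry with DHCP enabled.

VBoxManage natnetwork list
VBoxManage list hostonlyifs
VBoxManage list dhcpservers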
Note also that actual ISPs and clouds will provide Gate and Front with
public network addresses. In this simulation "they" provide addresses
on the private 192.168.15.0/24
network.
13.2. The Test Machines
The virtual machines are created by VBoxManage
command lines in the
following sub-sections. They each start with a recent Debian release
(e.g. debian-12.5.0-amd64-netinst.iso
) in their simulated DVD
drives. As in The Hardware preparation process being simulated, a few
additional software packages are installed. Unlike in The Hardware
preparation, machines are moved to their final networks and then
remote access is authorized. (They are not accessible via ssh
on
the VirtualBox NAT network where they first boot.)
Once the administrator's notebook is authorized to access the privileged accounts on the virtual machines, they are prepared for configuration by Ansible.
13.2.1. A Test Machine
The following shell function contains most of the VBoxManage
commands needed to create the test machines. The name of the machine
is taken from the NAME
shell variable and the quantity of RAM and
disk space from the RAM
and DISK
variables. The function creates
a DVD drive on each machine and loads it with a simulated CD of a
recent Debian release. The path to the CD disk image (.iso
file) is
taken from the ISO
shell variable.
function create_vm {
    VBoxManage createvm --name $NAME --ostype Debian_64 --register
    VBoxManage modifyvm $NAME --memory $RAM

    VBoxManage createhd --size $DISK \
        --filename ~/VirtualBox\ VMs/$NAME/$NAME.vdi
    VBoxManage storagectl $NAME --name "SATA Controller" \
        --add sata --controller IntelAHCI
    VBoxManage storageattach $NAME --storagectl "SATA Controller" \
        --port 0 --device 0 --type hdd \
        --medium ~/VirtualBox\ VMs/$NAME/$NAME.vdi

    VBoxManage storagectl $NAME --name "IDE Controller" --add ide
    VBoxManage storageattach $NAME --storagectl "IDE Controller" \
        --port 0 --device 0 --type dvddrive --medium $ISO

    VBoxManage modifyvm $NAME --boot1 dvd --boot2 disk
}
After this shell function creates a VM, its network interface is attached to the default NAT network, simulating the Internet connected network where actual hardware is prepared.
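If a machine's first interface somehow ends up elsewhere, it can be
re-attached to the default NAT network explicitly.  This is just a
convenience; newly created machines normally start there already.

VBoxManage modifyvm $NAME --nic1 nat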
Here are the commands needed to create the test machine front
with
512MiB of RAM and 4GiB of disk and the Debian 12.5.0 release in its
CDROM drive.
NAME=front
RAM=512
DISK=4096
ISO=~/Downloads/debian-12.5.0-amd64-netinst.iso
create_vm
Soon after starting, the machine console should show the installer's
first prompt: to choose a system language. Installation on the small
machines, front
and gate
, may put the installation into "low
memory mode", in which case the installation is textual, the system
language is English, and the first prompt is for location. The
appropriate responses to the prompts are given in the list below.
- Select a language (unless in low memory mode!)
- Language: English - English
- Select your location
- Country, territory or area: United States
- Configure the keyboard
- Keymap to use: American English
- Configure the network
- Hostname: front (gate, core, etc.)
- Domain name: small.example.org (small.private)
- Set up users and passwords.
- Root password: <blank>
- Full name for the new user: System Administrator
- Username for your account: sysadm
- Choose a password for the new user: fubar
- Configure the clock
- Select your time zone: Eastern
- Partition disks
- Partitioning method: Guided - use entire disk
- Select disk to partition: SCSI3 (0,0,0) (sda) - …
- Partitioning scheme: All files in one partition
- Finish partitioning and write changes to disk: Continue
- Write the changes to disks? Yes
- Install the base system
- Configure the package manager
- Scan extra installation media? No
- Debian archive mirror country: United States
- Debian archive mirror: deb.debian.org
- HTTP proxy information (blank for none): <blank>
- Configure popularity-contest
- Participate in the package usage survey? No
- Software selection
- SSH server
- standard system utilities
- Install the GRUB boot loader
- Install the GRUB boot loader to your primary drive? Yes
- Device for boot loader installation: /dev/sda (ata-VBOX…
After the reboot, the machine's console should produce a login:
prompt. The administrator logs in here, with username sysadm
and
password fubar
, before continuing with the specific machine's
preparation (below).
13.2.2. The Test Front Machine
The front
machine is created with 512MiB of RAM, 4GiB of disk, and
Debian 12.5.0 (recently downloaded) in its CDROM drive. The exact
command lines were given in the previous section.
After Debian is installed (as detailed above) front
is shut down and
its primary network interface moved to the simulated Internet, the NAT
network premises
. front
also gets a second network interface, on
the host-only network vboxnet1
, to make it directly accessible to
the administrator's notebook (as described in The Test Networks).
VBoxManage modifyvm front --nic1 natnetwork --natnetwork1 premises
VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet1
After Debian is installed and the machine rebooted, the administrator
logs in and configures the "extra" network interface with a static IP
address using a drop-in configuration file:
/etc/network/interfaces.d/eth1
.
eth1
auto enp0s8
iface enp0s8 inet static
    address 192.168.57.3/24
A sudo ifup enp0s8
command then brings the interface up.
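A quick sanity check, run on front, confirms the address and reaches
the administrator's notebook (the VirtualBox host) at its vboxnet1
address.

ip addr show enp0s8
ping -c 1 192.168.57.2   # the host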
Note that there is no pre-provisioning for front
, which is never
deployed on a frontier, always in the cloud. Additional Debian
packages are assumed to be readily available. Thus Ansible installs
them as necessary, but first the administrator authorizes remote
access by following the instructions in the final section: Ansible
Test Authorization.
13.2.3. The Test Gate Machine
The gate
machine is created with the same amount of RAM and disk as
front
. Assuming the RAM
, DISK
, and ISO
shell variables have
not changed, gate
can be created with two commands.
NAME=gate
create_vm
After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator logs in and installs several additional software packages.
sudo apt install netplan.io systemd-resolved unattended-upgrades \
                 ufw isc-dhcp-server postfix openvpn
Again, the Postfix installation prompts for a couple settings. The defaults, listed below, are fine.
- General type of mail configuration: Internet Site
- System mail name: gate.small.private
gate
can then move to the campus. It is shut down before the
following VBoxManage
commands are executed. The commands disconnect
the primary Ethernet interface from premises
and connected it to
vboxnet0
. They also create two new interfaces, isp
and wifi
,
connected to the simulated ISP and campus wireless access point.
VBoxManage modifyvm gate --mac-address1=080027f31679
VBoxManage modifyvm gate --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm gate --mac-address2=0800273d42e5
VBoxManage modifyvm gate --nic2 natnetwork --natnetwork2 premises
VBoxManage modifyvm gate --mac-address3=0800274aded2
VBoxManage modifyvm gate --nic3 hostonly --hostonlyadapter3 vboxnet1
The MAC addresses above were specified so they match the example values of the MAC address variables in this table.
| device | network  | simulating      | MAC address variable |
|--------|----------|-----------------|----------------------|
| enp0s3 | vboxnet0 | campus Ethernet | gate_lan_mac         |
| enp0s8 | premises | campus ISP      | gate_isp_mac         |
| enp0s9 | vboxnet1 | campus wireless | gate_wifi_mac        |
After gate
boots up with its new network interfaces, the primary
Ethernet interface is temporarily configured with an IP address.
(Ansible will install a Netplan soon.)
sudo ip address add 192.168.56.2/24 dev enp0s3
Finally, the administrator authorizes remote access by following the instructions in the final section: Ansible Test Authorization.
13.2.4. The Test Core Machine
The core machine is created with 2GiB of RAM and 6GiB of disk.
Assuming the ISO shell variable has not changed, core can be created
with the following commands.
NAME=core
RAM=2048
DISK=6144
create_vm
After Debian is installed (as detailed in A Test Machine) and the machine rebooted, the administrator logs in and installs several additional software packages.
sudo apt install netplan.io systemd-resolved unattended-upgrades \
                 ntp isc-dhcp-server bind9 apache2 openvpn \
                 postfix dovecot-imapd fetchmail expect rsync \
                 gnupg
sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp} \
                 php-{json,mysql,mbstring,intl,imagick,xml,zip} \
                 libapache2-mod-php
sudo apt install nagios4 monitoring-plugins-basic lm-sensors \
                 nagios-nrpe-plugin
Again the Postfix installation prompts for a couple settings. The defaults, listed below, are fine.
- General type of mail configuration: Internet Site
- System mail name: core.small.private
Before shutting down, the name of the primary Ethernet interface
should be compared to the example variable setting in
private/vars.yml
. The value assigned to core_ethernet
should
match the interface name.
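Listing the interfaces on core's console is one quick way to get the
name to compare.

ip -br link   # expect a name like enp0s3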
core
can now move to the campus. It is shut down before the
following VBoxManage
command is executed. The command connects the
machine's NIC to vboxnet0
, which simulates the campus's private
Ethernet.
VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
After core
boots up with its new network connection, its primary NIC
is temporarily configured with an IP address. (Ansible will install a
Netplan soon.)
sudo ip address add 192.168.56.1/24 dev enp0s3
Finally, the administrator authorizes remote access by following the instructions in the next section: Ansible Test Authorization.
13.2.5. Ansible Test Authorization
To authorize Ansible's access to the three test machines, they must
allow remote access to their sysadm
accounts. In the following
commands, the administrator must use IP addresses to copy the public
key to each test machine.
SRC=Secret/ssh_admin/id_rsa.pub
scp $SRC sysadm@192.168.57.3:admin_key   # Front
scp $SRC sysadm@192.168.56.2:admin_key   # Gate
scp $SRC sysadm@192.168.56.1:admin_key   # Core
Then the key must be installed on each machine with the following command line (entered at each console, or in an SSH session with each machine).
( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
The front
machine needs a little additional preparation. Ansible
will configure front
with the host keys in Secret/
. These should
be installed there now so that front
does not appear to change
identities while Ansible is configuring.
First, the host keys are securely copied to front
with the following
command.
scp Secret/ssh_front/etc/ssh/ssh_host_* sysadm@192.168.57.3:
Then they are installed with these commands.
chmod 600 ssh_host_*
chmod 644 ssh_host_*.pub
sudo cp -b ssh_host_* /etc/ssh/
Finally, the system administrator removes the old identity of front
.
ssh-keygen -f ~/.ssh/known_hosts -R 192.168.57.3
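A quick check that Ansible can reach all three machines is possible
at this point, assuming the distribution's ansible.cfg and hosts
files are in the current directory.  Ansible's ping module merely
verifies login and Python on each host.

ansible all -m ping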
13.3. Configure Test Machines
At this point the three test machines core
, gate
, and front
are
running fresh Debian systems with select additional packages, on their
final networks, with a privileged account named sysadm
that
authorizes password-less access from the administrator's notebook,
ready to be configured by Ansible.
To configure the test machines, the ./inst config
command is
executed and core
restarted. Note that this first run should
exercise all of the handlers, and that subsequent runs probably do
not.
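For example, the first configuration run and restart might look like
the following; the ssh command is just one way to restart core.

./inst config
ssh sysadm@192.168.56.1 sudo reboot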
13.4. Test Basics
At this point the test institute is just core
, gate
and front
,
no other campus servers, no members nor their VPN client devices. On
each machine, Systemd should assess the system's state as running
with 0 failed units.
systemctl status
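Two other standard systemd queries give the same information more
directly.

systemctl is-system-running   # expect: running
systemctl --failed            # expect: 0 loaded units listed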
gate
and thus core
should be able to reach the Internet and
front
. If core
can reach the Internet and front
, then gate
is
forwarding (and NATing). On core
(and gate
):
ping -c 1 8.8.4.4        # dns.google
ping -c 1 192.168.15.5   # front_addr
gate
and thus core
should be able to resolve internal and public
domain names. (Front does not use the institute's internal domain
names yet.) On core
(and gate
):
host dns.google
host core.small.private
host www
The last resort email address, root
, should deliver to the
administrator's account. On core
, gate
and front
:
/sbin/sendmail root
Testing email to root.
.
Two messages, from core
and gate
, should appear in
/home/sysadm/Maildir/new/
on core
in just a couple seconds. The
message from front
should be delivered to the same directory but on
front
. While members' emails are automatically fetched (with
fetchmail(1)
) to core
, the system administrator is expected to
fetch system emails directly to their desktop (and to give them
instant attention).
13.5. The Test Nextcloud
Further tests involve Nextcloud account management. Nextcloud is
installed on core
as described in Configure Nextcloud. Once
/Nextcloud/
is created, ./inst config core
will validate
or update its configuration files.
The administrator will need a desktop system in the test campus
networks (using the campus name server). The test Nextcloud
configuration requires that it be accessed with the domain name
core.small.private
. The following sections describe how a client
desktop is simulated and connected to the test VPNs (and test campus
name server). Its browser can then connect to core.small.private
to
exercise the test Nextcloud.
The process starts with enrolling the first member of the institute
using the ./inst new
command and issuing client VPN keys with the
./inst client
command.
13.6. Test New Command
A member must be enrolled so that a member's client machine can be
authorized and then test the VPNs, Nextcloud, and the web sites.
The first member enrolled in the simulated institute is New Hampshire
innkeeper Dick Loudon. Mr. Loudon's accounts on institute servers are
named dick
, as is his notebook.
./inst new dick
Take note of Dick's initial password.
13.7. The Test Member Notebook
A test member's notebook is created next, much like the servers,
except with memory and disk space doubled to 2GiB and 8GiB, and a
desktop. This machine is not configured by Ansible. Rather, its
desktop VPN client and web browser test the OpenVPN configurations on
gate
and front
, and the Nextcloud installation on core
.
NAME=dick
RAM=2048
DISK=8192
create_vm
VBoxManage modifyvm $NAME --macaddress1 080027dc54b5
VBoxManage modifyvm $NAME --nic1 hostonly --hostonlyadapter1 vboxnet1
Dick's notebook, dick
, is initially connected to the host-only
network vboxnet1
as though it were the campus wireless access point.
It simulates a member's notebook on campus, connected to (NATed
behind) the access point.
Debian is installed much as detailed in A Test Machine except that the SSH server option is not needed and the GNOME desktop option is. When the machine reboots, the administrator logs into the desktop and installs a couple additional software packages (which require several more).
sudo apt install network-manager-openvpn-gnome \
                 openvpn-systemd-resolved \
                 nextcloud-desktop evolution
13.8. Test Client Command
The ./inst client
command is used to issue keys for the institute's
VPNs. The following command generates two .ovpn
(OpenVPN
configuration) files, small.ovpn
and campus.ovpn
, authorizing
access by the holder, identified as dick
, owned by member dick
, to
the test VPNs.
./inst client debian dick dick
13.9. Test Campus VPN
The campus.ovpn
OpenVPN configuration file (generated in Test Client
Command) is transferred to dick
, which is at the Wi-Fi access
point's wifi_wan_addr
.
scp *.ovpn sysadm@192.168.57.2:
The file is installed using the Network tab of the desktop Settings
app. The administrator uses the "+" button, chooses "Import from
file…" and the campus.ovpn
file. Importantly the administrator
checks the "Use this connection only for resources on its network"
checkbox in the IPv4 tab of the Add VPN dialog. The admin does the
same with the small.ovpn
file, for use on the simulated Internet.
The administrator turns on the campus VPN on dick
(which connects
instantly) and does a few basic tests in a terminal.
systemctl status
ping -c 1 8.8.4.4        # dns.google
ping -c 1 192.168.56.1   # core
host dns.google
host core.small.private
host www
13.10. Test Web Pages
Next, the administrator copies Backup/WWW/
(included in the
distribution) to /WWW/
on core
and sets the file permissions
appropriately.
sudo chown -R sysadm.staff /WWW/campus
sudo chown -R monkey.staff /WWW/live /WWW/test
sudo chmod 02775 /WWW/*
sudo chmod 664 /WWW/*/index.html
The administrator then uses Firefox on dick
to fetch the following URLs. They should
all succeed and the content should be a simple sentence identifying
the source file.
http://www/
http://www.small.private/
http://live/
http://live.small.private/
http://test/
http://test.small.private/
http://small.example.org/
The last URL should re-direct to https://small.example.org/
, which
uses a certificate (self-)signed by an unknown authority. Firefox
will warn but allow the luser to continue.
13.11. Test Web Update
Modify /WWW/live/index.html
on core
and wait 15 minutes for it to
appear as https://small.example.org/
(and in /home/www/index.html
on front
).
Hack /home/www/index.html
on front
and observe the result at
https://small.example.org/
. Wait 15 minutes for the correction.
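For example, assuming the administrator is logged into core and
testing from dick, where curl's --insecure option tolerates the
self-signed certificate:

echo '<p>Test edit.</p>' | sudo tee -a /WWW/live/index.html   # on core
curl --insecure https://small.example.org/        # on dick, ~15 minutes later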
13.12. Test Nextcloud
Nextcloud is typically installed and configured after the first
Ansible run, when core
has Internet access via gate
. Until the
installation directory /Nextcloud/nextcloud/
appears, the Ansible
code skips parts of the Nextcloud configuration. The same
installation (or restoration) process used on Core is used on core
to create /Nextcloud/
. The process starts with Create
/Nextcloud/
, involves Restore Nextcloud or Install Nextcloud,
and runs ./inst config core
again (see section 8.23.6). When the ./inst
config core
command is happy with the Nextcloud configuration on
core
, the administrator uses Dick's notebook to test it, performing
the following tests on dick
's desktop.
- Use a web browser to get http://core/nextcloud/.  It should be a
  warning about accessing Nextcloud by an untrusted name.
- Get http://core.small.private/nextcloud/.  It should be a login
  web page.
- Login as sysadm with password fubar.
- Examine the security & setup warnings in the Settings >
  Administration > Overview web page.  A few minor warnings are
  expected (besides the admonishment about using http rather than
  https).
- Download and enable Calendar and Contacts in the Apps > Featured
  web page.
- Logout and login as dick with Dick's initial password (noted
  above).
- Use the Nextcloud app to sync ~/nextCloud/ with the cloud.  In the
  Nextcloud app's Connection Wizard (the initial dialog), choose to
  "Log in to your Nextcloud" with the URL
  http://core.small.private/nextcloud.  The web browser should pop up
  with a new tab: "Connect to your account".  Press "Log in" and
  "Grant access".  The Nextcloud Connection Wizard then prompts for
  sync parameters.  The defaults are fine.  Presumably the Local
  Folder is /home/sysadm/Nextcloud/.
- Drop a file in ~/Nextcloud/, use the app to force a sync, and find
  the file in the Files web page.
- Create a Mail account in Evolution.  This step does not involve
  Nextcloud, but placates Evolution's Welcome Wizard, and follows in
  the steps of the newly institutionalized luser.  CardDAV and CalDAV
  accounts can be created in Evolution later.
  The account's full name is Dick Loudon and its email address is
  dick@small.example.org.  The Receiving Email Server Type is IMAP,
  its name is mail.small.private and it uses the IMAPS port (993).
  The Username on the server is dick.  The encryption method is TLS
  on a dedicated port.  Authentication is by password.  The Receiving
  Option defaults are fine.  The Sending Email Server Type is SMTP
  with the name smtp.small.private using the default SMTP port (25).
  It requires neither authentication nor encryption.
  At some point Evolution will find that the server certificate is
  self-signed and unknown.  It must be accepted (permanently).
- Create a CardDAV account in Evolution.  Choose Edit, Accounts, Add,
  Address Book, Type CardDAV, name Small Institute, and user dick.
  The URL starts with http://core.small.private/nextcloud/ and ends
  with remote.php/dav/addressbooks/users/dick/contacts/ (yeah, 88
  characters!).  Create a contact in the new address book and see it
  in the Contacts web page.  At some point Evolution will need Dick's
  password to access the address book.
- Create a CalDAV account in Evolution just like the CardDAV account
  except add a Calendar account of Type CalDAV with a URL that ends
  remote.php/dav/calendars/dick/personal/ (only 79 characters).
  Create an event in the new calendar and see it in the Calendar web
  page.  At some point Evolution will need Dick's password to access
  the calendar.
13.13. Test Email
With Evolution running on the member notebook dick
, one second email
delivery can be demonstrated. The administrator runs the following
commands on front
/sbin/sendmail dick
Subject: Hello, Dick.
How are you?
.
and sees a notification on dick
's desktop in a second or less.
Outgoing email is also tested. A message to
sysadm@small.example.org
should be delivered to
/home/sysadm/Maildir/new/
on front
just as fast.
13.14. Test Public VPN
At this point, dick
can move abroad, from the campus Wi-Fi
(host-only network vboxnet1
) to the broader Internet (the NAT
network premises
). The following command makes the change. The
machine does not need to be shut down.
VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
The administrator might wait to see evidence of the change in
networks. Evolution may start "Testing reachability of mail account
dick@small.example.org." Eventually, the campus
VPN should
disconnect. After it does, the administrator turns on the small
VPN, which connects in a second or two. Again, some basics are
tested in a terminal.
ping -c 1 8.8.4.4        # dns.google
ping -c 1 192.168.56.1   # core
host dns.google
host core.small.private
host www
And these web pages are fetched with a browser.
- http://www/
- http://www.small.private/
- http://live/
- http://live.small.private/
- http://test/
- http://test.small.private/
- http://small.example.org/
The Nextcloud web pages too should still be refresh-able, editable, and Evolution should still be able to edit messages, contacts and calendar events.
13.15. Test Pass Command
To test the ./inst pass
command, the administrator logs in to core
as dick
and runs passwd
. A random password is entered, more
obscure than fubar
(else Nextcloud will reject it!). The
administrator then finds the password change request message in the
most recent file in /home/sysadm/Maildir/new/
and pipes it to the
./inst pass
command. The administrator might do that by copying the
message to a more conveniently named temporary file on core
,
e.g. ~/msg
, copying that to the current directory on the notebook,
and feeding it to ./inst pass
on its standard input.
On core
, logged in as sysadm
:
( cd ~/Maildir/new/
  cp `ls -1t | head -1` ~/msg )
grep Subject: ~/msg
To ensure that the most recent message is indeed the password change
request, the last command should find the line Subject: New
password.
. Then on the administrator's notebook:
scp sysadm@192.168.56.1:msg ./
./inst pass < msg
The last command should complete without error.
Finally, the administrator verifies that dick
can login on core
,
front
and Nextcloud with the new password.
13.16. Test Old Command
One more institute command is left to exercise. The administrator
retires dick
and his main device dick
.
./inst old dick
The administrator tests Dick's access to core
, front
and
Nextcloud, and attempts to re-connect the small
VPN. All of these
should fail.
14. Future Work
The small institute's network, as currently defined in this document, is lacking in a number of respects.
14.1. Deficiencies
The current network monitoring is rudimentary. It could use some
love, like intrusion detection via Snort or similar. Services on
Front are not monitored except that the webupdate
script should be
emailing sysadm
whenever it cannot update Front (every 15 minutes!).
Pro-active monitoring might include notifying root
of any vandalism
corrected by Monkey's quarter-hourly web update. This is a
non-trivial task that must ignore intentional changes.
Monkey's cron
jobs on Core should be systemd.timer
and .service
units.
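A minimal sketch of such units follows, assuming a hypothetical
/usr/local/sbin/webupdate script run as monkey every 15 minutes; the
names, paths and user here are illustrative, not the institute's
current configuration.

# /etc/systemd/system/webupdate.service (sketch)
[Unit]
Description=Update Front's copy of the live web site.

[Service]
Type=oneshot
User=monkey
ExecStart=/usr/local/sbin/webupdate

# /etc/systemd/system/webupdate.timer (sketch)
[Unit]
Description=Run webupdate every 15 minutes.

[Timer]
OnCalendar=*:0/15

[Install]
WantedBy=timers.target

The timer would replace the crontab entry once enabled with
systemctl enable --now webupdate.timer.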
The institute's private domain names (e.g. www.small.private
) are
not resolvable on Front. Reverse domains (86.177.10.in-addr.arpa
)
mapping institute network addresses back to names in the private
domain small.private
work only on the campus Ethernet. These nits
might be picked when OpenVPN supports the DHCP option
rdnss-selection
(RFC6731), or with hard-coded resolvectl
commands.
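For example, a hard-coded work-around on a member's machine might
push Core's address and the private domain onto the VPN's tunnel
device after it comes up; the device name tun0 here is an assumption.

resolvectl dns tun0 192.168.56.1
resolvectl domain tun0 '~small.private'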
The ./inst old dick
command does not break VPN connections to Dick's
clients. New connections cannot be created, but old connections can
continue to work for some time.
The ./inst client android dick-phone dick
command generates .ovpn
files that require the member to remember to check the "Use this
connection only for resources on its network" box in the IPv4 (and
IPv6) tab(s) of the Add VPN dialog. The command should include an
OpenVPN setting that the NetworkManager file importer recognizes as
the desired setting.
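Until then, one work-around is to set the corresponding
NetworkManager properties from the command line after importing;
campus here stands for whatever connection name the import produced.

nmcli connection modify campus ipv4.never-default yes ipv6.never-default yes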
The VPN service is overly complex. The OpenVPN 2.4.7 clients allow
multiple server addresses, but the openvpn(8)
manual page suggests
per connection parameters are restricted to a set that does not
include the essential verify-x509-name
. Use the same name on
separate certificates for Gate and Front? Use the same certificate
and key on Gate and Front?
Nextcloud should really be found at https://CLOUD.small.private/
rather than https://core.small.private/nextcloud/
, to ease
future expansion (moving services to additional machines).
HTTPS could be used for Nextcloud transactions even though they are carried on encrypted VPNs. This would eliminate a big warning on the Nextcloud Administration Overview page.
14.2. More Tests
The testing process described in the previous chapter is far from complete. Additional tests are needed.
14.2.1. Backup
The backup
command has not been tested. It needs an encrypted
partition with which to sync? And then some way to compare that to
Backup/
?
14.2.2. Restore
The restore process has not been tested. It might just copy Backup/
to core:/
, but then it probably needs to fix up file ownerships,
perhaps permissions too. It could also use an example
Backup/Nextcloud/20220622.bak
.
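A first, untested sketch of such a restore might look like the
following; the --rsync-path trick and the chosen ownerships are
assumptions to be verified, not an established procedure.

rsync -aH --rsync-path='sudo rsync' Backup/ sysadm@192.168.56.1:/
ssh sysadm@192.168.56.1 'sudo chown -R monkey:staff /WWW/live /WWW/test'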
14.2.3. Campus Disconnect
Email access (IMAPS) on front
is… difficult to test unless
core
's fetchmails are disconnected, i.e. the whole campus is
disconnected, so that new email stays on front
long enough to be
seen.
- Disconnect
gate
's NIC #2. - Send email to
dick@small.example.org
. - Find it in
/home/dick/Maildir/new/
. - Re-configure Evolution on
dick
. Edit thedick@small.example.org
mail account (or create a new one?) so that the Receiving Email Server name is192.168.15.5
, notmail.small.private
. The latter domain name will not work while the campus is disappeared. In actual use (with Front, notfront
), the institute domain name could be used.
15. Appendix: The Bootstrap
Creating the private network from whole cloth (machines with recent standard distributions installed) is not straightforward.
Standard distributions do not include all of the necessary server
software, esp. isc-dhcp-server
and bind9
for critical localnet
services. These are typically downloaded from the Internet.
To access the Internet Core needs a default route to Gate, Gate needs to forward with NAT to an ISP, Core needs to query the ISP for names, etc.: quite a bit of temporary, manual localnet configuration just to get to the additional packages.
15.1. The Current Strategy
The strategy pursued in The Hardware is two phase: prepare the servers on the Internet where additional packages are accessible, then connect them to the campus facilities (the private Ethernet switch, Wi-Fi AP, ISP), manually configure IP addresses (while the DHCP client silently fails), and avoid names until BIND9 is configured.
15.2. Starting With Gate
The strategy of Starting With Gate concentrates on configuring Gate's connection to the campus ISP in hope of allowing all to download additional packages. This seems to require manual configuration of Core or a standard rendezvous.
- Connect Gate to ISP, e.g. apartment WAN via Wi-Fi or Ethernet.
Connect Gate to private Ethernet switch.
sudo ip address add GATE dev ISPDEV
- Configure Gate to NAT from private Ethernet.
Configure Gate to serve DHCP on Ethernet, temporarily!
- Push default route through Gate, DNS from 8.8.8.8.
Or statically configure Core with address, route, and name server.
sudo ip address add CORE dev PRIVETH
sudo ip route add default via GATE
sudo sh -c 'echo "nameserver 8.8.8.8" >/etc/resolv.conf'
- Configure admin's notebook similarly?
- Test remote access from administrator's notebook.
Finally, configure Gate and Core.
ansible-playbook -l gate site.yml
ansible-playbook -l core site.yml
15.3. Pre-provision With Ansible
A refinement of the current strategy might avoid the need to maintain
(and test!) lists of "additional" packages. With Core and Gate and
the admin's notebook all together on a café Wi-Fi, Ansible might be
configured (e.g. tasks tagged) to just install the necessary
packages. The administrator would put Core's and Gate's localnet IP
addresses in Ansible's inventory file, then run just the Ansible tasks
tagged base-install
, leaving the new services in a decent (secure,
innocuous, disabled) default state.
ansible-playbook -l core -t base-install site.yml
ansible-playbook -l gate -t base-install site.yml
Footnotes:
The recommended private top-level domains are listed in "Appendix G. Private DNS Namespaces" of RFC6762 (Multicast DNS). https://www.rfc-editor.org/rfc/rfc6762#appendix-G
The cipher set specified by Let's Encrypt is large enough to turn orange many parts of an SSL Report from Qualys SSL Labs.
Presumably, eventually, a former member's home directories are archived to external storage, their other files are given new ownerships, and their Unix accounts are deleted. This has never been done, and is left as a manual exercise.
Front is accessible via Gate but routing from the host address
on vboxnet0
through Gate requires extensive interference with the
routes on Front and Gate, making the simulation less… similar.