Birchwood Abbey Networks
The abbey's network services are configured by Ansible scripts based
on A Small Institute. The institutional roles like core, gate and
front are intended for general use and so are kept free of abbey
idiosyncrasies. The roles herein are abbey specific, emphasized by
the abbey- prefix on their names. These roles are applied after
the generic institutional roles (again, documented here).
1. Overview
A Small Institute makes security and privacy top priorities but Birchwood Abbey approaches these from a particularly Elvish viewpoint. Elves depend for survival on speed, agility, and concealment. Working toward those ends (esp. the last) Birchwood Abbey's network topology was designed to look like that of an average Amerikan household. Korporate Amerika expects our ISP to provide us with a Wi-Fi/router/modem that all of our appliances can use to communicate amongst themselves in a cliquey, New World Order IoT kumbaya. We dare not disappoint.
Thus Samsung (our refrigerator) is able to browse for our printer or connect to Kroger (our grocer) or Kaiser (our health care provider) for whatever reason (presumably to report on our eating habits). The only suspicious character in this Amerikan household will be Gate, a Raspberry Pi passing many encrypted packets. Thus when the New World Police come a-knock'n (i.e. after they kick the door and kill the dog) we might still hold onto some plausible deniability.
To look most like our neighbors we sit between our smart TVs and our
smart refrigerators and consciously play the flaccid consumer
streaming Amazon and watching Blu-ray discs. This works because we
have preserved a means of escape. We may not be able to hide our
entertainment choices nor even eating habits anymore, but we can
still retreat into private correspondence between Inner Citadels.
The small institute tries to look "normal" too, so the abbey's
network map is very similar, with differences mainly in terminology,
philosophy, and attitude.
                |
                =
              _|||_
        ----- The Temple -----
        =  =             =  =
        =  =             =  =
        =====---Front---=====
                |
       -----------------
      (                 )
     (  The Internet(s)  )----(Hotel Wi-Fi)
      (                 )          |
       -----------------           +----Monk's notebook abroad
                |
=============== | ==================================================
                |                                           Premises
           (House ISP)
                |
                |    +----Monk's notebook in the house
                |    +----Samsung refrigerator
                |    +----Sony Bluray
                |    +----Lexmark printer
                |    |
                |    +----(House Wi-Fi)
                |    |        Game of Thrones
============== Gate ================================================
                |                                           Cloister
                +----Ethernet switch
                     |
                     +----Core
                     +----Security DVR
                     +----IP camera(s)
                     +----HDTV TVR
                     +----WebTV
2. The Abbey Particulars
The abbey's public particulars are included below. They are the public particulars of a small institute, nothing more.
public/vars.yml
---
domain_name: birchwood-abbey.net
full_name: Birchwood Abbey
front_addr: 159.65.75.60
The abbey's private institutional parameters are in
private/vars.yml. Example lines can be found in
Institute/private/vars.yml.

The abbey's private liturgical parameters are in
private/vars-abbey.yml. Example lines are included here and tangled
into private_ex/vars-abbey.yml.
3. The Abbey Front Role
Birchwood Abbey's front door is a Digital Ocean Droplet configured as A Small Institute Front. Thus it is already serving a public web site with Apache2, spooling email with Postfix and serving it with Dovecot-IMAPd, and hosting a VPN with WireGuard™.
3.1. Install Emacs
The monks of the abbey are masters of the staff (bo) and Emacs.
roles_t/abbey-front/tasks/main.yml
---
- name: Install Emacs.
  become: yes
  apt: pkg=emacs
3.2. Configure Public Email Aliases
The abbey uses several additional email aliases. These are the
public mailboxes @birchwood-abbey.net. The institute already funnels
the common mailboxes like postmaster and admin into root, and
forwards root to the machine's privileged account (sysadm). The
abbey takes it from there, forwarding sysadm to a real person.
roles_t/abbey-front/tasks/main.yml
- name: Install abbey email aliases.
  become: yes
  blockinfile:
    block: |
      sysadm: matt
      keymaster: root
      codemaster: matt
      all: matt, lori, erica
      elders: matt, lori
      rents: elders
      puck: matt
      abbess: lori
    dest: /etc/aliases
    marker: "# {mark} ABBEY MANAGED BLOCK"
  notify: New aliases.
roles_t/abbey-front/handlers/main.yml
---
- name: New aliases.
  become: yes
  command: newaliases
  tags: actualizer
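The chains above (e.g. rents to elders to matt and lori) can be
traced mechanically. The toy expansion below runs against a scratch
copy of two alias lines; it is only a sketch, as the real resolution
is Postfix's job once newaliases has rebuilt its database.

```shell
# Expand the chain rents -> elders -> matt, lori from a scratch
# aliases file (not the real /etc/aliases).
cat > /tmp/aliases-demo <<'EOF'
elders: matt, lori
rents: elders
EOF
rhs=$(sed -n 's/^rents: *//p' /tmp/aliases-demo)   # first hop
echo $(sed -n "s/^$rhs: *//p" /tmp/aliases-demo)   # second hop
# prints matt, lori
```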
3.3. Configure Git Daemon on Front
The abbey publishes member Git repositories with git-daemon. If Dick
(a member of A Small Institute) builds a Foo project Git repository
in ~/foo/, he can publish it to the campus by symbolically linking
its .git/ into ~/Public/Git/ on Core. If the repository is world
readable and contains a git-daemon-export-ok file, it will be served
at git://www/~dick/foo.
touch ~/foo/.git/git-daemon-export-ok
ln -s ~/foo/.git ~/Public/Git/foo
chmod -R o+r ~/foo/.git
find ~/foo/.git -type d -print0 | xargs -0 chmod o+rx
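The same steps can be rehearsed in a scratch directory, with a bare
.git stand-in instead of a real repository; only the flag file,
symlink and permissions are exercised here, no git-daemon runs.

```shell
# Rehearse the export steps in a throwaway tree (hypothetical paths).
R=$(mktemp -d)
mkdir -p "$R/foo/.git/objects" "$R/Public/Git"
touch "$R/foo/.git/git-daemon-export-ok"
ln -s "$R/foo/.git" "$R/Public/Git/foo"
chmod -R o+r "$R/foo/.git"
find "$R/foo/.git" -type d -print0 | xargs -0 chmod o+rx
test -e "$R/Public/Git/foo/git-daemon-export-ok" && echo exported
# prints exported
```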
User repositories can be made available to the public at a URL like
git://small.example.org/~dick/foo by copying it to the same path on
Front (~dick/Public/Git/foo/). The following rsync command creates
or updates such a copy.
rsync -av ~/foo/.git/ small.example.org:Public/Git/foo/
Note that Dick's Git repository, mirrored to Front (or Core), does
not need to be backed up, assuming Dick's home directory (including
~/foo/) is. If updates are git-pushed to a repository on Front,
regular backups should be made, but this is Dick's responsibility.
There are no regular, system backups on Front.
rsync -av --del small.institute.org:Public/foo/ ~/Public/foo/
With SystemD and the git-daemon-sysvinit package installed, SystemD
supervises a git-daemon service unit launched with
/etc/init.d/git-daemon. The old SysV init script gets its
configuration from the customary /etc/default/git-daemon file. The
script then constructs the appropriate git-daemon command. The
git-daemon(1) manual page explains the command options in detail.

As explained in /usr/share/doc/git-daemon-sysvinit/README.Debian,
the service must be enabled by setting GIT_DAEMON_ENABLE to true.
The base path is also changed to agree with gitweb.cgi.
User repositories are enabled by adding a user-path option and
disabling the default whitelist. To specify an empty whitelist, the
default (a list of one directory: /var/lib/git) must be avoided by
setting GIT_DAEMON_DIRECTORY to a blank (not empty) string.
The code below is included in both Front and Core configurations,
which should be nearly identical for testing purposes. Rather than
factor out small roles like abbey-git-server, Emacs Org Mode's Noweb
support does the duplication, by multiple references to code blocks
like git-tasks and git-handlers.
roles_t/abbey-front/tasks/main.yml
<<git-tasks>>
git-tasks
- name: Install git daemon.
  become: yes
  apt: pkg=git-daemon-sysvinit

- name: Configure git daemon.
  become: yes
  lineinfile:
    path: /etc/default/git-daemon
    regexp: "{{ item.patt }}"
    line: "{{ item.line }}"
  loop:
    - patt: '^GIT_DAEMON_ENABLE *='
      line: 'GIT_DAEMON_ENABLE=true'
    - patt: '^GIT_DAEMON_OPTIONS *='
      line: 'GIT_DAEMON_OPTIONS="--user-path=Public/Git"'
    - patt: '^GIT_DAEMON_BASE_PATH *='
      line: 'GIT_DAEMON_BASE_PATH="/var/www/git"'
    - patt: '^GIT_DAEMON_DIRECTORY *='
      line: 'GIT_DAEMON_DIRECTORY=" "'
  notify: Restart git daemon.

- name: Create /var/www/git/.
  become: yes
  file:
    path: /var/www/git
    state: directory
    group: staff
    mode: u=rwx,g=srwx,o=rx
roles_t/abbey-front/handlers/main.yml
<<git-handlers>>
git-handlers
- name: Restart git daemon.
  become: yes
  command: systemctl restart git-daemon
  tags: actualizer
3.4. Configure Gitweb on Front
The abbey provides an HTML interface to members' public Git
repositories using gitweb.cgi, one of the few CGI scripts allowed on
Front. Unlike the Git daemon, the Gitweb interface does not care
whether the repository contains a git-daemon-export-ok file.
Again Front and Core need to be configured congruently, so the necessary Apache directives are given here and referenced in the Apache configurations.
Like the suggested per-user rewrite rule in the gitweb(1) manual
page, the second RewriteRule specifies the root directory of the
user's public Git repositories via the GITWEB_PROJECTROOT
environment variable. It makes http://www/~dick/git run Gitweb with
the project root ~dick/Public/Git/, the same directory the
git-daemon makes available. The first RewriteRule directs URLs with
no user name to the default. Thus http://www/git lists the
repositories found in /var/www/git/.
apache-gitweb
Alias /gitweb-static/ /usr/share/gitweb/static/
<Directory "/usr/share/gitweb/static/">
    Options MultiViews
</Directory>
RewriteEngine on
RewriteRule ^/git(/.*)?$ \
    /cgi-bin/gitweb.cgi$1 [QSA,L,PT]
RewriteRule ^/\~([^\/]+)/git(/.*)?$ \
    /cgi-bin/gitweb.cgi$2 \
    [QSA,E=GITWEB_PROJECTROOT:/home/$1/Public/Git/,L,PT]
The RewriteRule flags used here are:
- QSA | qsappend
- Append the request's query string.
- E= | env
- Set or unset an environment variable.
- L | last
- Stop with this Last rule.
- PT | passthrough
- Treat the result as a URI, not a file path.
The RewriteEngine on directive must be included in the virtual host
or no rewriting will take place.
The CGI script and RewriteRule require Apache's cgi and rewrite
modules, which are not normally enabled on a small institute's public
server. Thus they need to be enabled here. Note that Debian and
Ubuntu install different Apache MPMs (multi-processing modules)
requiring different CGI modules, turning two tasks into three.
The script uses the CGI Perl module, which must be installed.
The rewrite rule maps to the URL /cgi-bin/gitweb.cgi, which is
mapped by default to /usr/lib/cgi-bin/gitweb.cgi. The git package
installs gitweb.cgi in /usr/share/gitweb/, so it and its related
index.cgi script are linked into /usr/lib/cgi-bin/.

The static/ directory, also installed in /usr/share/gitweb/, is made
available as http://www/gitweb-static/ via an Alias directive. The
global Perl configuration file, /etc/gitweb.conf, overrides the
relative URLs Gitweb normally generates, and uses the web site's
/favicon.ico.
apache-gitweb-tasks
- name: Enable Apache2 rewrite module for Gitweb.
  become: yes
  apache2_module: name=rewrite
  notify: Restart Apache2.

- name: Enable Apache2 cgid module.
  become: yes
  apache2_module: name=cgid
  notify: Restart Apache2.

- name: Install libcgi-pm-perl for Gitweb.
  become: yes
  apt: pkg=libcgi-pm-perl

- name: Link Gitweb into /cgi-bin/.
  become: yes
  file:
    state: link
    path: /usr/lib/cgi-bin/{{ item }}
    src: /usr/share/gitweb/{{ item }}
  loop: [ gitweb.cgi, index.cgi ]

- name: Override Gitweb assets location.
  become: yes
  copy:
    content: |
      $projectroot = $ENV{'GITWEB_PROJECTROOT'} || "/var/www/git";
      @stylesheets = ("/gitweb-static/gitweb.css");
      $logo = "/gitweb-static/git-logo.png";
      $favicon = "/favicon.ico";
      $javascript = "/gitweb-static/gitweb.js";
    dest: /etc/gitweb.conf
    mode: u=rw,g=r,o=r
apache-gitweb-handlers
- name: Restart Apache2.
  become: yes
  systemd:
    service: apache2
    state: restarted
  tags: actualizer
3.5. Configure Apache for Abbey Documentation
Some of the directives added to the -vhost.conf file are needed by
the abbey's documentation, published at
https://birchwood-abbey.net/Abbey/. The following template uses a
docroot variable for the actual path to the HTML. On Front this
variable is set to /home/www. The same template is used on Core, to
ensure matching configurations for accurate previews and tests.
The abbey's network documentation currently uses automatic directory indexes, and declares the types of files with several additional filename suffixes.
apache-abbey
<Directory {{ docroot }}/Abbey/>
    AllowOverride Indexes FileInfo
    Options +Indexes +FollowSymLinks
</Directory>
The following .htaccess file works with the directives above. It
declares most of the native source files in the current directory
tree to be plain text, so that they are displayed rather than
downloaded.
.htaccess
ReadmeName notfound.html
IndexIgnore README.org
AddType text/plain attr campus_vpn cfg cnf conf crt daily_letsencrypt
AddType text/plain domain el htaccess idx j2 key old org pack pem
AddType text/plain private pub public_vpn req rev sample txt yml
3.6. Configure Photos URLs on Front
Some of the directives added to the -vhost.conf file map the abbey's
abstract photo URLs, e.g. /Photos/2022_08_06/, into actual file
paths. The following template uses the docroot variable introduced
in the previous section. On Front this variable is set to /home/www.
The same template is used on Core, to ensure matching configurations
for accurate previews and tests.
apache-photos
RedirectMatch /Photos$ /Photos/
RedirectMatch /Photos/(20[0-9][0-9])_([0-9][0-9])_([0-9][0-9])$ \
    /Photos/$1_$2_$3/
AliasMatch /Photos/(20[0-9][0-9])_([0-9][0-9])_([0-9][0-9])/(.+)$ \
    {{ docroot }}/Photos/$1/$2/$3/$4
AliasMatch /Photos/(20[0-9][0-9])_([0-9][0-9])_([0-9][0-9])/$ \
    {{ docroot }}/Photos/$1/$2/$3/index.html
AliasMatch /Photos/$ {{ docroot }}/Photos/index.html
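The AliasMatch mapping can be checked outside Apache with an
equivalent sed expression. The photo file name below is
hypothetical, and docroot is assumed to be /WWW/live (its value on
Core).

```shell
# Map an abstract photo URL to the file path the AliasMatch would use.
url='/Photos/2022_08_06/DSC01234.jpg'
echo "$url" | sed -E \
  's!^/Photos/(20[0-9]{2})_([0-9]{2})_([0-9]{2})/(.+)$!/WWW/live/Photos/\1/\2/\3/\4!'
# prints /WWW/live/Photos/2022/08/06/DSC01234.jpg
```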
3.7. Configure Apache on Front
The abbey needs to add some Apache2 configuration directives to the
virtual host listening for HTTPS requests to birchwood-abbey.net.
Luckily there is support for this in the institutional
configuration. The abbey simply creates a
birchwood-abbey.net-vhost.conf file in
/etc/apache2/sites-available/.

The following task adds the apache-abbey, apache-photos, and
apache-gitweb directives described above to the -vhost.conf file,
and includes options-ssl-apache.conf from /etc/letsencrypt/. The
rest of the Let's Encrypt configuration is discussed in the
following Install Let's Encrypt section.
roles_t/abbey-front/tasks/main.yml
- name: Configure Apache.
  become: yes
  vars:
    docroot: /home/www
  copy:
    content: |
      <<apache-abbey>>
      <<apache-photos>>
      <<apache-gitweb>>
      IncludeOptional /etc/letsencrypt/options-ssl-apache.conf
    dest: /etc/apache2/sites-available/birchwood-abbey.net-vhost.conf
  notify: Restart Apache2.

<<apache-gitweb-tasks>>
roles_t/abbey-front/handlers/main.yml
<<apache-gitweb-handlers>>
3.8. Configure Apache Log Archival
These tasks hack Apache's logrotate(8) configuration to rotate
weekly, keep a couple of weeks, and email each week's log to root.
The logrotate(8) manual page explains the configuration options.

The systemd drop-in configuration tells logrotate to use a special
script for its mail program. Postfix's mail work-alike does not
take the subject as a command-line argument the way logrotate
provides it. The replacement logrotate-mailer does, and includes it
in a Subject header prepended to logrotate's message.
roles_t/abbey-front/tasks/main.yml
- name: Configure Apache log archival.
  become: yes
  lineinfile:
    path: /etc/logrotate.d/apache2
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  loop:
    - { regexp: '^ *daily', line: "\tweekly" }
    - { regexp: '^ *rotate', line: "\trotate 2" }

- name: Configure Apache log email.
  become: yes
  lineinfile:
    path: /etc/logrotate.d/apache2
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    insertbefore: " *}"
    firstmatch: yes
  loop:
    - { regexp: "^\tmail ", line: "\tmail webmaster" }
    - { regexp: "^\tmailfirst", line: "\tmailfirst" }

- name: Configure logrotate.
  become: yes
  copy:
    src: logrotate-mailer.conf
    dest: /etc/systemd/system/logrotate.service.d/mailer.conf
  notify: Reload systemd.

- name: Install logrotate mailer.
  become: yes
  copy:
    src: logrotate-mailer
    dest: /usr/local/sbin/logrotate-mailer
    mode: u=rwx,g=rx,o=rx
roles_t/abbey-front/handlers/main.yml
- name: Reload systemd.
  become: yes
  systemd:
    daemon_reload: yes
  tags: actualizer
Note that the first setting for ExecStart is intended to clear the
system's ExecStart in /lib/systemd/system/logrotate.service. (A
oneshot service like this can have multiple ExecStart settings. See
the description of ExecStart in the systemd.service(5) manual page.)
roles_t/abbey-front/files/logrotate-mailer.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/logrotate \
    --mail /usr/local/sbin/logrotate-mailer \
    /etc/logrotate.conf
The /usr/local/sbin/logrotate-mailer script (below) was originally
needed because Postfix does not provide an emulation of mail(1) and
some translation to sendmail(1) was required. Since then the script
has learned to compute the date-dependent file name, compress the
log, convert it to base64, and encapsulate it in MIME format, before
encrypting and sending to sendmail.
roles_t/abbey-front/files/logrotate-mailer
#!/bin/bash -e

if [ "$#" != 3 -o "$1" != "-s" ]; then
    echo "usage: $0 -s subject recipient" 1>&2
    exit 1
fi

D=`date -d yesterday "+%Y%m%d"`
if [[ "$2" == *error.log* ]]; then
    F="$D-error.log.gz"
else
    F="$D.log.gz"
fi

( echo "Subject: $2"
  echo ""
  ( echo "Content-Type: multipart/mixed; boundary=\"boundary\""
    echo "MIME-Version: 1.0"
    echo ""
    echo "--boundary"
    echo "Content-Type: text/plain"
    echo "Content-Transfer-Encoding: 8bit"
    echo ""
    echo "$F"
    echo "--boundary"
    echo "Content-Type: application/gzip; name=\"$F\""
    echo "Content-Disposition: attachment; filename=\"$F\""
    echo "Content-Transfer-Encoding: base64"
    echo ""
    gzip | base64
    echo ""
    echo "--boundary--" ) \
  | gpg --encrypt --armor \
        --trust-model always --recipient root@core ) \
| sendmail root \
|| exit $?
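The file name computation can be run in isolation. The subject
string below is a stand-in for whatever logrotate actually passes;
the date arithmetic matches the script above.

```shell
# Compute the date-dependent attachment name for an example subject.
subject="/var/log/apache2/error.log.1"
D=$(date -d yesterday "+%Y%m%d")
case "$subject" in
    *error.log*) F="$D-error.log.gz";;
    *)           F="$D.log.gz";;
esac
echo "$F"
```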
3.9. Install Let's Encrypt
The abbey uses a Let's Encrypt certificate to authenticate its public web site and email services. Initial installation of a Let's Encrypt certificate is a terminal session affair (with prompts and lines entered as shown below).
$ sudo apt install python3-certbot-apache
$ sudo certbot --apache -d birchwood-abbey.net
...
Enter email address (...) (Enter 'c' to cancel): webmaster@birchwood-abbey.net
...
Please read the Terms of Service at ...
(A)gree/(C)ancel: A
...
Would you be willing to share your email address...
...
(Y)es/(N)o: Y
...
Deploying Certificate to VirtualHost
/etc/apache2/sites-enabled/birchwood-abbey.net.conf
Please choose whether or not to redirect HTTP traffic to HTTPS,
removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
...
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://birchwood-abbey.net

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=birchwood-abbey.net
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IMPORTANT NOTES:
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
...
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/birchwood-abbey.net/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/birchwood-abbey.net/privkey.pem
   Your cert will expire on 2019-01-13. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again with the "certonly" option. To non-interactively renew
   *all* of your certificates, run "certbot renew"
When the /etc/letsencrypt/ directory is restored from a backup copy,
and the following tasks performed, the web server will be prepared
to do ACME (the certificate protocol) when next Let's Encrypt calls
(quarterly). The following tasks ensure the python3-certbot-apache
package is installed and /etc/letsencrypt/'s live/ subdirectory is
world readable.
roles_t/abbey-front/tasks/main.yml
- name: Install Certbot for Apache.
  become: yes
  apt: pkg=python3-certbot-apache

- name: Ensure Let's Encrypt certificate is readable.
  become: yes
  file:
    mode: u=rwx,g=rx,o=rx
    path: /etc/letsencrypt/live
Front's Dovecot (and Postfix) certificate and key are kept in
separate files, despite warnings about a race condition when
updating the pair of files, mainly because that is how they are
provided (and updated) by Let's Encrypt, but also because Let's
Encrypt's symbolic links keep the window for a mismatch extremely
small.
With the institutional configuration, Postfix, Dovecot and Apache
servers get their certificate&key from /etc/server.crt&.key. The
institutional roles check that they exist, but will not create them.
In this abbey specific role, /etc/server.crt&.key are ours to frob.
The following tasks ensure they are symbolic links to
/etc/letsencrypt/live/birchwood-abbey.net/fullchain&privkey.pem. If
/etc/letsencrypt/ was restored from a backup, the servers should be
restarted manually.
roles_t/abbey-front/tasks/main.yml
- name: Use Let's Encrypt certificate&key.
  become: yes
  file:
    state: link
    src: "{{ item.target }}"
    path: "{{ item.link }}"
    force: yes
  loop:
    - target: /etc/letsencrypt/live/birchwood-abbey.net/fullchain.pem
      link: /etc/server.crt
    - target: /etc/letsencrypt/live/birchwood-abbey.net/privkey.pem
      link: /etc/server.key
3.10. Rotate Let's Encrypt Log
The following task arranges to rotate Certbot's log files.
roles_t/abbey-front/tasks/main.yml
- name: Install Certbot logrotate configuration.
  become: yes
  copy:
    src: certbot_logrotate
    dest: /etc/logrotate.d/certbot
    mode: u=rw,g=r,o=r
roles_t/abbey-front/files/certbot_logrotate
/var/log/letsencrypt/*.log {
    rotate 12
    weekly
    compress
    missingok
}
3.11. Archive Let's Encrypt Data
A backup copy of Let's Encrypt's data (/etc/letsencrypt/) is sent to
root@core in OpenPGP encrypted email every time it changes. Changes
are detected by keeping a copy in /etc/letsencrypt~/ for comparison.
roles_t/abbey-front/tasks/main.yml
- name: Install Let's Encrypt archive script.
  become: yes
  copy:
    src: cron.daily_letsencrypt
    dest: /etc/cron.daily/letsencrypt
    mode: u=rwx,g=rx,o=rx
roles_t/abbey-front/files/cron.daily_letsencrypt
#!/bin/bash -e

cd /etc/
[ -d letsencrypt~ ] \
    && diff -rq letsencrypt/ letsencrypt~/ \
    && exit 0

F=`date "+%Y%m%d"`.tar.gz
( echo "Subject: New /etc/letsencrypt/ on Droplet."
  echo ""
  ( echo "Content-Type: multipart/mixed; boundary=\"boundary\""
    echo "MIME-Version: 1.0"
    echo ""
    echo "--boundary"
    echo "Content-Type: application/gzip; name=\"$F\""
    echo "Content-Disposition: attachment; filename=\"$F\""
    echo "Content-Transfer-Encoding: base64"
    echo ""
    tar czf - letsencrypt/ | base64
    echo ""
    echo "--boundary--" ) \
  | gpg --encrypt --armor \
        --trust-model always --recipient root@core ) \
| sendmail root \
|| exit $?

rm -rf letsencrypt~
cp -a letsencrypt letsencrypt~
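The change-detection idiom at the top of the script can be rehearsed
on scratch directories: identical trees make diff -rq succeed, which
is what lets the script exit early without mailing anything.

```shell
# Two identical scratch trees; diff -rq exits 0, so nothing is mailed.
mkdir -p /tmp/le-demo/letsencrypt /tmp/le-demo/letsencrypt~
echo same > /tmp/le-demo/letsencrypt/cert
cp /tmp/le-demo/letsencrypt/cert /tmp/le-demo/letsencrypt~/cert
cd /tmp/le-demo
if diff -rq letsencrypt/ letsencrypt~/ >/dev/null; then echo unchanged; fi
# prints unchanged
```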
The message is encrypted with root@core's public key, which is
imported into root@front's GnuPG key file.
roles_t/abbey-front/tasks/main.yml
- name: Copy root@core's public key.
  become: yes
  copy:
    src: ../Secret/root-pub.pem
    dest: /root/.gnupg-root-pub.pem
    mode: u=r,g=r,o=r
  notify: Import root@core's public key.
notify: Import root@core's public key.
roles_t/abbey-front/handlers/main.yml
- name: Import root@core's public key.
  become: yes
  command: gpg --import ~/.gnupg-root-pub.pem
4. The Abbey Core Role
Birchwood Abbey's core is a mini-PC (System76 Meerkat) configured as A Small Institute Core. Thus it is already serving a local web site with Apache2, hosting a private cloud with Nextcloud, handling email with Postfix and Dovecot, and providing essential localnet services: NTP, DNS and DHCP.
4.1. Include Abbey Variables
In this abbey specific document, most abbey particulars are not
replaced with variables, but specified in-line. Some, however, are
private (e.g. database passwords), not to be published in this
document, and so are replaced with variables set in
private/vars-abbey.yml. The file path is relative to the playbook's
directory, playbooks/.
roles_t/abbey-core/tasks/main.yml
---
- name: Include private abbey variables.
  include_vars: ../private/vars-abbey.yml
4.2. Install Additional Packages
The scripts that maintain the abbey's web site use a number of
additional software packages. The /WWW/live/Private/make-top-index
script uses HTML::TreeBuilder in the libhtml-tree-perl package.
The house task list uses jQuery.
roles_t/abbey-core/tasks/main.yml
- name: Install additional packages.
  become: yes
  apt:
    pkg: [ procmail, libhtml-tree-perl, libjs-jquery,
           mit-scheme, gnuplot ]
4.3. Configure Private Email Aliases
The abbey uses several additional email aliases. These are the campus
mailboxes @*.birchwood.private. The institute already includes
some standard system aliases, as well as mailboxes for accounts
running services: www-data and monkey. The institute funnels
these to root and forwards root to sysadm (as on Front). The
abbey takes it from there, forwarding sysadm to a real person and
including mailboxes for all accounts running services on any campus
machine. (They should all be relaying to smtp.birchwood.private
which delivers any .birchwood.private email,
e.g. mythtv@mythtv.birchwood.private, locally.)
roles_t/abbey-core/tasks/main.yml
- name: Install abbey email aliases.
  become: yes
  blockinfile:
    block: |
      sysadm: matt
      house: sysadm
      mythtv: sysadm
      scanner: sysadm
    dest: /etc/aliases
    marker: "# {mark} ABBEY MANAGED BLOCK"
  notify: New aliases.
roles_t/abbey-core/handlers/main.yml
---
- name: New aliases.
  become: yes
  command: newaliases
  tags: actualizer
4.4. Configure Git Daemon on Core
These tasks are identical to those executed on Front, providing
similar Git services on Front and Core. See Configure Git Daemon on
Front and Configure Gitweb on Front for more information.
roles_t/abbey-core/tasks/main.yml
<<git-tasks>>
roles_t/abbey-core/handlers/main.yml
<<git-handlers>>
4.5. Configure Apache on Core
The Apache2 configuration on Core specifies three web sites (live,
test, and campus). The live and test sites must operate just like the
site on Front. Their configurations include the same apache-abbey,
apache-photos, and apache-gitweb used on Front.
roles_t/abbey-core/tasks/main.yml
- name: Configure live website.
  become: yes
  vars:
    docroot: /WWW/live
  copy:
    content: |
      <<apache-abbey>>
      <<apache-photos>>
      <<apache-gitweb>>
    dest: /etc/apache2/sites-available/live-vhost.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.

- name: Configure test website.
  become: yes
  vars:
    docroot: /WWW/test
  copy:
    content: |
      <<apache-abbey>>
      <<apache-photos>>
      <<apache-gitweb>>
    dest: /etc/apache2/sites-available/test-vhost.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.

<<apache-gitweb-tasks>>
roles_t/abbey-core/handlers/main.yml
<<apache-gitweb-handlers>>
4.6. Configure Documentation URLs
The institute serves its /usr/share/doc/ on the house (campus) web
site. This is a debugging convenience, making some HTML
documentation more accessible, especially the documentation of
software installed on Core and not on typical desktop clients. Also
included: the Apache2 directives that enable user Git publishing
with Gitweb (defined here).
roles_t/abbey-core/tasks/main.yml
- name: Configure house website.
  become: yes
  copy:
    content: |
      Alias /doc /usr/share/doc
      <Directory /usr/share/doc/>
          Options Indexes
      </Directory>
      <<apache-gitweb>>
    dest: /etc/apache2/sites-available/www-vhost.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.
4.7. Install Apt Cacher
The abbey uses the Apt-Cacher NG package cache on Core. The
apt-cacher domain name is defined in private/db.domain.
roles_t/abbey-core/tasks/main.yml
- name: Install Apt-Cacher NG.
  become: yes
  apt: pkg=apt-cacher-ng
4.8. Use Cloister Apt Cache
Core itself will benefit from using the package cache, but should
contact https repositories directly. (There are few such cretins,
so caching their packages is not a priority.)
roles_t/abbey-core/tasks/main.yml
- name: Use the local Apt package cache.
  become: yes
  copy:
    content: >
      Acquire::http::Proxy
      "http://apt-cacher.birchwood.private.:3142";
      Acquire::https::Proxy "DIRECT";
    dest: /etc/apt/apt.conf.d/01proxy
    mode: u=rw,g=r,o=r
4.9. Configure NAGIOS
A small institute uses nagios4 to monitor the health of its network,
with an initial smattering of monitors adopted from the Debian
monitoring-plugins package. Thus a NAGIOS4 server on the abbey's
Core monitors core network services, and uses nagios-nrpe-server to
monitor Gate. The abbey adds several more monitors, installing
additional configuration files in /etc/nagios4/conf.d/, a
check_mdstat plugin from https://exchange.nagios.org/ on Core, and
another customized check_sensors plugin (abbey_pisensors) on the
Raspberry Pis.
4.9.1. Monitoring The Home Disk
The abbey adds monitoring of the space remaining on the volume at
/home/ on Core. (The small institute only monitors the space
remaining on root partitions.) The abbey also monitors the state of
the RAID-5 array under /home/.
roles_t/abbey-core/tasks/main.yml
- name: Configure NAGIOS monitoring for Core /home/.
  become: yes
  copy:
    content: |
      define service {
          use                     local-service
          host_name               core
          service_description     Home Partition
          check_command           check_local_disk!20%!10%!/home
      }
      define service {
          use                     local-service
          host_name               core
          service_description     Home RAID
          check_command           check_mdstat!md0!3
      }
      define command {
          command_name    check_mdstat
          command_line    /usr/local/sbin/check_mdstat $ARG1$ $ARG2$
      }
    dest: /etc/nagios4/conf.d/abbey.cfg
  notify: Reload NAGIOS4.

- name: Install NAGIOS monitor check_mdstat.
  become: yes
  copy:
    src: ../abbey-core/files/check_mdstat
    dest: /usr/local/sbin/check_mdstat
    mode: u=rwx,g=rx,o=rx
roles_t/abbey-core/handlers/main.yml
- name: Reload NAGIOS4.
  become: yes
  systemd:
    service: nagios4
    state: reloaded
  tags: actualizer
4.9.2. Custom NAGIOS Monitor abbey_pisensors
The check_sensors plugin is included in the package
monitoring-plugins-basic, but it does not report any readings. The
small institute substitutes a Custom NAGIOS Monitor inst_sensors
that reports core CPU temperatures, but the sensors command on a
Raspberry Pi does not reveal core CPU temperatures, so the abbey
includes yet another version, abbey_pisensors, that reports any
recognizable temperature in the sensors output.
roles_t/abbey-core/files/abbey_pisensors
#!/bin/sh

PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
export PATH
PROGNAME=`basename $0`
REVISION="2.3.1"

. /usr/lib/nagios/plugins/utils.sh

print_usage() {
    echo "Usage: $PROGNAME [--ignore-fault]"
}

print_help() {
    print_revision $PROGNAME $REVISION
    echo ""
    print_usage
    echo ""
    echo "This plugin checks hardware status using" \
         "the lm_sensors package."
    echo ""
    support
    exit $STATE_OK
}

brief_data() {
    echo "$1" | sed -n -E -e '
      /^temp[0-9]+: +[-+][0-9.]+.?C/ {
        s/^temp[0-9]+: +([-+][0-9.]+).?C.*/ \1/; H }
      $ { x; s/\n//g; p }'
}

case "$1" in
    --help)
        print_help
        exit $STATE_OK
        ;;
    -h)
        print_help
        exit $STATE_OK
        ;;
    --version)
        print_revision $PROGNAME $REVISION
        exit $STATE_OK
        ;;
    -V)
        print_revision $PROGNAME $REVISION
        exit $STATE_OK
        ;;
    *)
        sensordata=`sensors 2>&1`
        status=$?
        if test ${status} -eq 127; then
            text="SENSORS UNKNOWN - command not found"
            text="$text (did you install lmsensors?)"
            exit=$STATE_UNKNOWN
        elif test ${status} -ne 0; then
            text="WARNING - sensors returned state $status"
            exit=$STATE_WARNING
        elif echo ${sensordata} | egrep ALARM > /dev/null; then
            text="SENSOR CRITICAL -`brief_data "${sensordata}"`"
            exit=$STATE_CRITICAL
        elif echo ${sensordata} | egrep FAULT > /dev/null \
             && test "$1" != "-i" -a "$1" != "--ignore-fault"; then
            text="SENSOR UNKNOWN - Sensor reported fault"
            exit=$STATE_UNKNOWN
        else
            text="SENSORS OK -`brief_data "${sensordata}"`"
            exit=$STATE_OK
        fi
        echo "$text"
        if test "$1" = "-v" -o "$1" = "--verbose"; then
            echo ${sensordata}
        fi
        exit $exit
        ;;
esac
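The brief_data extraction can be exercised with canned input. Real
sensors output writes the unit as °C; the pattern's .? absorbs the
degree sign, so this ASCII-only sample (made-up readings) matches
too.

```shell
# Feed sample sensors-style lines through the same sed as brief_data.
sample='temp1:        +47.2C  (crit = +110.0C)
temp2:        +46.1C  (crit = +110.0C)'
echo "$sample" | sed -n -E -e '
  /^temp[0-9]+: +[-+][0-9.]+.?C/ {
    s/^temp[0-9]+: +([-+][0-9.]+).?C.*/ \1/; H }
  $ { x; s/\n//g; p }'
# prints " +47.2 +46.1" (one leading space per reading)
```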
4.9.3. Stolen NAGIOS Monitor check_mdstat
This check_mdstat plugin was copied from the NAGIOS Exchange (here).
It detects a failing disk in a multi-disk array.
roles_t/abbey-core/files/check_mdstat
#!/usr/bin/env bash
# nagios script checks for failed raid device
# linux software raid /proc/mdstat
# karl@webmedianow.com 2013-10-01
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH
usage() {
cat <<-EOE
Usage: $0 mdadm_device total_drives
mdadm_device is md0, md1, etc...
total_drives is 2 for mirror, or 3, 4 etc...
Nagios script to check if failed drive in /proc/mdstat
Example: raid 2 (2 disk mirror)
/opt/nagios/libexec/check_mdstat.sh md0 2
Example: raid 5 with 8 disks
/opt/nagios/libexec/check_mdstat.sh md0 8
EOE
exit $STATE_UNKNOWN
}
if [ $# -lt 2 ]; then
usage
fi
cmd_device="$1"
drive_num="$2"
U=""
for i in $(seq 1 $drive_num);
do
U="${U}U"
done
uu="[${U}]"
nn="[${drive_num}/${drive_num}]"
#cat /proc/mdstat | grep -A 1 ^md1 | tail -1 | awk '{print ($(NF))}'
# [UUUUUUUU] is OK raid
# [_U] is Failed Drive
# check if we have correct device...
if cat /proc/mdstat | grep ^${cmd_device} | awk '{print $1}' | grep ^${cmd_device}$ >/dev/null 2>&1
then
device=$cmd_device
else
echo "Couldn't match $cmd_device"
exit $STATE_UNKNOWN
fi
u_status=$(cat /proc/mdstat | grep -A 1 ^${device} | tail -1 | awk '{print ($(NF))}')
n_status=$(cat /proc/mdstat | grep -A 1 ^${device} | tail -1 | awk '{print ($(NF-1))}')
if [ $uu = $u_status ] && [ $nn = $n_status ]; then
echo "OK: $device $n_status $u_status"
exit $STATE_OK
else
echo "FAIL: $device $n_status $u_status"
exit $STATE_CRITICAL
fi
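The extraction at the heart of the plugin can be sketched against a fabricated /proc/mdstat excerpt (a hypothetical healthy two-disk mirror), showing where the `[2/2]` and `[UU]` fields come from:

```shell
# Fabricated /proc/mdstat lines for a healthy 2-disk RAID1 array.
sample='md0 : active raid1 sdb1[1] sda1[0]
      976630464 blocks super 1.2 [2/2] [UU]'
# The status fields are the last two words of the line after the mdN line.
u_status=$(echo "$sample" | grep -A 1 '^md0' | tail -1 | awk '{print $NF}')
n_status=$(echo "$sample" | grep -A 1 '^md0' | tail -1 | awk '{print $(NF-1)}')
echo "$n_status $u_status"    # → [2/2] [UU]
```

A failed drive would show e.g. `[2/1] [_U]`, which fails both comparisons and yields the CRITICAL exit.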
4.9.4. Configure NAGIOS Monitoring of The Cloister
The abbey adds monitoring for more servers: Dantooine and Kessel.
They are abbey-cloister servers, so they are configured as small
institute campus servers, like Gate, with an NRPE (a NAGIOS Remote
Plugin Executor) server and an inst_sensors command.
The configurations for these servers are very similar to Gate's, but are idiosyncratically in flux.
4.9.4.1. Cloister Network Addresses
The IP addresses of all three hosts are nice to use in the NAGIOS
configuration (to avoid depending on name service) and so are
included in private/vars-abbey.yml.
private_ex/vars-abbey.yml
---
dantooine_addr: 10.84.138.8
kessel_addr: 10.84.138.10
4.9.4.2. Install NAGIOS Configurations
The following task installs each host's NAGIOS configuration.
roles_t/abbey-core/tasks/main.yml
- name: Configure cloister NAGIOS monitoring.
become: yes
template:
src: nagios-{{ item }}.cfg
dest: /etc/nagios4/conf.d/{{ item }}.cfg
loop: [ dantooine, kessel ]
notify: Reload NAGIOS4.
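The loop renders one template per monitored host; schematically, the source-to-destination mapping is:

```shell
# Map each looped item to its template source and installed destination.
out=$(for item in dantooine kessel; do
  printf 'nagios-%s.cfg -> /etc/nagios4/conf.d/%s.cfg\n' "$item" "$item"
done)
echo "$out"
```

Adding a third cloister server is then a matter of extending the loop list and providing the matching template.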
4.9.4.3. NAGIOS Monitoring of Dantooine
roles_t/abbey-core/templates/nagios-dantooine.cfg
define host {
use linux-server
host_name dantooine
address {{ dantooine_addr }}
}
define service {
use generic-service
host_name dantooine
service_description Root Partition
check_command check_nrpe!inst_root
}
define service {
use generic-service
host_name dantooine
service_description DVR Recordings
check_command check_nrpe!abbey_dvr
}
# define service {
# use generic-service
# host_name dantooine
# service_description Current Load
# check_command check_nrpe!check_load
# }
define service {
use generic-service
host_name dantooine
service_description Zombie Processes
check_command check_nrpe!check_zombie_procs
}
# define service {
# use generic-service
# host_name dantooine
# service_description Total Processes
# check_command check_nrpe!check_total_procs
# }
define service {
use generic-service
host_name dantooine
service_description Swap Usage
check_command check_nrpe!inst_swap
}
define service {
use generic-service
host_name dantooine
service_description Temperature Sensors
check_command check_nrpe!inst_sensors
}
4.9.4.4. NAGIOS Monitoring of Kessel
roles_t/abbey-core/templates/nagios-kessel.cfg
define host {
use linux-server
host_name kessel
address {{ kessel_addr }}
}
define service {
use generic-service
host_name kessel
service_description Root Partition
check_command check_nrpe!inst_root
}
# define service {
# use generic-service
# host_name kessel
# service_description Current Load
# check_command check_nrpe!check_load
# }
define service {
use generic-service
host_name kessel
service_description Zombie Processes
check_command check_nrpe!check_zombie_procs
}
# define service {
# use generic-service
# host_name kessel
# service_description Total Processes
# check_command check_nrpe!check_total_procs
# }
define service {
use generic-service
host_name kessel
service_description Swap Usage
check_command check_nrpe!inst_swap
}
define service {
use generic-service
host_name kessel
service_description Temperature Sensors
check_command check_nrpe!inst_sensors
}
4.10. Install Munin
The abbey is experimenting with Munin. NAGIOS is all about notifying the Sys. Admin. of failed services. Munin is more about tracking trends in resource usage.
roles_t/abbey-core/tasks/main.yml
- name: Install Munin.
become: yes
apt: pkg=munin
- name: Add {{ ansible_user }} to munin group.
become: yes
user:
name: "{{ ansible_user }}"
append: yes
groups: munin
- name: Enable network access to Munin.
become: yes
lineinfile:
path: /etc/munin/apache24.conf
regexp: '([^#]*)Require'
line: '\1Require all granted'
backrefs: yes
notify: Restart Apache2.
- name: Punt default Munin node.
become: yes
ini_file:
section: "localhost.localdomain"
state: absent
backup: true
path: /etc/munin/munin.conf
notify: Restart Munin.
- name: Configure actual Munin nodes.
become: yes
copy:
content: |
[malastare.birchwood.private]
address 127.0.0.1
[anoat.birchwood.private]
address {{ gate_addr }}
[dantooine.birchwood.private]
address {{ dantooine_addr }}
[kessel.birchwood.private]
address {{ kessel_addr }}
dest: /etc/munin/munin-conf.d/zzz-site.cfg
notify: Restart Munin.
The core machine's sensors produce some unfortunate measurements. The
next task configures libsensors to ignore them.
roles_t/abbey-core/tasks/main.yml
- name: Configure core sensors(1).
become: yes
copy:
content: |
chip "iwlwifi_1-virtual-0"
ignore temp1
chip "acpitz-acpi-0"
ignore temp1
dest: /etc/sensors.d/site.conf
roles_t/abbey-core/handlers/main.yml
- name: Restart Munin.
become: yes
systemd:
service: munin
state: restarted
tags: actualizer
4.11. Install Analog
The abbey's public web site's access and error logs are emailed
regularly to webmaster, who saves them in /Logs/apache2-public/
and runs analog as monkey to generate /WWW/campus/analog.html,
available to the campus as http://www/analog.html.
sudo -u monkey analog
The analog package includes a manual, how-to's and examples in
/usr/share/doc/analog/. The HTML portions can be viewed on campus
at http://www/doc/analog/.
roles_t/abbey-core/tasks/main.yml
- name: Install Analog.
become: yes
apt: pkg=analog
- name: Configure Analog.
become: yes
vars:
dir: /Logs/apache2-public
lineinfile:
path: /etc/analog.cfg
regexp: "{{ item.regx }}"
line: "{{ item.line }}"
insertafter: EOF
loop:
- { regx: "^LOGFILE ", line: "LOGFILE {{ dir }}/202?????.log.gz" }
- { regx: "^OUTFILE ", line: "OUTFILE /WWW/campus/analog.html" }
- { regx: "HOSTNAME ", line: "HOSTNAME \"{{ full_name }}\"" }
- { regx: "^ALLCHART ", line: "ALLCHART OFF" }
- { regx: "^DNS ", line: "DNS WRITE" }
- { regx: "^DNSFILE ", line: "DNSFILE /Logs/dnscache" }
- name: Create /Logs/.
become: yes
file:
path: /Logs
state: directory
mode: u=rwx,g=rx,o=rx
- name: Create /Logs/dnscache.
become: yes
file:
path: /Logs/dnscache
state: touch
modification_time: preserve
access_time: preserve
owner: monkey
group: monkey
mode: u=rw,g=r,o=r
- name: Create /Logs/apache2-public/.
become: yes
file:
path: /Logs/apache2-public
state: directory
owner: monkey
group: staff
mode: u=rwx,g=srwx,o=rx
- name: Create /WWW/campus/analog/.
become: yes
file:
state: link
path: /WWW/campus/analog
src: /usr/share/analog/images
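Rendered with a hypothetical full_name of "Birchwood Abbey", the Configure Analog task above leaves lines like these at the end of /etc/analog.cfg:

```
LOGFILE /Logs/apache2-public/202?????.log.gz
OUTFILE /WWW/campus/analog.html
HOSTNAME "Birchwood Abbey"
ALLCHART OFF
DNS WRITE
DNSFILE /Logs/dnscache
```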
4.12. Add Monkey to Web Server Group
Monkey needs to be in www-data so that it can run
/WWW/live/Photos/Private/cronjob to publish photos from multiple
user cloud accounts, found in files owned by www-data, files like
InstantUpload/Camera/2021/01/IMG_20210115_092838.jpg in
/var/www/nextcloud/data/$USER/files/.
roles_t/abbey-core/tasks/main.yml
- name: Add Monkey to Nextcloud group.
become: yes
user:
name: monkey
append: yes
groups: www-data
4.13. Install netpbm For Photo Processing
Monkey's photo processing scripts use netpbm commands like
jpegtopnm.
roles_t/abbey-core/tasks/main.yml
- name: Install netpbm.
become: yes
apt: pkg=netpbm
5. The Abbey Gate Role
Birchwood Abbey's gate is a $110 µPC configured as A Small Institute
Gate, thus providing a campus VPN on a campus Wi-Fi access point. It
routes network traffic from its wild and lan interfaces to its
isp interface (and back) with NAT. The abbey adds masquerading
between its private interfaces (lan and wg0) and wild. This
allows access to the Abbey's IoT appliances: a HomeAssistant and an
Ecowitt hub.
5.1. The Abbey Gate's Network Interfaces
The abbey gate's lan interface is the PC's built-in Ethernet
interface, connected to the cloister Ethernet, a Gigabit Ethernet
switch. Its wild interface is a USB3.0 Ethernet adapter connected
to a 5-port Gigabit Ethernet switch into which are patched the WAN
interfaces of two Think Penguin TPE-R1300 (and sometimes a Linksys
WRT1900AC), as well as a couple IoT things like an Ecowitt hub and a
HomeAssistant Pi. The isp interface is another USB3.0 Ethernet
adapter connected with a cross-over cable to the Ethernet interface of
a "cable modem" (a Starlink terminal).
The MAC address of each interface is set in private/vars.yml
(see Institute/private/vars.yml) as the values of the gate_lan_mac,
gate_wild_mac and gate_isp_mac variables.
5.2. The Abbey's IoT Network
To allow masquerading between the private subnets and wild, the
following iptables(8) rules are added. They are very similar to the
nat and filter table rules used by a small institute to masquerade
its lan to its isp (see the UFW Rules of a Small Institute).
The campus WireGuard™ subnet is not included because the campus Wi-Fi
hosts should be routing to the wild subnet directly and are assumed to
be masquerading as their access point(s).
iot-nat-A POSTROUTING -s {{ private_net_cidr }} -o wild -j MASQUERADE
-A POSTROUTING -s {{ public_wg_net_cidr }} -o wild -j MASQUERADE
iot-forward-A ufw-user-forward -i lan -o wild -j ACCEPT
-A ufw-user-forward -i wg0 -o wild -j ACCEPT
The lan interface encompasses the private LAN and the public VPN.
The second rule includes the campus VPN.
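For illustration (with made-up CIDRs 192.168.56.0/24 and 10.177.86.0/24 standing in for private_net_cidr and public_wg_net_cidr), the iot-nat block renders as:

```
-A POSTROUTING -s 192.168.56.0/24 -o wild -j MASQUERADE
-A POSTROUTING -s 10.177.86.0/24 -o wild -j MASQUERADE
```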
5.3. Configure UFW for IoT
The following tasks install the additional rules in before.rules
and user.rules (as in Configure UFW).
roles_t/abbey-gate/tasks/main.yml
---
- name: Configure UFW NAT rules for IoT.
become: yes
blockinfile:
block: |
*nat
<<iot-nat>>
COMMIT
dest: /etc/ufw/before.rules
marker: "# {mark} ABBEY MANAGED BLOCK"
insertafter: EOF
prepend_newline: yes
- name: Configure UFW FORWARD rules for IoT.
become: yes
blockinfile:
block: |
*filter
<<iot-forward>>
COMMIT
dest: /etc/ufw/user.rules
marker: "# {mark} ABBEY MANAGED BLOCK"
insertafter: EOF
prepend_newline: yes
5.4. The Abbey's Starlink Configuration
The abbey connects to Starlink via Ethernet, and disables Starlink's Wi-Fi access point. An Ethernet adapter add-on (ordered separately) was installed on the Starlink cable, and a second USB-Ethernet dongle on Gate. The adapters were then connected with a cross-over cable.
The abbey could have avoided buying a separate cloister Wi-Fi access point, and used Starlink's Wi-Fi instead, with or without its add-on Ethernet interface. Instead, the abbey invested in a 2.4GHz-only Think Penguin access point, and connected it to a third Ethernet interface on Gate. This was preferred for a number of reasons.
- The abbey uses ISPs other than Starlink, tethering to a cellphone when under trees, or even limping along on campground Wi-Fi where the land of woven trees has cut off even cell service.
- The abbey uses long and complex passwords, especially on public facing services like Wi-Fi. Such a password has been laboriously entered into several household IoT devices. Connecting them to a dedicated, ISP-independent cloister Wi-Fi access point ensures a reliable IoT with zero re-configuration.
- Using Starlink's add-on Ethernet interface allowed its Wi-Fi to be disabled, reducing the Wi-Fi clutter in the campground ether.
- The Think Penguin access point is transparent, trustworthy hardware that has earned a Respects Your Freedom certification (see https://ryf.fsf.org/).
- And most importantly, a dedicated and trustworthy cloister Wi-Fi keeps at least our local network traffic out of view of our ISPs.
5.5. Alternate ISPs
The abbey used to use a cell phone on a USB tether to get Internet
service. At that time, Gate's /etc/netplan/60-isp.yaml file was the
following.
network:
ethernets:
tether:
match:
name: usb0
set-name: isp
dhcp4: true
dhcp4-overrides:
use-dns: false
The abbey has occasionally used a campground Wi-Fi for Internet
service, using a 60-isp.yaml file similar to the lines below.
network:
wifis:
tether:
match:
name: wlan0
set-name: isp
dhcp4: true
dhcp4-overrides:
use-dns: false
access-points:
"AP with password":
password: "password"
"AP with no password": {}
6. The Abbey Cloister Role
Birchwood Abbey's cloister is a small institute campus. The campus
role configures all campus machines to trust the institute's CA, sync
with the campus time server, and forward email to Core. The
abbey-cloister role additionally configures cloistered machines to
use the cloister Apt cache, respond to Core's NAGIOS and Munin network
monitors, and to install Emacs. There are also a few OS specific
tasks, namely configuration required on Raspberry Pi OS machines.
Wireless clients are issued keys for the cloister VPN by the ./abbey
client command which is currently identical to the ./inst client
command (described in The Client Command). The wireless, cloistered
hosts never roam, are not associated with a member, and so are
"campus" clients, issued keys with commands like this:
./abbey client campus new-host-name \
S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
6.1. Use Cloister Apt Cache
The Apt-Cacher:TNG program does not work well on the frontier, so it is not a common part of a small institute. But it is helpful even to a cloister of fewer than a dozen hosts (especially a homogeneous cloister installing many of the same packages), so it is tolerable to the abbey's monks. Monks are patient enough to re-run failed scans repeatedly until few or no incomplete or damaged files are found. Depending on the quality of the Internet connection, this may take a while.
Again, https repositories are contacted directly, cached only on the
local host.
roles_t/abbey-cloister/tasks/main.yml
---
- name: Use the local Apt package cache.
become: yes
copy:
content: >
Acquire::http::Proxy
"http://apt-cacher.birchwood.private.:3142";
Acquire::https::Proxy "DIRECT";
dest: /etc/apt/apt.conf.d/01proxy
mode: u=rw,g=r,o=r
6.2. Configure Cloister NRPE
Each cloistered host is a small institute campus host and thus is
already running an NRPE server (a NAGIOS Remote Plugin Executor
server) with a custom inst_sensors monitor (described in Configure
NRPE of A Small Institute). The abbey adds one complication: yet
another check_sensors variant, abbey_pisensors, installed on
Raspberry Pis (architecture aarch64) only.
roles_t/abbey-cloister/tasks/main.yml
- name: Install abbey_pisensors NAGIOS plugin.
become: yes
copy:
src: ../abbey-core/files/abbey_pisensors
dest: /usr/local/sbin/abbey_pisensors
mode: u=rwx,g=rx,o=rx
when: ansible_architecture == 'aarch64'
- name: Configure NAGIOS monitor abbey_pisensors.
become: yes
copy:
content: |
command[abbey_pisensors]=/usr/local/sbin/abbey_pisensors
dest: /etc/nagios/nrpe.d/abbey.cfg
when: ansible_architecture == 'aarch64'
notify: Reload NRPE server.
roles_t/abbey-cloister/handlers/main.yml
---
- name: Reload NRPE server.
become: yes
systemd:
service: nagios-nrpe-server
state: reloaded
tags: actualizer
6.3. Install Munin Node
Each cloistered host is a Munin node.
roles_t/abbey-cloister/tasks/main.yml
- name: Install Munin Node.
become: yes
apt: pkg=munin-node
- name: Configure Munin Node.
become: yes
lineinfile:
regexp: "^allow [\\^]{{ core_addr|regex_escape|regex_escape }}[$]$"
line: "allow ^{{ core_addr|regex_escape }}$"
path: /etc/munin/munin-node.conf
notify: Restart Munin node.
- name: Add {{ ansible_user }} to munin group.
become: yes
user:
name: "{{ ansible_user }}"
append: yes
groups: munin
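The lineinfile above rewrites munin-node.conf's allow directive, whose value is itself an anchored regular expression. A sketch of the rendered line, with a hypothetical core_addr of 192.168.56.1 (the sed expression plays the part of regex_escape for a dotted quad):

```shell
core_addr='192.168.56.1'                                # hypothetical address
escaped=$(printf '%s' "$core_addr" | sed 's/\./\\./g')  # dots become \.
line="allow ^$escaped\$"
echo "$line"    # → allow ^192\.168\.56\.1$
```

Munin node then accepts connections only from Core's exact address.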
Again, one of our cloistered hosts has sensors producing unfortunate
measurements. The next task configures Anoat's libsensors to ignore
them.
roles_t/abbey-cloister/tasks/main.yml
- name: Configure {{ inventory_hostname }} sensors(1).
become: yes
copy:
content: |
chip "iwlwifi_1-virtual-0"
ignore temp1
chip "acpitz-acpi-0"
ignore temp1
dest: /etc/sensors.d/site.conf
when: inventory_hostname == 'anoat'
6.4. Install Emacs
The monks of the abbey are masters of the staff and Emacs.
roles_t/abbey-cloister/tasks/main.yml
- name: Install monastic software.
become: yes
apt: pkg=emacs
7. The Abbey Weather Role
Birchwood Abbey now uses Home Assistant to record and display weather data from an Ecowitt GW2001 IoT gateway connecting wirelessly to a WS90 (7 function weather station) and a couple WN31s (temp/humidity sensors).
The configuration of the GW2001 IoT hub involved turning off the Wi-Fi access point, and disabling unused channels. The hub reports the data from all sensors in range, anyone's sensors. These new data sources are noticed and recorded by Home Assistant automatically as similarly equipped campers come and go. Disabling unused channels helps avoid these distractions.
The configuration of Home Assistant involved installing the Ecowitt "integration". This was accomplished by choosing "Settings", then "Devices & services", then "Add Integration", and searching for "Ecowitt". Once installed, the integration created dozens of weather entities. These were labeled and organized on an "Abbey" dashboard.
8. The Abbey DVR Role
The abbey uses AgentDVR to record video from PoE IP HD security
cameras. It runs as user agentdvr and keeps all of its
configuration and recordings in /home/agentdvr/.
8.1. Install AgentDVR
AgentDVR is installed according to the iSpy web site's latest instructions. The "download" button on iSpy's Download page (https://www.ispyconnect.com/download), when "Agent DVR - Linux/ macOS/ RPi" is chosen, suggests the following command lines (the second of which is broken across three lines).
sudo apt-get install curl
bash <(curl -s "https://raw.githubusercontent.com/\
ispysoftware/agent-install-scripts/main/v2/\
install.sh")
The second command fetches and runs an installation script that
executes several sudo commands. These commands can be run by the
agentdvr account if it has (temporary) authorization.
8.1.1. Prepare for AgentDVR Installation
The following commands are manually executed to create the agentdvr
account and authorize it to run a handful of system commands as
root. This small set is sufficient to run the installation script
if the offer to create the system service is declined.
The commands validate the config file, 01agentdvr, before installing
it because a syntax error can make the sudo command inoperative,
cutting off access to all elevated privileges until a "rescue"
(involving a reboot) is performed.
sudo adduser --disabled-password agentdvr
echo "ALL ALL=(agentdvr) NOPASSWD: /bin/systemctl,/bin/apt-get,\
/sbin/adduser,/sbin/usermod" >~/01agentdvr
sudo chown root:root ~/01agentdvr
sudo chmod 440 ~/01agentdvr
visudo --check --owner --perms ~/01agentdvr
sudo mv ~/01agentdvr /etc/sudoers.d/
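For reference, the echo above (split across two shell lines by the backslash continuation) leaves a single line in 01agentdvr:

```
ALL ALL=(agentdvr) NOPASSWD: /bin/systemctl,/bin/apt-get,/sbin/adduser,/sbin/usermod
```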
8.1.2. Execute AgentDVR Installation
With the above preparations, the system administrator can run iSpy's
installation script under the agentdvr account, in the (initially
empty) /home/agentdvr/ directory.
cd /home/agentdvr/
sudo apt-get install curl
curl -s "https:.../install.sh" | sudo -u agentdvr bash
The script creates the /home/agentdvr/AgentDVR/ directory, and
offers to install a system service. The offer is declined. Instead,
Ansible is run again.
8.1.3. Complete AgentDVR Installation
When Ansible is run a second time, after the installation script, it
sees the new /home/agentdvr/AgentDVR/ directory and creates (and
starts) the new system service.
./abbey config dvrs
Also after the installation, the system administrator revokes the
agentdvr account's authorizations to modify packages and accounts.
sudo rm /etc/sudoers.d/01agentdvr
8.2. Configure User agentdvr
AgentDVR runs as the system user agentdvr, which is configured here.
(The account should have been created by the installation or
restoration of AgentDVR.)
roles_t/abbey-dvr/tasks/main.yml
---
- name: Create agentdvr.
become: yes
user:
name: agentdvr
password: "!"
home: /home/agentdvr
shell: /bin/bash
append: yes
groups: video
- name: Add {{ ansible_user }} to agentdvr group.
become: yes
user:
name: "{{ ansible_user }}"
append: yes
groups: agentdvr
- name: Create /home/agentdvr/.
become: yes
file:
path: /home/agentdvr
state: directory
owner: agentdvr
group: agentdvr
mode: u=rwx,g=rwxs,o=rx
8.3. Test For AgentDVR/
The following task probes for the /home/agentdvr/AgentDVR/
directory, to detect that the build/install process has completed. It
registers the results in the agentdvr variable. Several of the
remaining installation steps are skipped unless
agentdvr.stat.exists.
roles_t/abbey-dvr/tasks/main.yml
- name: Test for AgentDVR directory.
stat:
path: /home/agentdvr/AgentDVR
register: agentdvr
- debug:
msg: "/home/agentdvr/AgentDVR/ does not yet exist"
when: not agentdvr.stat.exists
8.4. Create AgentDVR Service
This service definition came from the template downloaded (from here)
by the installer, specifically the linux_setup2.sh script downloaded
by install.sh.
roles_t/abbey-dvr/tasks/main.yml
- name: Install AgentDVR.service.
become: yes
copy:
content: |
[Unit]
Description=AgentDVR
[Service]
WorkingDirectory=/home/agentdvr/AgentDVR
ExecStart=/home/agentdvr/AgentDVR/Agent
# fix memory management issue with dotnet core
Environment="MALLOC_TRIM_THRESHOLD_=100000"
# to query logs using journalctl, set a logical name here
SyslogIdentifier=AgentDVR
User=agentdvr
# ensure the service automatically restarts
Restart=always
# amount of time to wait before restarting the service
RestartSec=5
[Install]
WantedBy=multi-user.target
dest: /etc/systemd/system/AgentDVR.service
- name: Start AgentDVR.service.
become: yes
systemd:
service: AgentDVR
state: started
when: agentdvr.stat.exists
tags: actualizer
- name: Enable AgentDVR.service.
become: yes
systemd:
service: AgentDVR
enabled: yes
when: agentdvr.stat.exists
8.5. Create AgentDVR Storage
The abbey uses a separate volume to store surveillance recordings,
lest the DVR program fill the root file system. The volume is mounted
at /DVR/. The following tasks create /DVR/AgentDVR/video/
(whether a large volume is mounted there or not!) with appropriate
permissions so that the instructions for configuring a default storage
location do not fail.
roles_t/abbey-dvr/tasks/main.yml
- name: Create /DVR/AgentDVR/.
become: yes
file:
state: directory
path: /DVR/AgentDVR
owner: agentdvr
group: agentdvr
mode: u=rwx,g=rxs,o=
- name: Create /DVR/AgentDVR/video/.
become: yes
file:
state: directory
path: /DVR/AgentDVR/video
owner: agentdvr
group: agentdvr
mode: u=rwx,g=rxs,o=
8.6. Install Custom NAGIOS Monitor abbey_dvr
DVR hosts install a custom NRPE plugin named abbey_dvr to monitor
the storage available on /DVR/.
roles_t/abbey-dvr/tasks/main.yml
- name: Configure NAGIOS command abbey_dvr.
become: yes
vars:
lib: /usr/lib/nagios/plugins
copy:
content: |
command[abbey_dvr]={{ lib }}/check_disk -w 20% -c 10% -p /DVR
dest: /etc/nagios/nrpe.d/abbey.cfg
notify: Reload NRPE server.
roles_t/abbey-dvr/handlers/main.yml
---
- name: Reload NRPE server.
become: yes
systemd:
service: nagios-nrpe-server
state: reloaded
tags: actualizer
8.7. Configure IP Cameras
A new security camera is setup as described in Cloistering, after
which the camera should be accessible by name on the abbey networks.
Assuming ping -c1 new works, the camera's web interface will be
accessible at http://new/.
The administrator uses this to make the following changes.
- Set a password on the administrative account.
- Create an unprivileged user with a short password, e.g. user:blah. (Lately, user accounts are not supported!)
- Set the frame rate to 5fps. The abbey prefers HD resolution and long duration logs, thus fewer frames per second.
- Turn off on-screen displays (OSDs), motion detection, object recognition, etc.
- Configuring the timezone or the use of NTP (the network time protocol) is nice but optional.
8.8. Configure AgentDVR's Cameras
After Ansible has configured and started the AgentDVR service, its web
UI will be available at http://core:8090/. The initial Live View
will be empty, overlaid with instructions to click the edit button.
A view apparently must be created before devices can be added. Then
the device wizard asks for each device's general configuration
parameters. The abbey uses SV3C IP cameras with a full HD stream as
well as a standard definition "vice stream". AgentDVR can use both,
so the following settings are used on each device.
- General:
- Name: Stern
- Source Type: Network Camera
- Username: user
- Password: blah
- Live URL: rtsp://camera3.birchwood.private:554/12
- Record URL: rtsp://camera3.birchwood.private:554/11
Note that each device's recordings are also configured as described below.
Additional cameras are added via the "New Device" item in the Server Menu. This step is completed when all cameras are streaming to AgentDVR's Live View.
8.9. Configure AgentDVR's Default Storage
AgentDVR's web interface is also used to configure a default storage
location. From the Server Menu (upper left), the administrator chooses
Configuration Settings, the Storage tab, the Configure button, and the
add (plus) button. The storage location is set to /DVR/AgentDVR/
and the "default" toggle is set. Several OK buttons then need to be
pressed before the task is complete.
8.10. Configure AgentDVR's Recordings
After a default storage location has been configured, AgentDVR's cameras can begin recording. The "Edit Devices" dialog lists (via the "Edit Devices" item in the Server Menu) the configured cameras. The edit buttons lead to the device settings where the following parameters are set (in the Recording and Storage tabs).
- Recording:
- Mode: Constant
- Encoder: Raw Record Stream
- Max record time: 900 (15 minutes)
- Storage:
- Location: /DVR/AgentDVR
- Folder: Outside
- Storage Management:
- On: yes
- Max Size: 0 (unlimited)
- Max Age: 168 (7 days)
8.11. Restore AgentDVR
When restoring /home/ from a backup copy, the user accounts are
presumably restored as well. Thus /home/agentdvr/AgentDVR/ should
be owned by agentdvr, a user account with disabled/locked password
and a bash shell. Restoration is completed by Ansible when it
installs the system service configuration file and starts the service.
./abbey config dvrs
9. The Abbey TVR Role
The abbey has a few TV tuners and a subscription to Schedules Direct for North American TV broadcast schedules. It uses one (master) MythTV server to make and serve recordings of area broadcasts.
The MythTV backend stores recordings in /home/mythtv/Recordings/
and database dumps in /home/mythtv/Backups/. Apache is
configured to serve MythTV pages at e.g. http://new/mythweb/.
A new TVR machine needs only Cloistering to prepare it for
Ansible. As part of that process, it should be added to the tvrs
group in the hosts file. An existing server can become a TVR
machine by adding it to the tvrs group.
9.1. Include Abbey Variables
Private variables in private/vars-abbey.yml are needed, as in the
abbey-core role. The file path is relative to the playbook's
directory, playbooks/.
roles_t/abbey-tvr/tasks/main.yml
---
- name: Include private abbey variables.
include_vars: ../private/vars-abbey.yml
9.2. Manually Build and Install MythTV
Neither Debian nor the MythTV project provides binary packages of
MythTV. Since PEP668 (error: externally-managed-environment) we
install Debian packages built with the scripts in the MythTV
distribution's Packaging project.
It is assumed the build scripts will install any requisite developer
packages.
cd $top    # $top: the administrator's chosen build area
git clone https://github.com/MythTV/packaging.git \
-b fixes/35 mythtv-v35-packaging
cd mythtv-v35-packaging/deb/
./build-debs.sh fixes/35
dpkg-scanpackages . | gzip --best > Packages.gz
echo "deb [trusted=yes] file://$top/mythtv-v35-packaging/deb ./" \
| sudo tee /etc/apt/sources.list.d/mythtv35.list
sudo apt update
sudo apt install mythtv-backend
9.3. Restore MythTV
Restoring MythTV from a backup copy to a fresh TVR host:
- Apply the TVR role to the new host thus installing build requisites.
- Manually load SQL timezone info.
- Manually build and install (as described above).
- Restore /home/mythtv/. Restore the database from backup.
sudo -u mythtv -i
cd /home/mythtv/
/usr/share/mythtv/mythconverg_restore.pl
The .mythtv/config.xml file should provide the DB particulars (name, user, password).
- Reboot or start the service.
- Configure the backend (as described below).
9.4. Manually Load DB Timezone Info
Starting with MythTV version 0.26, the time zone tables must be loaded
into MySQL. The MariaDB installed by Debian 12 seems to need this
too. The test SQL produced NULL.
SELECT CONVERT_TZ(NOW(), 'SYSTEM', 'Etc/UTC');
After running the following command line, the test SQL produced
e.g. 2022-09-13 20:15:41.
mysql_tzinfo_to_sql /usr/share/zoneinfo | sudo mysql mysql
9.5. Create MythTV Storage Area
The backend does not have a default storage area for its recordings.
A path to an appropriate directory must be set with the mythtv-setup
program (as described below). The abbey uses
/home/mythtv/Recordings/
for MythTV's default storage. This task
creates that directory and ensures it has appropriate permissions.
roles_t/abbey-tvr/tasks/main.yml
- name: Create MythTV storage area.
become: yes
file:
state: directory
dest: /home/mythtv/Recordings
owner: mythtv
group: mythtv
mode: u=rwx,g=rwx,o=rx
9.6. Configure MythTV Backend
With MythTV built and installed, the post-installation tasks
addressed, and mythtv-backend.service started, go to the web page
at http://new:6544 and make the following selections.
- Select MythTV Setup (gear icon in the left sidebar).
- Select "Storage Groups".
- Select "Default" and choose /home/mythtv/Recordings/.
- Select "DB Backups" and choose /home/mythtv/Backups/.
9.7. Configure Tuner
The abbey has a Silicon Dust Homerun HDTV Duo (with two tuners). It
is setup as described in Cloistering, after which the tuner is
accessible by name (e.g. new) on the cloister network. Assuming
ping -c1 new works, the tuner should be accessible via the
hdhomerun_config_gui command, a graphical interface contributed to
Debian by Silicon Dust and found in the hdhomerun-config-gui
package. The program, run with the command hdhomerun_config_gui,
will broadcast on the localnet to find any Homeruns there, but the new
tuner's domain name or IP address can also be entered.
9.8. Add HDHomerun and Mr.Antenna
In MythTV Setup:
- Choose "Capture cards".
- Choose "(Add Capture Card)", then the "New Capture Card".
- Choose Card Type and select "HDHomeRun Networked Tuner".
- Press the right arrow key to see card type parameters. Choose the tuner's address, which should be listed assuming the tuner and TVR are on the same subnet (e.g. the private Ethernet).
- Save and Exit (via Escape key).
- Choose "Video sources".
- Choose "(New Video Source)", then the "New Video Source".
- Enter video source name "Mr.Antenna".
- Choose listings grabber "Schedules Direct JSON API (xmltv)".
- Save and Exit.
- Choose "Input Connections".
- Choose the HDHomeRun.
- Choose video source "Mr.Antenna".
- Save and Exit.
- Choose "Capture cards".
- Add a second HDHomeRun as above.
- Save and Exit.
- Choose "Input connections".
- Connect the second HDHomeRun to Mr.Antenna as above.
- Save and Exit.
- Exit MythTV Setup or continue directly to Scan for New Channels. In
any case, do not run
mythfilldatabase.
9.9. Scan for New Channels
In MythTV Backend, the website on Core's port 6544, e.g.
http://malastare.birchwood.private:6544/:
- Choose "MythTV Setup" (the gear) from the left sidebar.
- Choose "Enable Updates" (at the top of the page).
- Choose "Channel Editor" from the top tab bar.
- Press "Delete".
- Choose "Input Connections" from the top tab bar.
- Choose (unfold) "HDHomeRun => Mr.Antenna".
- Press "+ Scan for Channels".
- Choose options? Eventually press "Scan"? And wait.
- Choose to import all.
- Choose "Restart Backend Full Operation".
9.10. Configure XMLTV
The xmltv package, specifically its tv_grab_zz_sdjson program, is
used to download broadcast listings from Schedules Direct. The
program is run by the mythtv user (like mythtv-setup) and is
initially configured using its --configure option.
tv_grab_zz_sdjson --configure
cp ~/.xmltv/tv_grab_zz_sdjson.conf ~/.mythtv/Mr.Antenna.xmltv
The --configure command above prompts with many questions and
creates ~/.xmltv/tv_grab_zz_sdjson.conf, which is copied to
~/.mythtv/Mr.Antenna.xmltv where mythfilldatabase will find it.
Afterwards any re-configuration should use the following command.
tv_grab_zz_sdjson --configure \
--config-file ~/.mythtv/Mr.Antenna.xmltv
Here is a transcript of a session with tv_grab_zz_sdjson. Note that
the list of "inputs" available in a postal code typically ends with
the OTA (over the air) broadcasts.
$ tv_grab_zz_sdjson --configure --config-file .mythtv/Mr.Antenna.xml
Cache file for lineups, schedules and programs.
Cache file: [/home/mythtv/.xmltv/tv_grab_zz_sdjson.cache]
If you are migrating from a different grabber selecting an alternate
channel ID format can make the migration easier.
Select channel ID format:
0: Default Format (eg: I12345.json.schedulesdirect.org)
1: tv_grab_na_dd Format (eg: I12345.labs.zap2it.com)
2: MythTV Internal DD Grabber Format (eg: 12345)
Select one: [0,1,2 (default=0)]
As the JSON data only includes the previously shown date normally the
XML output should only have the date. However some programs such as
older versions of MythTV also need a time.
Select previously shown format:
0: Date Only
1: Date And Time
Select one: [0,1 (default=0)]
Schedules Direct username.
Username: USERNAME
Schedules Direct password.
Password: PASSWORD
** POST https://json.schedulesdirect.org/20141201/token ==> 200 OK
** GET https://json.schedulesdirect.org/20141201/status ==> 200 OK (
** GET https://json.schedulesdirect.org/20141201/lineups ==> 200 OK
This step configures the lineups enabled for your Schedules Direct
account. It impacts all other configurations and programs using the
JSON API with your account. A maximum of 4 lineups can by added to
your account. In a later step you will choose which lineups or
channels to actually use for this configuration.
Current lineups enabled for your Schedules Direct account:
#. Lineup ID      | Name                         | Location | Transport
1. USA-OTA-57719  | Local Over the Air Broadcast | 57719    | Antenna
Edit account lineups: [continue,add,delete (default=continue)]
Choose whether you want to include complete lineups or individual
channels for this configuration.
Select mode: [lineups,channels (default=lineups)]
** GET https://json.schedulesdirect.org/20141201/lineups ==> 200 OK
Choose lineups to use for this configuration.
USA-OTA-57719 [yes,no,all,none (default=no)] all
Once configured, the mythfilldatabase program should be able to use
tv_grab_zz_sdjson to connect to Schedules Direct and download the
chosen line-up. However mythfilldatabase is happiest when the
backend is running, so it is not run until then.
9.11. Debug XMLTV
If the mythfilldatabase command fails or expected listings do not
appear, more information is available by adding the --verbose
option. The --help option also reveals much, including a --manual
option for "interactive configuration".
sudo -H -u mythtv mythfilldatabase --verbose
The command might, for example, show that it is failing to run a
tv_grab_zz_sdjson command like the following.
nice tv_grab_zz_sdjson \
--config-file '/home/mythtv/.mythtv/Mr.Antenna.xmltv' \
--output /tmp/myths5Sq35 --quiet
Running a similar command (without --quiet) might be more revealing.
sudo -H -u mythtv \
tv_grab_zz_sdjson \
--config-file '/home/mythtv/.mythtv/Mr.Antenna.xmltv' \
--output /tmp/mythFUBAR
9.12. Change Broadcast Area
The abbey changes location almost weekly, so its HDTV broadcast area changes frequently. At the start of a long stay the administrator uses the MythTV Setup program to scan for the new area's channels, as described in Scan for New Channels.
To change MythTV's "listings", the administrator needs the new area's
postal code and the username and password of the abbey's Schedules
Direct account. The administrator then runs the tv_grab_zz_sdjson
program as user mythtv.
tv_grab_zz_sdjson --configure \
--config-file ~/.mythtv/Mr.Antenna.xmltv
The program will prompt for the zip code and offer a list of "inputs" available in that area, as described in Configure XMLTV.
Lastly, the administrator runs an immediate update (again as the
mythtv user).
mythfilldatabase
If the command fails, consult Debug XMLTV. Else, the listings appear in MythTV Backend's "Program Guide" page.
10. The Ansible Configuration
The abbey's Ansible configuration, like that of A Small Institute, is
kept on an administrator's notebook. The private SSH key that allows
remote access to privileged accounts on all abbey servers is kept on
an encrypted, off-line volume plugged into the administrator's
notebook only when running ./abbey commands.
The small institute provided examples of both public and private
variables. This document includes the abbey's actual public
variables, and examples of the private variables. As in A Small
Institute, this document's roles tangle into roles_t/, separate from
the running (and perhaps recently debugged!) code in roles/.
The configuration of a small institute is included as a git sub-module
in Institute/. Its roles are included in the roles_path setting in
ansible.cfg. Its example hosts inventory and its public/ and
private/ directories are not included; they are replaced by abbey
specific versions.
NOTE: if you have not read at least the Overview of A Small Institute you are lost.
The Ansible configuration:
ansible.cfg
- The Ansible configuration file.
hosts
- The inventory of hosts.
playbooks/site.yml
- The play that assigns roles to hosts.
public/
- Variables, certificates.
public/vars.yml
- The institutional variables.
private/
- Sensitive variables, files, templates.
private/vars.yml
- Sensitive institutional variables.
private/vars-abbey.yml
- Sensitive liturgical variables.
roles/
- The running copy of roles_t/.
roles_t/
- The liturgical roles as tangled from this document.
Institute/roles/
- The running copy of Institute/roles_t/.
Institute/roles_t/
- The institutional roles as tangled from Institute/README.org.
The first three files in the list are included in this chapter. The
rest are built up piecemeal by (tangled from) this document,
README.org, and Institute/README.org.
10.1. ansible.cfg
This is much like the example (test) institutional configuration file,
except the roles are found in Institute/roles/ as well as roles/.
ansible.cfg
[defaults]
interpreter_python=/usr/bin/python3
vault_password_file=Secret/vault-password
inventory=hosts
roles_path=roles:Institute/roles
10.2. hosts
hosts
all:
vars:
ansible_user: sysadm
ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
hosts:
# The Main Servers: Front, Gate and Core.
droplet:
ansible_host: 159.65.75.60
ansible_become_password: "{{ become_droplet }}"
anoat:
ansible_host: anoat.birchwood.private
ansible_become_password: "{{ become_anoat }}"
malastare:
ansible_host: malastare.birchwood.private
ansible_become_password: "{{ become_malastare }}"
# Campus
kessel:
ansible_host: kessel.birchwood.private
ansible_become_password: "{{ become_kessel }}"
dantooine:
ansible_host: dantooine.birchwood.private
ansible_become_password: "{{ become_dantooine }}"
# Notebooks
endor:
ansible_host: endor.birchwood.private
ansible_become_password: "{{ become_endor }}"
sullust:
ansible_host: 127.0.0.1
ansible_become_password: "{{ become_sullust }}"
postfix_mydestination: >-
sullust.birchwood.private
sullust
sullust.localdomain
localhost.localdomain
localhost
children:
front:
hosts:
droplet:
gate:
hosts:
anoat:
core:
hosts:
malastare:
campus:
hosts:
anoat:
dantooine:
kessel:
dvrs:
hosts:
dantooine:
tvrs:
hosts:
malastare:
webtvs:
hosts:
dantooine:
kessel:
notebooks:
hosts:
endor:
sullust:
builders:
hosts:
dantooine:
endor:
kessel:
sullust:
10.3. playbooks/site.yml
This playbook provisions the entire network by applying first the institutional roles, then the liturgical roles.
playbooks/site.yml
---
- name: Configure All
hosts: all
roles: [ all ]
- name: Configure Front
hosts: front
roles: [ front, abbey-front ]
- name: Configure Gate
hosts: gate
roles: [ gate, abbey-gate ]
- name: Configure Core
hosts: core
roles: [ core, abbey-core ]
- name: Configure Cloister
hosts: campus
roles: [ campus, abbey-cloister ]
- name: Configure DVRs
hosts: dvrs
roles: [ abbey-dvr ]
- name: Configure TVRs
hosts: tvrs
roles: [ abbey-tvr ]
11. The Abbey Commands
The ./abbey script encodes the abbey's canonical procedures. It
includes The Institute Commands and adds a few abbey-specific
sub-commands.
11.1. Abbey Command Overview
Institutional sub-commands:
- config
- Check/Set the configuration of one or all hosts.
- new
- Create system accounts for a new member.
- old
- Disable system accounts for a former member.
- pass
- Set the password of a current member.
- client
- Register WireGuard™ public keys for a member's device.
Liturgical sub-commands:
- tz
- Run timedatectl set-timezone on cloister servers.
- upgrade
- Run apt update; apt full-upgrade --autoremove on all hosts.
- reboots
- Look for /run/reboot* on all hosts.
- versions
- Report ansible_distribution, _distribution_version, and
_architecture for all hosts.
- facts
- Update (clobber!) facts.
11.2. Abbey Command Script
The script begins with the following prefix and trampolines.
abbey
#!/usr/bin/perl -w
#
# DO NOT EDIT. This file was tangled from README.org.
use strict;
if (@ARGV && grep { $_ eq $ARGV[0] } qw(CA config new old pass client)) {
exec "./Institute/inst", @ARGV;
}
The small institute's ./inst command expects to be running in
Institute/, not ./, but it only references public/, private/,
Secret/ and playbooks/check-inst-vars.yml, and will find the abbey
specific versions of these. The roles_path setting in ansible.cfg
effectively merges the institutional roles into the distinctly named
abbey specific roles. The roles likewise reference files with
relative names, and will find the abbey specific private/
directory (named ../private/ relative to playbooks/).
Ansible does not implement a playbooks_path key, so the following
code block "duplicates" the action of the institute's
check-inst-vars.yml.
playbooks/check-inst-vars.yml
- import_playbook: ../Institute/playbooks/check-inst-vars.yml
11.3. The Upgrade Command
The script implements an upgrade sub-command that runs apt update
and apt full-upgrade --autoremove on all abbey managed machines. It
recognizes an optional -n flag indicating that the upgrade tasks
should only be checked. Any other (single, optional) argument must be
a limit pattern. For example:
./abbey upgrade
./abbey upgrade -n
./abbey upgrade core
./abbey upgrade -n core
./abbey upgrade '!front'
abbey
if ($ARGV[0] eq "upgrade") {
shift;
my @args = ( "-e", "\@Secret/become.yml" );
if (defined $ARGV[0] && $ARGV[0] eq "-n") {
shift;
push @args, "--check", "--diff";
}
if (defined $ARGV[0]) {
my $limit = $ARGV[0];
shift;
die "illegal characters: $limit"
if $limit !~ /^!?[a-z][-a-z0-9,!]+$/;
push @args, "-l", $limit;
}
exec ("ansible-playbook", @args, "playbooks/upgrade.yml");
}
playbooks/upgrade.yml
- hosts: all
tasks:
- name: Upgrade packages.
become: yes
apt:
update_cache: yes
upgrade: full
autoremove: yes
purge: yes
autoclean: yes
- name: Check for /run/reboot-required.
stat:
path: /run/reboot-required
no_log: true
register: st
- debug:
msg: Reboot required.
when: st.stat.exists
11.4. The Reboots Command
The script implements a reboots sub-command that looks for
/run/reboot-required on all abbey managed machines.
abbey
if ($ARGV[0] eq "reboots") {
exec ("ansible-playbook", "-e", "\@Secret/become.yml",
"playbooks/reboots.yml");
}
playbooks/reboots.yml
---
- hosts: all
tasks:
- stat:
path: /run/reboot-required
register: st
- debug:
msg: Reboot required.
when: st.stat.exists
11.5. The Versions Command
The script implements a versions sub-command that reports the
operating system version of all abbey managed machines.
abbey
if ($ARGV[0] eq "versions") {
exec ("ansible-playbook", "-e", "\@Secret/become.yml",
"playbooks/versarch.yml");
}
playbooks/versarch.yml
- hosts: all
tasks:
- debug:
msg: >-
{{ ansible_distribution }}
{{ ansible_distribution_version }}
{{ ansible_architecture }}
11.6. The Facts Command
The script implements a facts sub-command that collects the Ansible
"facts" from all hosts and writes them, in JSON format, to the
facts file.
abbey
if ($ARGV[0] eq "facts") {
my $line = ("ansible all -m gather_facts -e \@Secret/become.yml"
. " >facts");
print "$line\n";
my $status = system $line;
die "status: $status\nCould not run $line: $!\n" if $status != 0;
exit;
}
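The facts file is not a single JSON document but a series of per-host records in ansible's ad hoc output style, so line-oriented tools suit it better than jq. A hedged sketch of consulting it (the sample record below is hypothetical; the real file is written by ./abbey facts):

```shell
# Create a tiny, hypothetical sample in ansible's ad hoc output style:
# "host | SUCCESS => { ...JSON facts... }".
cat > facts.sample <<'EOF'
malastare | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Debian",
        "ansible_distribution_version": "12"
    }
}
EOF
# List each host with its distribution version: remember the host from
# the "| SUCCESS" header line, print it with the quoted version value.
awk -F'"' '/ \| SUCCESS/ { host = $0; sub(/ \|.*/, "", host) }
           /ansible_distribution_version/ { print host, $4 }' facts.sample
```

The same awk, pointed at facts instead of facts.sample, gives a quick per-host version report without re-running Ansible.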
11.7. The TZ Command
The abbey changes location almost weekly, so its timezone changes occasionally. Droplet does not move, and Gate and other simple servers are kept in UTC. Core, the DVRs, TVRs, Home Assistant and the desktops all want updating to the current local timezone. Home Assistant and the desktops are managed manually, but the rest can all be updated using Ansible.
The tz sub-command runs the timezone.yml playbook, which uses the
current timezone/city on the administrator's notebook and updates
Core, the DVRs and TVRs. Each runs timedatectl set-timezone and
restarts the affected services.
This is an experimental playbook until it is used/tested with separate
machines hosting the DVR and TVR services. It assumes that, in the
later plays, each host sees the new_tz result it registered in the
first play, not the result registered by the last host of that play.
abbey
if ($ARGV[0] eq "tz") {
exec ("ansible-playbook", "-e", "\@Secret/become.yml",
"playbooks/timezone.yml");
}
playbooks/timezone.yml
---
- hosts: core, dvrs, tvrs, webtvs
tasks:
- name: Get timezone.
command: date '+%Z'
delegate_to: localhost
changed_when: false
check_mode: false
register: zone
- name: Get city.
shell: readlink /etc/localtime | sed 's,/usr/share/zoneinfo/,,'
delegate_to: localhost
changed_when: false
check_mode: false
register: city
- name: Update timezone.
become: yes
command: timedatectl set-timezone {{ city.stdout }}
when: ansible_date_time.tz != zone.stdout
register: new_tz
- hosts: dvrs
tasks:
- name: Restart AgentDVR.
become: yes
systemd:
service: AgentDVR
state: restarted
when: new_tz.changed
- hosts: tvrs
tasks:
- name: Restart MythTV.
become: yes
systemd:
service: "{{ item }}"
state: restarted
loop: [ mysql, mythtv-backend ]
when: new_tz.changed
11.8. Abbey Command Help
abbey
my $ops = ("config,new,old,pass,client,"
."upgrade,reboots,versions,facts,tz");
die "usage: $0 [$ops]\n";
12. Cloistering
This is how a new machine is brought into the cloister. The process is initially quite different depending on the device type but then narrows down to the common preparation of all machines administered by Ansible.
12.1. IoT Devices
A wireless IoT device (smart TV, Blu-ray deck, etc.) cannot install Debian nor even the WireGuard™ For Android app. And it shouldn't. As an untrustworthy bit of kit, it should have no access to the cloister, merely the Internet. It need not appear in the Ansible inventory.
IoT devices trusted enough to be patched to the cloister Ethernet (IP
cameras, TV Tuners, etc.) are added to /etc/dhcp/dhcpd.conf and
given a private domain name as described in the following steps.
Wireless IoT devices are manually configured with the cloister Wi-Fi password and may be given a private domain name as described in the last step.
12.2. Raspberry Pis
The abbey's Raspberry Pi runs the Raspberry Pi OS desktop off an NVMe SSD. A fresh install should go something like this:
- Write the disk image, 2023-12-05-raspios-bookworm-arm64.img.xz, to the SSD and plug it into the Pi. Leave the µSD card socket empty.
- Attach an HDMI monitor, a USB keyboard/mouse, and the cloister Ethernet, and power up.
- Answer first-boot installation questions:
- Language: English (USA)
- Keyboard: English (USA)
- root password: <blank>
- new user name: System Administrator
- new username: sysadm
- new password: <password>
- Add to Core DHCP
- Create Wired Domain Name
- Log in as sysadm on the console.
- Run sudo raspi-config and use the following menu items.
- S4 Hostname (Set name for this computer on a network): new
- I1 SSH (Enable/disable remote command line access using SSH): enable
- A1 Expand Filesystem (Ensures that all of the SD card is available)
- Update From Cloister Apt Cache
- Authorize Remote Administration
- Configure with Ansible
If the Pi is going to operate wirelessly, the following additional steps are taken.
12.3. PCs
Most of the abbey's machines, like Core and Gate, are general-purpose PCs running Debian. The process of cloistering these machines follows.
- Write the disk image, e.g. debian-12.11.0-amd64-netinst.iso, to a USB drive and connect it to the PC.
- Connect an HDMI monitor, a USB keyboard/mouse, and the cloister Ethernet, and power up. Choose to boot from the USB drive.
- Answer first-boot installation questions as detailed in the preparation of A Test Machine for a Small Institute.
- Add to Core DHCP
- Create Wired Domain Name
- Log in as sysadm on the console.
- Update From Cloister Apt Cache
- Install OpenSSH, unless it was already included in the initial Software selection during the Debian installation. Run the following if unsure.
sudo apt install openssh-server
- Authorize Remote Administration
- Configure with Ansible
If the PC is going to operate wirelessly, the following additional steps are taken.
12.4. Add to Core DHCP
When a new machine is connected to the cloister Ethernet, its MAC address must be added to Core's DHCP configuration. Core does not provide network addresses to new devices automatically.
IoT devices (IP cameras, HDTV tuners, etc.) often have their MAC
address printed on their case or mentioned in a configuration page.
The MAC address must also appear in the device's DHCP Discover
broadcasts, which are logged to /var/log/daemon.log on Core. As a
last (or first!) resort, the following command line should reveal the
new device's MAC.
tail -100 /var/log/daemon.log | grep DISCOVER
With the new device's Ethernet MAC in hand, a stanza like the
following is added to the bottom of private/core-dhcpd.conf. The IP
address must be unique. Typically the next host number after the last
entry is chosen.
host new {
    hardware ethernet 08:00:27:f3:41:66;
    fixed-address 192.168.56.4;
}
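Choosing "the next host number after the last entry" can be scripted. A minimal sketch (next_host is a hypothetical helper, and it assumes all fixed addresses share one /24):

```shell
# Print the next free host number in a dhcpd.conf: one past the
# highest fixed-address already assigned.  Assumes a single /24.
next_host() {
    grep -o 'fixed-address [0-9.]*' "$1" \
        | awk -F'[ .]' '{ if ($5 + 0 > max) max = $5 } END { print max + 1 }'
}
```

Running next_host private/core-dhcpd.conf then suggests the host number for the new stanza.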
The DHCP service is then restarted (not reloaded).
sudo systemctl restart isc-dhcp-server
Soon after this the device should be offered a lease for its IP
address, 192.168.56.4. It might be power cycled to speed up the
process.
When successful, the following command shows the device is accessible,
reporting 1 packets transmitted, 1 received, 0% packet loss....
ping -c1 192.168.56.4
12.5. Create Wired Domain Name
A wired device is assigned an IP address when it is added to Core's
DHCP configuration (as in Add to Core DHCP). A private domain name is
then associated with this address. If the device is intended to
operate wirelessly, the name for its address is modified with a -w
suffix. Thus new-w.small.private would be the name of the new
device while it is temporarily connected to the cloister Ethernet, and
new.small.private would be its "normal" name used when it is on the
cloister Wi-Fi.
The private domain name is created by adding a line like the following
to private/db.domain and incrementing the serial number at the top
of the file.
new-w IN A 192.168.56.4
The reverse mapping is also created by adding a line like the
following to private/db.private and incrementing the serial number
at the top of that file.
4 IN PTR new-w.small.private.
After ./abbey config core updates Core, resolution of the new-w
name can be tested.
resolvectl query new-w.small.private.
resolvectl query 192.168.56.4
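Incrementing a zone file's serial number is easy to forget, so it can be scripted. A hedged sketch, assuming the serial line carries a "; Serial" comment as BIND zone files conventionally do (bump_serial is a hypothetical helper):

```shell
# Bump the serial number in a BIND zone file.  Assumes the serial is
# the first field of the line marked with a "; Serial" comment.
bump_serial() {
    zone="$1"
    old=$(awk '/; Serial/ { print $1; exit }' "$zone")
    new=$((old + 1))
    sed -i "s/$old\\([[:space:]]*; Serial\\)/$new\\1/" "$zone"
    echo "$zone: $old -> $new"
}
```

Applied as bump_serial private/db.domain (and again for private/db.private) right after adding the new records.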
12.6. Update From Cloister Apt Cache
- Log in as sysadm on the console.
- Create /etc/apt/apt.conf.d/01proxy.
D=apt-cacher.small.private
echo "Acquire::http::Proxy \"http://$D:3142\";" \
| sudo tee /etc/apt/apt.conf.d/01proxy
- Update the system and reboot.
sudo apt update
sudo apt full-upgrade --autoremove
sudo reboot
12.7. Authorize Remote Administration
To remotely administer new-w, Ansible must be authorized to login as
sysadm@new-w without a login password, using an SSH key pair. This is
accomplished by copying Ansible's SSH public key to new-w.
scp Secret/ssh_admin/id_rsa.pub sysadm@new-w:admin_key
Then on new-w (logged in as sysadm) the public key is installed in
~sysadm/.ssh/authorized_keys.
( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
Now the administrator can test access to new-w using Ansible's SSH
key.
ssh -i Secret/ssh_admin/id_rsa sysadm@new-w
12.8. Configure with Ansible
With remote administration authorized and tested (as in Authorize
Remote Administration), and the machine connected to the cloister
Ethernet, the configuration of new-w can be completed by Ansible.
Note that if the machine is staying on the cloister Ethernet, its
domain name will be new (having had no -w suffix added).
First new-w is added to Ansible's inventory in hosts. A new-w
section is added to the list of all hosts, and an empty section of the
same name is added to the list of campus hosts. If the machine uses
the usual privileged account name, sysadm, the ansible_user key is
not needed.
hosts:
...
new-w:
ansible_user: pi
ansible_become_password: "{{ become_new }}"
...
children:
...
campus:
hosts:
...
new-w:
If the sudo command on new-w never prompts sysadm for a
password, then the ansible_become_password setting is also not
needed. Otherwise, the password is added to Secret/become.yml as
shown below.
echo -n "become_new: " >>Secret/become.yml
ansible-vault encrypt_string PASSWORD >>Secret/become.yml
Finally the ./abbey config new-w command is run. It will install
several additional software packages and change several more
configuration files.
./abbey config new-w
12.9. Connect to Cloister Wi-Fi
On an IoT device, or a Debian or Android "desktop", the cloister Wi-Fi
name and password are entered manually. Once the device is connected,
its Wi-Fi IP address may be discovered in its network settings, and
perhaps via the access point's local domain, e.g. as new.lan on a
desktop connected to the cloister Wi-Fi.
Wireless Debian machines use ifupdown configured with a short
/etc/network/interfaces.d/wifi drop-in. In this example, the Wi-Fi
interface on new is named wlan0.
/etc/network/interfaces.d/wifi
auto wlan0
iface wlan0 inet dhcp
wpa-ssid "Birchwood Abbey"
wpa-psk "PASSWORD"
Once the sudo ifup wlan0 command is successful, the machine will get
an IP address on the access point's local network (revealed by the
command ip addr show dev wlan0).
The new Wi-Fi IP address, e.g. 192.168.10.225, should be tested on a
desktop connected to the Wi-Fi using the following ping command.
ping -c1 192.168.10.225
12.10. Connect to Cloister VPN
Wireless devices with the cloister Wi-Fi password get an IP address and a default route to the Internet with no special configuration of either the device or the access point; any Wi-Fi access point, e.g. one built into a cable modem, will do. The abbey's networks, however, are accessible only via the cloister VPN.
Connections to the cloister VPN are authorized by the ./abbey
client... command (aka The Client Command), which registers a new
client's public key and installs new WireGuard™ configurations on the
servers. Private keys are kept on the clients (e.g. in
/etc/wireguard/private-key).
12.10.1. Campus Desktops and Servers
Wireless Debian desktops (with NetworkManager) as well as servers
(without NetworkManager) are configured to automatically connect to
the cloister Wi-Fi and VPN, and so can be used much like a wired
desktop machine. They are typically connected to a large TV and
auto-login to an unprivileged account named house, i.e. anyone in
the house. Our campus desktops include an 8GB Core i3 NUC (Intel®'s
Next Unit of Computing) and an 8GB Raspberry Pi 4 with SSD storage
running Pop!_OS and Raspberry Pi OS desktops, respectively. They are
authorized to connect to the campus VPN via the following process.
- The administrator first creates a wifi file like the following (in which the wireless network device is named wlan0).
auto wlan0
iface wlan0 inet dhcp
wpa-ssid "Birchwood Abbey"
wpa-psk "PASSWORD"
- Then the wifi file is installed and the network interface brought up.
sudo cp wifi /etc/network/interfaces.d/
sudo ifup wlan0
- Next, the administrator generates a pair of WireGuard™ keys.
sudo apt install wireguard
wg genkey | sudo tee /etc/wireguard/private-key >/dev/null
sudo cat /etc/wireguard/private-key | wg pubkey >server.pub
- The client's name and public key are then registered via the ./abbey client command, and the resulting details are copied to the client.
scp sysadm@new-w:server.pub ./
./abbey client campus new `cat server.pub`
scp campus.conf sysadm@new-w:
- The details are copied to /etc/wireguard/wg0.conf on the client and the service started.
sudo cp campus.conf /etc/wireguard/wg0.conf
sudo systemctl start wg-quick@wg0
systemctl status wg-quick@wg0
- The client can then be unplugged from the cloister Ethernet, and connect to the campus Wi-Fi (if not already).
- Finally the connection to the VPN is tested and, if OK, is "enabled" (to start at boot time).
ping -c1 core
sudo wg show
sudo systemctl enable wg-quick@wg0
12.10.2. Private Desktops
Member notebooks are private machines not remotely administered by the abbey. These machines roam, and so are authorized to connect both to the cloister VPN and to the public VPN. They are authorized via the following process.
- The owner of the Debian desktop machine should have already connected to the campus Wi-Fi using the GUI of NetworkManager.
- The owner thus begins by generating a pair of WireGuard™ keys on the client, sending the public key to the administrator.
sudo apt install wireguard
wg genkey | sudo tee /etc/wireguard/private-key >/dev/null
sudo cat /etc/wireguard/private-key | wg pubkey >dick.pub
( echo "Subject: new client named dick"
  echo
  cat dick.pub ) | sendmail sysadm@small.example.org
- The administrator runs the ./abbey client command and replies with the generated configurations.
./abbey client debian dick dick `cat dick.pub`
( echo "Subject: dick now authorized"
  echo
  cat campus.conf
  echo --------
  cat public.conf ) | sendmail dick
- The owner saves the configuration details in campus.conf and public.conf, then installs them and starts the campus VPN service.
sudo cp campus.conf /etc/wireguard/wg0.conf
sudo cp public.conf /etc/wireguard/wg1.conf
sudo systemctl start wg-quick@wg0
systemctl status wg-quick@wg0
- Finally the owner checks that the client has successfully connected to the campus VPN and, if it has, enables the service.
systemctl status wg-quick@wg0
ping -c1 core
sudo systemctl enable wg-quick@wg0
The owner will want to test the public VPN connection as well by taking the Debian desktop off the campus Wi-Fi and getting it Internet access some other way (perhaps tethered to a cell phone). Then the following commands will switch to the public VPN and test it.
sudo systemctl stop wg-quick@wg0
sudo systemctl start wg-quick@wg1
ping -c1 core
This leaves wg-quick@wg0 enabled. The campus VPN is re-connected if
the machine reboots.
Note that a new member's notebook does not need to be patched to the
cloister Ethernet nor connected to the cloister Wi-Fi. It can be
authorized "remotely" simply by copying the .conf text files to the
machine by whatever means is available.
The members of A Small Institute are peers, and enjoy complete, individual privacy. The administrator does not expect to have "root access" to members' machines, their desktops, personal diaries and photos. The monks of the abbey are brothers, and tolerate a little less than complete individual privacy (still expecting all necessary and appropriate privacy, being in a position to punish deviants).
Our private notebooks are included in the Ansible inventory, mainly so
they can be included in the weekly (or more frequent!) network
upgrades. The campus and abbey-cloister roles are not applied
though their Postfix and other configurations are recommended. Remote
access by the administrator is authorized and the privileged account's
password is included in Secret/become.yml.
12.10.3. Android
Android phones and tablets are authorized to connect to the cloister
and public VPNs via the following process. Note that they do not
appear in the set of campus hosts, are not configured by Ansible,
and do not appear in the host inventory.
- The owner of the Android device creates a WireGuard™ key pair with the WireGuard™ for Android app, and texts the public key to the administrator.
- The administrator runs the ./abbey client command and replies with the generated configurations.
./abbey client android dicks-razr dick <client public key>
( echo "Subject: dicks-razr now authorized"
  echo
  cat campus.conf
  echo --------
  cat public.conf ) | sendmail owner
- The owner enters the details of the two WireGuard™ subnets into the app, creating two tunnels. These are turned on and off depending on whether the Android is connecting to the campus or public VPN.
12.11. Create Wireless Domain Name
A wireless machine is assigned a Wi-Fi address when it connects to the
cloister Wi-Fi, and a host number when it is registered. Given the
host number (e.g. 7), a private domain name
(e.g. new.small.private) can be associated with that host number on
the cloister VPN subnet, e.g. 10.84.138.7. The administrator adds a
line like the following to private/db.domain and increments the
serial number at the top of the file.
new IN A 10.84.138.7
The administrator also creates the reverse mapping by adding a line
like the following to private/db.campus_vpn and incrementing the
serial number at the top of that file.
7 IN PTR new.small.private.
After ./abbey config core updates Core, the administrator can test
resolution of the new name.
resolvectl query new.small.private.
resolvectl query 10.84.138.7
A wireless device with no Ethernet interface and unable to run
WireGuard™ gets just a Wi-Fi address. It can be given a private
domain name (e.g. thing.small.private) associated with its Wi-Fi
address (e.g. 192.168.10.225), but a reverse lookup on a machine
connected to the Wi-Fi may yield a name like thing.lan (provided by
the access point) while elsewhere (e.g. on the cloister Ethernet) the
IP address will not resolve at all. (There is no "reverse mapping" to
be added to private/db.campus_vpn
.)