Birchwood Abbey Networks
The abbey's network services are configured by Ansible scripts based
on A Small Institute. The institutional roles like core, gate and
front are intended for general use and so are kept free of abbey
idiosyncrasies. The roles herein are abbey specific, emphasized by
the abbey- prefix on their names. These roles are applied after the
generic institutional roles and, like them, are documented here.
1. Overview
A Small Institute makes security and privacy top priorities but Birchwood Abbey approaches these from a particularly Elvish viewpoint. Elves depend for survival on speed, agility, and concealment. Working toward those ends (esp. the last) Birchwood Abbey's network topology was designed to look like that of an average Amerikan household. Korporate Amerika expects our ISP to provide us with a Wi-Fi/router/modem that all of our appliances can use to communicate amongst themselves in a cliquey, New World Order IoT kumbaya. We dare not disappoint.
Thus Samsung (our refrigerator) is able to browse for our printer or connect to Kroger (our grocer) or Kaiser (our health care provider) for whatever reason (presumably to report on our eating habits). The only suspicious character in this Amerikan household will be Gate, a Raspberry Pi passing many encrypted packets. Thus when the New World Police come a-knock'n (i.e. after they kick the door and kill the dog) we might still hold onto some plausible deniability.
To look like our neighbors we sit between our smart TVs and our smart refrigerators and consciously play the flaccid consumer, streaming Amazon and watching Blu-ray discs. This works because we have preserved a means of escape. We may not be able to hide our entertainment choices nor even our eating habits anymore, but we can still retreat into private correspondence between Inner Citadels.
The small institute tries to look "normal" too, so the abbey's network map is very similar, with differences mainly in terminology, philosophy, and attitude.
                        |
                      = _|||_
                  ----- The Temple-----
                  =                   =
                  =                   =
                  =                   =
                  =                   =
                  =====-Front-=====
                        |
                -----------------
               (                 )
               ( The Internet(s) )----(Hotel Wi-Fi)
               (                 )        |
                -----------------         +----Monk's notebook abroad
                        |
=============== | ==================================================
Premises        |
           (House ISP)
                |
                +----Monk's notebook in the house
                +----Samsung refrigerator
                +----Sony Bluray
                +----Lexmark printer
                |
                +----(House Wi-Fi)
                |
Game of Thrones |
=============== Gate ===============================================
Cloister        |
                +----Ethernet switch
                       |
                       +----Core
                       +----Security DVR
                       +----IP camera(s)
                       +----HDTV TVR
                       +----WebTV
2. The Abbey Particulars
The abbey's public particulars are included below. They are the public particulars of a small institute, nothing more.
public/vars.yml
---
domain_name: birchwood-abbey.net
full_name: Birchwood Abbey
front_addr: 159.65.75.60
The abbey's private institutional parameters are in
private/vars.yml. Example lines can be found in
Institute/private/vars.yml.
The abbey's private liturgical parameters are in
private/vars-abbey.yml. Example lines are included here and tangled
into private_ex/vars-abbey.yml.
3. The Abbey Front Role
Birchwood Abbey's front door is a Digital Ocean Droplet configured as A Small Institute Front. Thus it is already serving a public web site with Apache2, spooling email with Postfix and serving it with Dovecot-IMAPd, and hosting a VPN with OpenVPN.
3.1. Install Emacs
The monks of the abbey are masters of the staff (bo) and Emacs.
roles_t/abbey-front/tasks/main.yml
---
- name: Install Emacs.
  become: yes
  apt: pkg=emacs
3.2. Configure Public Email Aliases
The abbey uses several additional email aliases. These are the public
mailboxes @birchwood-abbey.net. The institute already funnels the
common mailboxes like postmaster and admin into root, and root into
the machine's privileged account (sysadm). The abbey takes it from
there, forwarding sysadm to a real person.
roles_t/abbey-front/tasks/main.yml
- name: Install abbey email aliases.
  become: yes
  blockinfile:
    block: |
      sysadm: matt
      keymaster: root
      codemaster: matt
      all: matt, lori, erica
      elders: matt, lori
      rents: elders
      puck: matt
      abbess: lori
    dest: /etc/aliases
    marker: "# {mark} ABBEY MANAGED BLOCK"
  notify: New aliases.
roles_t/abbey-front/handlers/main.yml
---
- name: New aliases.
  become: yes
  command: newaliases
3.3. Configure Git Daemon on Front
The abbey publishes member Git repositories with git-daemon. If
Dick (a member of A Small Institute) builds a Foo project Git
repository in ~/foo/, he can publish it to the campus by
symbolically linking its .git/ into ~/Public/Git/ on Core. If the
repository is world readable and contains a git-daemon-export-ok
file, it will be served at git://www/~dick/foo.
touch ~/foo/.git/git-daemon-export-ok
ln -s ~/foo/.git ~/Public/Git/foo
chmod -R o+r ~/foo/.git
find ~/foo/.git -type d -print0 | xargs -0 chmod o+rx
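The four steps above can be collected into a small helper. This is only a sketch of the manual procedure, not part of the abbey's roles; the publish_git name is invented here.

```shell
# publish_git REPO: mark REPO public and link it into ~/Public/Git/.
# A sketch of the manual steps above; the function name is invented.
publish_git() {
    repo=$1
    name=$(basename "$repo")
    # Let git-daemon export the repository.
    touch "$repo/.git/git-daemon-export-ok"
    # Link it into the directory served on campus.
    ln -sf "$repo/.git" "$HOME/Public/Git/$name"
    # Make the repository world readable (and directories searchable).
    chmod -R o+r "$repo/.git"
    find "$repo/.git" -type d -print0 | xargs -0 chmod o+rx
}
```

Running publish_git ~/foo then reproduces the commands shown above.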
User repositories can be made available to the public at a URL like
git://small.example.org/~dick/foo by copying them to the same path on
Front (~dick/Public/Git/foo/). The following rsync command creates
or updates such a copy.
rsync -av ~/foo/.git/ small.example.org:Public/Git/foo/
Note that Dick's Git repository, mirrored to Front (or Core), does not
need to be backed up, assuming Dick's home directory (including
~/foo/) is. If updates are git-pushed to a repository on Front,
regular backups should be made, but this is Dick's responsibility.
There are no regular, system backups on Front. A copy on Front can be
pulled back with a command like the following.
rsync -av --del small.example.org:Public/foo/ ~/Public/foo/
With SystemD and the git-daemon-sysvinit package installed, SystemD
supervises a git-daemon service unit launched with
/etc/init.d/git-daemon. The old SysV init script gets its
configuration from the customary /etc/default/git-daemon file. The
script then constructs the appropriate git-daemon command. The
git-daemon(1) manual page explains the command options in detail.
As explained in /usr/share/doc/git-daemon-sysvinit/README.Debian,
the service must be enabled by setting GIT_DAEMON_ENABLE to true.
The base path is also changed to agree with gitweb.cgi.
User repositories are enabled by adding a user-path option and
disabling the default whitelist. To specify an empty whitelist, the
default (a list of one directory: /var/lib/git) must be avoided by
setting GIT_DAEMON_DIRECTORY to a blank (not empty) string.
The code below is included in both the Front and Core configurations,
which should be nearly identical for testing purposes. Rather than
factor out small roles like abbey-git-server, Emacs Org Mode's Noweb
support does the duplication, by multiple references to code blocks
like git-tasks and git-handlers.
roles_t/abbey-front/tasks/main.yml
<<git-tasks>>
git-tasks
- name: Install git daemon.
  become: yes
  apt: pkg=git-daemon-sysvinit

- name: Configure git daemon.
  become: yes
  lineinfile:
    path: /etc/default/git-daemon
    regexp: "{{ item.patt }}"
    line: "{{ item.line }}"
  loop:
  - patt: '^GIT_DAEMON_ENABLE *='
    line: 'GIT_DAEMON_ENABLE=true'
  - patt: '^GIT_DAEMON_OPTIONS *='
    line: 'GIT_DAEMON_OPTIONS="--user-path=Public/Git"'
  - patt: '^GIT_DAEMON_BASE_PATH *='
    line: 'GIT_DAEMON_BASE_PATH="/var/www/git"'
  - patt: '^GIT_DAEMON_DIRECTORY *='
    line: 'GIT_DAEMON_DIRECTORY=" "'
  notify: Restart git daemon.

- name: Create /var/www/git/.
  become: yes
  file:
    path: /var/www/git
    state: directory
    group: staff
    mode: u=rwx,g=srwx,o=rx
roles_t/abbey-front/handlers/main.yml
<<git-handlers>>
git-handlers
- name: Restart git daemon.
  become: yes
  command: systemctl restart git-daemon
3.4. Configure Gitweb on Front
The abbey provides an HTML interface to members' public Git
repositories using gitweb.cgi, one of the few CGI scripts allowed on
Front. Unlike the Git daemon, the Gitweb interface does not care
whether the repository contains a git-daemon-export-ok file.
Again Front and Core need to be configured congruently, so the necessary Apache directives are given here and referenced in the Apache configurations.
Like the suggested per-user rewrite rule in the gitweb(1) manual
page, the second RewriteRule specifies the root directory of the
user's public Git repositories via the GITWEB_PROJECTROOT
environment variable. It makes http://www/~dick/git run
Gitweb with the project root ~dick/Public/Git/, the same directory
the git-daemon makes available. The first RewriteRule directs
URLs with no user name to the default. Thus http://www/git
lists the repositories found in /var/www/git/.
apache-gitweb
Alias /gitweb-static/ /usr/share/gitweb/static/
<Directory "/usr/share/gitweb/static/">
    Options MultiViews
</Directory>

RewriteEngine on
RewriteRule ^/git(/.*)?$ \
            /cgi-bin/gitweb.cgi$1 [QSA,L,PT]
RewriteRule ^/\~([^\/]+)/git(/.*)?$ \
            /cgi-bin/gitweb.cgi$2 \
            [QSA,E=GITWEB_PROJECTROOT:/home/$1/Public/Git/,L,PT]
The RewriteRule flags used here are:
- QSA | qsappend: Append the request's query string.
- E= | env: Set or unset an environment variable.
- L | last: Stop with this Last rule.
- PT | passthrough: Treat the result as a URI, not a file path.
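The effect of the per-user rule can be sketched with sed. This is only an approximation for illustration (Apache's PCRE and sed -E differ in details), and the rewrite_user_git name is invented here.

```shell
# Approximate the per-user Gitweb RewriteRule with sed: the user name
# becomes the project root, the tail becomes gitweb.cgi's path info.
# Illustration only; Apache's PCRE differs from sed -E in details.
rewrite_user_git() {
    echo "$1" | sed -E \
      's|^/~([^/]+)/git(/.*)?$|/cgi-bin/gitweb.cgi\2 GITWEB_PROJECTROOT=/home/\1/Public/Git/|'
}
```

For example, /~dick/git/foo rewrites to /cgi-bin/gitweb.cgi/foo with GITWEB_PROJECTROOT set to /home/dick/Public/Git/.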
The RewriteEngine on directive must be included in the virtual host
or no rewriting will take place.
The CGI script and RewriteRule require Apache's cgi and rewrite
modules, which are not normally enabled on a small institute's public
server, so they are enabled here. Note that Debian and Ubuntu install
different Apache MPMs (multi-processing modules) requiring different
CGI modules, turning two tasks into three. The script uses the CGI
Perl module, which must be installed.
The rewrite rules map to the URL /cgi-bin/gitweb.cgi, which is
mapped by default to /usr/lib/cgi-bin/gitweb.cgi. The git package
installs gitweb.cgi in /usr/share/gitweb/, so it and its related
index.cgi script are linked into /usr/lib/cgi-bin/.
The static/ directory, also installed in /usr/share/gitweb/, is
made available as http://www/gitweb-static/ via an Alias
directive. The global Perl configuration file, /etc/gitweb.conf,
overrides the relative URLs Gitweb normally generates, and uses the
web site's /favicon.ico.
apache-gitweb-tasks
- name: Enable Apache2 rewrite module for Gitweb.
  become: yes
  apache2_module: name=rewrite
  notify: Restart Apache2.

- name: Enable Apache2 cgid module.
  become: yes
  apache2_module: name=cgid
  notify: Restart Apache2.

- name: Install libcgi-pm-perl for Gitweb.
  become: yes
  apt: pkg=libcgi-pm-perl

- name: Link Gitweb into /cgi-bin/.
  become: yes
  file:
    state: link
    path: /usr/lib/cgi-bin/{{ item }}
    src: /usr/share/gitweb/{{ item }}
  loop: [ gitweb.cgi, index.cgi ]

- name: Override Gitweb assets location.
  become: yes
  copy:
    content: |
      $projectroot = $ENV{'GITWEB_PROJECTROOT'} || "/var/www/git";
      @stylesheets = ("/gitweb-static/gitweb.css");
      $logo = "/gitweb-static/git-logo.png";
      $favicon = "/favicon.ico";
      $javascript = "/gitweb-static/gitweb.js";
    dest: /etc/gitweb.conf
    mode: u=rw,g=r,o=r
apache-gitweb-handlers
- name: Restart Apache2.
  become: yes
  systemd:
    service: apache2
    state: restarted
3.5. Configure Apache for Abbey Documentation
Some of the directives added to the -vhost.conf file are needed by
the abbey's documentation, published at
https://birchwood-abbey.net/Abbey/. The following template uses a
docroot variable for the actual path to the HTML. On Front this
variable is set to /home/www. The same template is used on Core, to
ensure matching configurations for accurate previews and tests.
The abbey's network documentation currently uses automatic directory indexes, and declares the types of files with several additional filename suffixes.
apache-abbey
<Directory {{ docroot }}/Abbey/>
    AllowOverride Indexes FileInfo
    Options +Indexes +FollowSymLinks
</Directory>
The following .htaccess file works with the directives above. It
declares most of the native source files in the current directory tree
to be plain text, so that they are displayed rather than downloaded.
.htaccess
ReadmeName notfound.html
IndexIgnore README.org
AddType text/plain attr campus_vpn cfg cnf conf crt daily_letsencrypt
AddType text/plain domain el htaccess idx j2 key old org pack pem
AddType text/plain private pub public_vpn req rev sample txt yml
3.6. Configure Photos URLs on Front
Some of the directives added to the -vhost.conf file map the abbey's
abstract photo URLs, e.g. /Photos/2022_08_06/, into actual file
paths. The following template uses the docroot variable introduced
in the previous section. On Front this variable is set to
/home/www. The same template is used on Core, to ensure
matching configurations for accurate previews and tests.
apache-photos
RedirectMatch /Photos$ /Photos/
RedirectMatch /Photos/(20[0-9][0-9])_([0-9][0-9])_([0-9][0-9])$ \
              /Photos/$1_$2_$3/
AliasMatch /Photos/(20[0-9][0-9])_([0-9][0-9])_([0-9][0-9])/(.+)$ \
           {{ docroot }}/Photos/$1/$2/$3/$4
AliasMatch /Photos/(20[0-9][0-9])_([0-9][0-9])_([0-9][0-9])/$ \
           {{ docroot }}/Photos/$1/$2/$3/index.html
AliasMatch /Photos/$ {{ docroot }}/Photos/index.html
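The dated-photo mapping can be sketched with sed in the same spirit. A sketch only, assuming docroot is /home/www as on Front; the photo_path name is invented here.

```shell
# Approximate the dated-photo AliasMatch with sed, assuming the Front
# docroot /home/www: /Photos/YYYY_MM_DD/... maps to .../YYYY/MM/DD/...
photo_path() {
    echo "$1" | sed -E \
      's|^/Photos/(20[0-9][0-9])_([0-9][0-9])_([0-9][0-9])/(.+)$|/home/www/Photos/\1/\2/\3/\4|'
}
```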
3.7. Configure Apache on Front
The abbey needs to add some Apache2 configuration directives to the
virtual host listening for HTTPS requests to birchwood-abbey.net.
Luckily there is support for this in the institutional configuration.
The abbey simply creates a birchwood-abbey.net-vhost.conf file in
/etc/apache2/sites-available/.
The following task adds the apache-abbey, apache-photos, and
apache-gitweb directives described above to the -vhost.conf file,
and includes options-ssl-apache.conf from /etc/letsencrypt/. The
rest of the Let's Encrypt configuration is discussed in the Install
Let's Encrypt section below.
roles_t/abbey-front/tasks/main.yml
- name: Configure Apache.
  become: yes
  vars:
    docroot: /home/www
  copy:
    content: |
      <<apache-abbey>>
      <<apache-photos>>
      <<apache-gitweb>>
      IncludeOptional /etc/letsencrypt/options-ssl-apache.conf
    dest: /etc/apache2/sites-available/birchwood-abbey.net-vhost.conf
  notify: Restart Apache2.

<<apache-gitweb-tasks>>
roles_t/abbey-front/handlers/main.yml
<<apache-gitweb-handlers>>
3.8. Configure Apache Log Archival
These tasks hack Apache's logrotate(8) configuration to rotate
weekly, keep the last 12 weeks, and email each week's log to root.
The logrotate(8) manual page explains the configuration options.
The systemd configuration drop-in tells logrotate to use a special
script as its mail program. Postfix's mail work-alike does not take
the subject as a command-line argument the way logrotate provides
it. The replacement, logrotate-mailer, does, and includes it in a
Subject header prepended to logrotate's message.
roles_t/abbey-front/tasks/main.yml
- name: Configure Apache log archival.
  become: yes
  lineinfile:
    path: /etc/logrotate.d/apache2
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  loop:
  - { regexp: '^ *daily', line: "\tweekly" }
  - { regexp: '^ *rotate', line: "\trotate 12" }

- name: Configure Apache log email.
  become: yes
  lineinfile:
    path: /etc/logrotate.d/apache2
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    insertbefore: " *}"
    firstmatch: yes
  loop:
  - { regexp: "^\tmail ", line: "\tmail webmaster" }
  - { regexp: "^\tmailfirst", line: "\tmailfirst" }

- name: Configure logrotate.
  become: yes
  copy:
    src: logrotate-mailer.conf
    dest: /etc/systemd/system/logrotate.service.d/mailer.conf
  notify: Reload systemd.

- name: Install logrotate mailer.
  become: yes
  copy:
    src: logrotate-mailer
    dest: /usr/local/sbin/logrotate-mailer
    mode: u=rwx,g=rx,o=rx
roles_t/abbey-front/handlers/main.yml
- name: Reload systemd.
  become: yes
  systemd:
    daemon_reload: yes
Note that the first setting for ExecStart is intended to clear the
system's ExecStart in /lib/systemd/system/logrotate.service. (A
oneshot service like this can have multiple ExecStart settings.
See the description of ExecStart in the systemd.service(5) manual
page.)
roles_t/abbey-front/files/logrotate-mailer.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/logrotate \
    --mail /usr/local/sbin/logrotate-mailer \
    /etc/logrotate.conf
The /usr/local/sbin/logrotate-mailer script (below) was originally
needed because Postfix does not provide an emulation of mail(1) and
some translation to sendmail(1) was required. Since then the script
has learned to compute the date-dependent file name, compress the log,
convert it to base64, and encapsulate it in MIME format, before
encrypting and sending to sendmail.
roles_t/abbey-front/files/logrotate-mailer
#!/bin/bash -e

if [ "$#" != 3 -o "$1" != "-s" ]; then
    echo "usage: $0 -s subject recipient" 1>&2
    exit 1
fi

D=`date -d yesterday "+%Y%m%d"`
if [[ "$2" == *error.log* ]]; then
    F="$D-error.log.gz"
else
    F="$D.log.gz"
fi

( echo "Subject: $2"
  echo ""
  ( echo "Content-Type: multipart/mixed; boundary=\"boundary\""
    echo "MIME-Version: 1.0"
    echo ""
    echo "--boundary"
    echo "Content-Type: text/plain"
    echo "Content-Transfer-Encoding: 8bit"
    echo ""
    echo "$F"
    echo "--boundary"
    echo "Content-Type: application/gzip; name=\"$F\""
    echo "Content-Disposition: attachment; filename=\"$F\""
    echo "Content-Transfer-Encoding: base64"
    echo ""
    gzip | base64
    echo ""
    echo "--boundary--" ) \
  | gpg --encrypt --armor \
        --trust-model always --recipient root@core ) \
| sendmail root \
|| exit $?
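The script's date-dependent file-name logic can be exercised in isolation. The log_name function below restates just that fragment as a sketch, with the date made a parameter so it can be tested without waiting for yesterday.

```shell
# The attachment-name logic from logrotate-mailer, restated as a
# function (a sketch) so it can be checked without sending any mail.
log_name() {  # log_name SUBJECT DATE
    case "$1" in
        *error.log*) echo "$2-error.log.gz" ;;
        *)           echo "$2.log.gz" ;;
    esac
}
```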
3.9. Install Let's Encrypt
The abbey uses a Let's Encrypt certificate to authenticate its public web site and email services. Initial installation of a Let's Encrypt certificate is a terminal session affair (with prompts and lines entered as shown below).
$ sudo apt install python3-certbot-apache
$ sudo certbot --apache -d birchwood-abbey.net
...
Enter email address (...) (Enter 'c' to cancel): webmaster@birchwood-abbey.net
...
Please read the Terms of Service at ...
(A)gree/(C)ancel: A
...
Would you be willing to share your email address...
...
(Y)es/(N)o: Y
...
Deploying Certificate to VirtualHost
    /etc/apache2/sites-enabled/birchwood-abbey.net.conf
Please choose whether or not to redirect HTTP traffic to HTTPS,
removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
...
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled
https://birchwood-abbey.net
You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=birchwood-abbey.net
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IMPORTANT NOTES:
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
...
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/birchwood-abbey.net/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/birchwood-abbey.net/privkey.pem
   Your cert will expire on 2019-01-13. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
When the /etc/letsencrypt/ directory is restored from a backup copy,
and the following tasks performed, the web server will be prepared to
do ACME (the certificate protocol) when next Let's Encrypt calls
(quarterly). The following tasks ensure the python3-certbot-apache
package is installed and that the live/ subdirectory of
/etc/letsencrypt/ is world readable.
roles_t/abbey-front/tasks/main.yml
- name: Install Certbot for Apache.
  become: yes
  apt: pkg=python3-certbot-apache

- name: Ensure Let's Encrypt certificate is readable.
  become: yes
  file:
    mode: u=rwx,g=rx,o=rx
    path: /etc/letsencrypt/live
Front's Dovecot (and Postfix) certificate and key are kept in separate files, despite warnings about a race condition when updating the pair, mainly because that is how Let's Encrypt provides (and updates) them, but also because Let's Encrypt's symbolic links keep the window for a mismatch extremely small.
With the institutional configuration, the Postfix, Dovecot and Apache
servers get their certificate and key from /etc/server.crt and
/etc/server.key. The institutional roles check that these exist, but
will not create them. In this abbey-specific role, /etc/server.crt
and /etc/server.key are ours to frob. The following tasks ensure
they are symbolic links to
/etc/letsencrypt/live/birchwood-abbey.net/fullchain.pem and
privkey.pem respectively. If /etc/letsencrypt/ was restored from a
backup, the servers should be restarted manually.
roles_t/abbey-front/tasks/main.yml
- name: Use Let's Encrypt certificate&key.
  become: yes
  file:
    state: link
    src: "{{ item.target }}"
    path: "{{ item.link }}"
    force: yes
  loop:
  - target: /etc/letsencrypt/live/birchwood-abbey.net/fullchain.pem
    link: /etc/server.crt
  - target: /etc/letsencrypt/live/birchwood-abbey.net/privkey.pem
    link: /etc/server.key
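After restoring /etc/letsencrypt/ from backup, it is worth checking that the linked certificate and key still correspond. A sketch, assuming openssl(1) is available; the certkey_match name is invented here.

```shell
# Succeed when a certificate and a private key carry the same public
# key, e.g.: certkey_match /etc/server.crt /etc/server.key
# A sketch only; assumes openssl(1) is installed.
certkey_match() {
    test "$(openssl x509 -noout -pubkey -in "$1")" \
       = "$(openssl pkey -pubout -in "$2")"
}
```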
3.10. Rotate Let's Encrypt Log
The following task arranges to rotate Certbot's log files.
roles_t/abbey-front/tasks/main.yml
- name: Install Certbot logrotate configuration.
  become: yes
  copy:
    src: certbot_logrotate
    dest: /etc/logrotate.d/certbot
    mode: u=rw,g=r,o=r
roles_t/abbey-front/files/certbot_logrotate
/var/log/letsencrypt/*.log {
    rotate 12
    weekly
    compress
    missingok
}
3.11. Archive Let's Encrypt Data
A backup copy of Let's Encrypt's data (/etc/letsencrypt/) is sent to
root@core in OpenPGP encrypted email every time it changes. Changes
are detected by keeping a copy in /etc/letsencrypt~/ for comparison.
roles_t/abbey-front/tasks/main.yml
- name: Install Let's Encrypt archive script.
  become: yes
  copy:
    src: cron.daily_letsencrypt
    dest: /etc/cron.daily/letsencrypt
    mode: u=rwx,g=rx,o=rx
roles_t/abbey-front/files/cron.daily_letsencrypt
#!/bin/bash -e

cd /etc/
[ -d letsencrypt~ ] \
    && diff -rq letsencrypt/ letsencrypt~/ \
    && exit 0
( echo "Subject: New /etc/letsencrypt/ on Droplet."
  echo ""
  tar czf - letsencrypt/ \
  | gpg --encrypt --armor \
        --trust-model always --recipient root@core ) \
| sendmail root \
|| exit $?
rm -rf letsencrypt~
cp -a letsencrypt letsencrypt~
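The shadow-copy idiom (compare, mail, refresh the shadow) can be isolated and exercised in a scratch directory. A sketch; the changed name is invented here.

```shell
# Succeed when DIR differs from its shadow copy, or no shadow exists:
# the condition under which the cron job mails a fresh backup.
# A sketch of the idiom only.
changed() {  # changed DIR SHADOW
    ! { [ -d "$2" ] && diff -rq "$1" "$2" > /dev/null; }
}
```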
The message is encrypted with root@core's public key, which is
imported into root@front's GnuPG key file.
roles_t/abbey-front/tasks/main.yml
- name: Copy root@core's public key.
  become: yes
  copy:
    src: ../Secret/root-pub.pem
    dest: /root/.gnupg-root-pub.pem
    mode: u=r,g=r,o=r
  notify: Import root@core's public key.
roles_t/abbey-front/handlers/main.yml
- name: Import root@core's public key.
  become: yes
  command: gpg --import ~/.gnupg-root-pub.pem
4. The Abbey Core Role
Birchwood Abbey's core is a mini-PC (System76 Meerkat) configured as A Small Institute Core. Thus it is already serving a local web site with Apache2, hosting a private cloud with Nextcloud, handling email with Postfix and Dovecot, and providing essential localnet services: NTP, DNS and DHCP.
4.1. Include Abbey Variables
In this abbey specific document, most abbey particulars are not
replaced with variables, but specified in-line. Some, however, are
private (e.g. database passwords), not to be published in this
document, and so are replaced with variables set in
private/vars-abbey.yml. The file path is relative to the playbook's
directory, playbooks/.
roles_t/abbey-core/tasks/main.yml
---
- name: Include private abbey variables.
  include_vars: ../private/vars-abbey.yml
4.2. Install Additional Packages
The scripts that maintain the abbey's web site use a number of
additional software packages. The /WWW/live/Private/make-top-index
script uses HTML::TreeBuilder from the libhtml-tree-perl package.
The house task list uses jQuery.
roles_t/abbey-core/tasks/main.yml
- name: Install additional packages.
  become: yes
  apt:
    pkg: [ libhtml-tree-perl, libjs-jquery, mit-scheme, gnuplot ]
4.3. Configure Private Email Aliases
The abbey uses several additional email aliases. These are the campus
mailboxes @*.birchwood.private. The institute already includes
some standard system aliases, as well as mailboxes for accounts
running services: www-data and monkey. The institute funnels
these to root and forwards root to sysadm (as on Front). The
abbey takes it from there, forwarding sysadm to a real person and
including mailboxes for all accounts running services on any campus
machine. (They should all be relaying to smtp.birchwood.private,
which delivers any .birchwood.private email,
e.g. mythtv@mythtv.birchwood.private, locally.)
roles_t/abbey-core/tasks/main.yml
- name: Install abbey email aliases.
  become: yes
  blockinfile:
    block: |
      sysadm: matt
      house: sysadm
      mythtv: sysadm
      scanner: sysadm
    dest: /etc/aliases
    marker: "# {mark} ABBEY MANAGED BLOCK"
  notify: New aliases.
roles_t/abbey-core/handlers/main.yml
---
- name: New aliases.
  become: yes
  command: newaliases
4.4. Configure Git Daemon on Core
These tasks are identical to those executed on Front, providing similar Git services on Front and Core. See Configure Git Daemon on Front and Configure Gitweb on Front for more information.
roles_t/abbey-core/tasks/main.yml
<<git-tasks>>
roles_t/abbey-core/handlers/main.yml
<<git-handlers>>
4.5. Configure Apache on Core
The Apache2 configuration on Core specifies three web sites (live,
test, and campus). The live and test sites must operate just like the
site on Front. Their configurations include the same apache-abbey,
apache-photos, and apache-gitweb directives used on Front.
roles_t/abbey-core/tasks/main.yml
- name: Configure live website.
  become: yes
  vars:
    docroot: /WWW/live
  copy:
    content: |
      <<apache-abbey>>
      <<apache-photos>>
      <<apache-gitweb>>
    dest: /etc/apache2/sites-available/live-vhost.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.

- name: Configure test website.
  become: yes
  vars:
    docroot: /WWW/test
  copy:
    content: |
      <<apache-abbey>>
      <<apache-photos>>
      <<apache-gitweb>>
    dest: /etc/apache2/sites-available/test-vhost.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.

<<apache-gitweb-tasks>>
roles_t/abbey-core/handlers/main.yml
<<apache-gitweb-handlers>>
4.6. Configure Documentation URLs
The institute serves its /usr/share/doc/ on the house (campus) web
site. This is a debugging convenience, making some HTML documentation
more accessible, especially the documentation of software installed on
Core and not on typical desktop clients. Also included: the Apache2
directives that enable user Git publishing with Gitweb (defined here).
roles_t/abbey-core/tasks/main.yml
- name: Configure house website.
  become: yes
  copy:
    content: |
      Alias /doc /usr/share/doc
      <Directory /usr/share/doc/>
          Options Indexes
      </Directory>
      <<apache-gitweb>>
    dest: /etc/apache2/sites-available/www-vhost.conf
    mode: u=rw,g=r,o=r
  notify: Restart Apache2.
4.7. Install Apt Cacher
The abbey uses the Apt-Cacher:TNG package cache on Core. The
apt-cacher domain name is defined in private/db.domain.
roles_t/abbey-core/tasks/main.yml
- name: Install Apt-Cacher:TNG.
  become: yes
  apt: pkg=apt-cacher-ng
4.8. Use Cloister Apt Cache
Core itself will benefit from using the package cache, but should
contact https repositories directly. (There are few such cretins,
so caching their packages is not a priority.)
roles_t/abbey-core/tasks/main.yml
- name: Use the local Apt package cache.
  become: yes
  copy:
    content: >
      Acquire::http::Proxy "http://apt-cacher.birchwood.private.:3142";
      Acquire::https::Proxy "DIRECT";
    dest: /etc/apt/apt.conf.d/01proxy
    mode: u=rw,g=r,o=r
4.9. Configure NAGIOS
A small institute uses nagios4 to monitor the health of its network,
with an initial smattering of monitors adopted from the Debian
monitoring-plugins package. Thus a NAGIOS4 server on the abbey's
Core monitors core network services, and uses nagios-nrpe-server to
monitor Gate. The abbey adds several more monitors, installing
additional configuration files in /etc/nagios4/conf.d/, and another
customized check_sensors plugin (abbey_pisensors) in
/usr/local/sbin/ on the Raspberry Pis.
4.10. Monitoring The Home Disk
The abbey adds monitoring of the space remaining on the volume at
/home/ on Core. (The small institute only monitors the space
remaining on root partitions.)
roles_t/abbey-core/tasks/main.yml
- name: Configure NAGIOS monitoring for Core /home/.
  become: yes
  copy:
    content: |
      define service {
          use                 local-service
          host_name           core
          service_description Home Partition
          check_command       check_local_disk!20%!10%!/home
      }
    dest: /etc/nagios4/conf.d/abbey.cfg
  notify: Reload NAGIOS4.
roles_t/abbey-core/handlers/main.yml
- name: Reload NAGIOS4.
  become: yes
  systemd:
    service: nagios4
    state: reloaded
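The check_local_disk!20%!10%!/home command above passes 20% and 10% as the warning and critical thresholds on free space. The semantics can be restated as a sketch (the disk_state name is invented here; the real check_disk plugin reads free space from the filesystem itself):

```shell
# Restate check_disk's percentage thresholds as used above: warn under
# 20% free, critical under 10% free.  Illustration only; the real
# plugin measures the filesystem itself.
disk_state() {  # disk_state FREE_PERCENT
    if [ "$1" -lt 10 ]; then
        echo CRITICAL
    elif [ "$1" -lt 20 ]; then
        echo WARNING
    else
        echo OK
    fi
}
```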
4.11. Custom NAGIOS Monitor abbey_pisensors
The check_sensors plugin is included in the package
monitoring-plugins-basic, but it does not report any readings. The
small institute substitutes a Custom NAGIOS Monitor inst_sensors
that reports core CPU temperatures, but the sensors command on a
Raspberry Pi does not reveal core CPU temperatures, so the abbey
includes yet another version, abbey_pisensors, that reports any
recognizable temperature in the sensors output.
roles_t/abbey-core/files/abbey_pisensors
#!/bin/sh

PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
export PATH
PROGNAME=`basename $0`
REVISION="2.3.1"

. /usr/lib/nagios/plugins/utils.sh

print_usage() {
    echo "Usage: $PROGNAME [--ignore-fault]"
}

print_help() {
    print_revision $PROGNAME $REVISION
    echo ""
    print_usage
    echo ""
    echo "This plugin checks hardware status using" \
         "the lm_sensors package."
    echo ""
    support
    exit $STATE_OK
}

brief_data() {
    echo "$1" | sed -n -E -e '
        /^temp[0-9]+: +[-+][0-9.]+.?C/ {
            s/^temp[0-9]+: +([-+][0-9.]+).?C.*/ \1/; H }
        $ { x; s/\n//g; p }'
}

case "$1" in
    --help)
        print_help
        exit $STATE_OK
        ;;
    -h)
        print_help
        exit $STATE_OK
        ;;
    --version)
        print_revision $PROGNAME $REVISION
        exit $STATE_OK
        ;;
    -V)
        print_revision $PROGNAME $REVISION
        exit $STATE_OK
        ;;
    *)
        sensordata=`sensors 2>&1`
        status=$?
        if test ${status} -eq 127; then
            text="SENSORS UNKNOWN - command not found"
            text="$text (did you install lmsensors?)"
            exit=$STATE_UNKNOWN
        elif test ${status} -ne 0; then
            text="WARNING - sensors returned state $status"
            exit=$STATE_WARNING
        elif echo ${sensordata} | egrep ALARM > /dev/null; then
            text="SENSOR CRITICAL -`brief_data "${sensordata}"`"
            exit=$STATE_CRITICAL
        elif echo ${sensordata} | egrep FAULT > /dev/null \
             && test "$1" != "-i" -a "$1" != "--ignore-fault"; then
            text="SENSOR UNKNOWN - Sensor reported fault"
            exit=$STATE_UNKNOWN
        else
            text="SENSORS OK -`brief_data "${sensordata}"`"
            exit=$STATE_OK
        fi
        echo "$text"
        if test "$1" = "-v" -o "$1" = "--verbose"; then
            echo ${sensordata}
        fi
        exit $exit
        ;;
esac
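The sed script at the heart of brief_data can be exercised on canned sensors output. In this sketch the degree sign is replaced with an ASCII quote to sidestep locale issues; the pattern's .?C tolerates either character.

```shell
# Exercise the temperature-extracting sed from abbey_pisensors on
# canned input.  The degree sign is ASCII-substituted with ' here;
# the .?C in the pattern matches either character.
brief_data() {
    echo "$1" | sed -n -E -e '
        /^temp[0-9]+: +[-+][0-9.]+.?C/ {
            s/^temp[0-9]+: +([-+][0-9.]+).?C.*/ \1/; H }
        $ { x; s/\n//g; p }'
}
```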
4.12. Monitoring The Cloister
The abbey adds monitoring for more servers: Kamino, Kessel, and Ord
Mantell. They are abbey-cloister servers, so they are configured as
small institute campus servers, like Gate, with an NRPE (NAGIOS
Remote Plugin Executor) server and an inst_sensors command.
The configurations for the servers are very similar to Gate's, but are
idiosyncratically in flux. In particular, Kamino does not irritate
check_total_procs, yet Kessel does. Both are Pop!_OS 22.04, but
Kessel is a wireless host while Kamino is wired. Ord Mantell, the
Raspberry Pi OS (ARM64) machine, uses the abbey_pisensors monitor.
4.12.1. Cloister Network Addresses
The IP addresses of all three hosts are nice to use in the NAGIOS
configuration (to avoid depending on name service) and so are
included in private/vars-abbey.yml.
private_ex/vars-abbey.yml
---
kamino_addr: 192.168.56.14
kessel_addr: 10.84.138.8
ord_mantell_addr: 10.84.138.10
4.12.2. Installing NAGIOS Configurations
The following task installs each host's NAGIOS configuration. Note that Kamino is not included. It is currently unmonitored as it is now rarely powered up.
roles_t/abbey-core/tasks/main.yml
- name: Configure cloister NAGIOS monitoring.
  become: yes
  template:
    src: nagios-{{ item }}.cfg
    dest: /etc/nagios4/conf.d/{{ item }}.cfg
  loop: [ ord-mantell, kessel ]
  notify: Reload NAGIOS4.
4.12.3. NAGIOS Monitoring of Ord-Mantell
roles_t/abbey-core/templates/nagios-ord-mantell.cfg
define host {
    use                 linux-server
    host_name           ord-mantell
    address             {{ ord_mantell_addr }}
}

define service {
    use                 generic-service
    host_name           ord-mantell
    service_description Root Partition
    check_command       check_nrpe!inst_root
}

# define service {
#     use                 generic-service
#     host_name           ord-mantell
#     service_description Current Load
#     check_command       check_nrpe!check_load
# }

define service {
    use                 generic-service
    host_name           ord-mantell
    service_description Zombie Processes
    check_command       check_nrpe!check_zombie_procs
}

# define service {
#     use                 generic-service
#     host_name           ord-mantell
#     service_description Total Processes
#     check_command       check_nrpe!check_total_procs
# }

define service {
    use                 generic-service
    host_name           ord-mantell
    service_description Swap Usage
    check_command       check_nrpe!inst_swap
}

define service {
    use                 generic-service
    host_name           ord-mantell
    service_description Temperature Sensors
    check_command       check_nrpe!abbey_pisensors
}
4.12.4. NAGIOS Monitoring of Kamino
roles_t/abbey-core/templates/nagios-kamino.cfg
define host {
    use                 linux-server
    host_name           kamino
    address             {{ kamino_addr }}
}
define service {
    use                 generic-service
    host_name           kamino
    service_description Root Partition
    check_command       check_nrpe!inst_root
}
define service {
    use                 generic-service
    host_name           kamino
    service_description Current Load
    check_command       check_nrpe!check_load
}
define service {
    use                 generic-service
    host_name           kamino
    service_description Zombie Processes
    check_command       check_nrpe!check_zombie_procs
}
# define service {
#     use                 generic-service
#     host_name           kamino
#     service_description Total Processes
#     check_command       check_nrpe!check_total_procs
# }
define service {
    use                 generic-service
    host_name           kamino
    service_description Swap Usage
    check_command       check_nrpe!inst_swap
}
define service {
    use                 generic-service
    host_name           kamino
    service_description Temperature Sensors
    check_command       check_nrpe!inst_sensors
}
4.12.5. NAGIOS Monitoring of Kessel
roles_t/abbey-core/templates/nagios-kessel.cfg
define host {
    use                 linux-server
    host_name           kessel
    address             {{ kessel_addr }}
}
define service {
    use                 generic-service
    host_name           kessel
    service_description Root Partition
    check_command       check_nrpe!inst_root
}
# define service {
#     use                 generic-service
#     host_name           kessel
#     service_description Current Load
#     check_command       check_nrpe!check_load
# }
define service {
    use                 generic-service
    host_name           kessel
    service_description Zombie Processes
    check_command       check_nrpe!check_zombie_procs
}
# define service {
#     use                 generic-service
#     host_name           kessel
#     service_description Total Processes
#     check_command       check_nrpe!check_total_procs
# }
define service {
    use                 generic-service
    host_name           kessel
    service_description Swap Usage
    check_command       check_nrpe!inst_swap
}
define service {
    use                 generic-service
    host_name           kessel
    service_description Temperature Sensors
    check_command       check_nrpe!inst_sensors
}
4.13. Install Munin
The abbey is experimenting with Munin. NAGIOS is all about notifying the Sys. Admin. of failed services. Munin is more about tracking trends in resource usage.
roles_t/abbey-core/tasks/main.yml
- name: Install Munin.
  become: yes
  apt: pkg=munin

- name: Add {{ ansible_user }} to munin group.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: munin

- name: Enable network access to Munin.
  become: yes
  lineinfile:
    path: /etc/munin/apache24.conf
    regexp: '([^#]*)Require'
    line: '\1Require all granted'
    backrefs: yes
  notify: Restart Apache2.

- name: Punt default Munin node.
  become: yes
  replace:
    path: /etc/munin/munin.conf
    regexp: '^\[localhost.*\n\n'

- name: Configure actual Munin nodes.
  become: yes
  copy:
    content: |
      [dantooine.birchwood.private]
          address 127.0.0.1
      [anoat.birchwood.private]
          address {{ gate_addr }}
      [kessel.birchwood.private]
          address {{ kessel_addr }}
      [ord-mantell.birchwood.private]
          address {{ ord_mantell_addr }}
    dest: /etc/munin/munin-conf.d/zzz-site.cfg
  notify: Restart Munin.
The core machine's sensors produce some unfortunate measurements.
The next task configures libsensors to ignore them.
roles_t/abbey-core/tasks/main.yml
- name: Configure core sensors(1).
  become: yes
  copy:
    content: |
      chip "iwlwifi_1-virtual-0"
          ignore temp1
      chip "acpitz-acpi-0"
          ignore temp1
    dest: /etc/sensors.d/site.conf
roles_t/abbey-core/handlers/main.yml
- name: Restart Munin.
  become: yes
  systemd:
    service: munin
    state: restarted
4.14. Install Analog
The abbey's public web site's access and error logs are emailed
regularly to webmaster, who saves them in /Logs/apache2-public/ and
runs analog to generate /WWW/campus/analog.html, available to the
campus as http://www/analog.html.
roles_t/abbey-core/tasks/main.yml
- name: Install Analog.
  become: yes
  apt: pkg=analog

- name: Configure Analog (removing old /var/log/apache/ LOGFILEs).
  become: yes
  lineinfile:
    path: /etc/analog.cfg
    regexp: '^LOGFILE /var/log/apache/'
    state: absent

- name: Configure Analog (adding new configuration lines).
  become: yes
  lineinfile:
    path: /etc/analog.cfg
    line: "{{ item }}"
    insertafter: EOF
  loop:
  - "LOGFILE /Logs/apache2-public/*-access.log.gz"
  - "ALLCHART OFF"
  - "DNS WRITE"
  - "HOSTNAME \"{{ full_name }}\""
  - "OUTFILE /WWW/campus/analog.html"

- name: Create /Logs/.
  become: yes
  file:
    path: /Logs
    state: directory
    mode: u=rwx,g=rx,o=rx

- name: Create /Logs/apache2-public/.
  become: yes
  file:
    path: /Logs/apache2-public
    state: directory
    owner: monkey
    group: staff
    mode: u=rwx,g=srwx,o=rx
4.15. Add Monkey to Web Server Group
Monkey needs to be in www-data so that it can run
/WWW/live/Photos/Private/cronjob to publish photos from multiple
user cloud accounts, found in files owned by www-data, files like
InstantUpload/Camera/2021/01/IMG_20210115_092838.jpg in
/var/www/nextcloud/data/$USER/files/.
roles_t/abbey-core/tasks/main.yml
- name: Add Monkey to Nextcloud group.
  become: yes
  user:
    name: monkey
    append: yes
    groups: www-data
4.16. Install netpbm For Photo Processing
Monkey's photo processing scripts use netpbm commands like
jpegtopnm.
roles_t/abbey-core/tasks/main.yml
- name: Install netpbm.
become: yes
apt: pkg=netpbm
4.17. Install Samba
The abbey core provides NAS (Network Attached Storage) service to the cloister network. It also provides writable shares for a Home Assistant appliance (Raspberry Pi).
- Install samba.
- Create system user hass.
- Create /home/hass/{media,backup,share}/ with appropriate
  permissions.
roles_t/abbey-core/tasks/main.yml
- name: Install Samba.
  become: yes
  apt: pkg=samba

- name: Add system user hass.
  become: yes
  user:
    name: hass
    system: yes

- name: Add {{ ansible_user }} to hass group.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: hass

- name: Configure shares.
  become: yes
  blockinfile:
    block: |
      [Shared]
      path = /Shared
      guest ok = yes
      read only = yes
      [HASS-backup]
      comment = Home Assistant backup
      path = /home/hass/backup
      valid users = hass
      read only = no
      [HASS-media]
      comment = Home Assistant media
      path = /home/hass/media
      valid users = hass
      read only = yes
      [HASS-share]
      comment = Home Assistant share
      path = /home/hass/share
      valid users = hass
      read only = no
    dest: /etc/samba/smb.conf
    marker: "# {mark} ABBEY MANAGED BLOCK"
  notify: New shares.
roles_t/abbey-core/handlers/main.yml
- name: New shares.
  become: yes
  systemd:
    service: smbd
    state: reloaded
5. The Abbey Gate Role
Birchwood Abbey's gate is a $110 µPC configured as A Small Institute
Gate, thus providing a campus VPN on a campus Wi-Fi access point. It
routes network traffic from its wifi and lan interfaces to its isp
interface (and back) with NAT. That is all the abbey requires of its
gate, so there is no additional Ansible configuration in this
chapter (yet).
5.1. The Abbey Gate's Network Interfaces
The abbey gate's lan interface is the PC's built-in Ethernet
interface, connected to the cloister Ethernet, a Gigabit Ethernet
switch. Its wifi interface is a USB3.0 Ethernet adapter connected
with a cross-over cable to the WAN interface of a Think Penguin
TPE-R1300 (and at one time a Linksys WRT1900AC). The isp interface
is another USB3.0 Ethernet adapter connected with a cross-over cable
to the Ethernet interface of a "cable modem" (a Starlink terminal).
The MAC address of each interface is set in private/vars.yml (see
Institute/private/vars.yml) as the values of the gate_lan_mac,
gate_wifi_mac and gate_isp_mac variables.
5.2. The Abbey's Starlink Configuration
The abbey connects to Starlink via Ethernet, and disables Starlink's Wi-Fi access point. An Ethernet adapter add-on (ordered separately) was installed on the Starlink cable, and a second USB-Ethernet dongle on Gate. The adapters were then connected with a cross-over cable.
The abbey could have avoided buying a separate cloister Wi-Fi access point, and used Starlink's Wi-Fi instead, with or without its add-on Ethernet interface. Instead, the abbey invested in a 2.4GHz-only Think Penguin access point, and connected it to a third Ethernet interface on Gate. This was preferred for a number of reasons.
The abbey uses ISPs other than Starlink, tethering to a cellphone when under trees, or even limping along on campground Wi-Fi where the land of woven trees has cut off even cell service.
The abbey uses long and complex passwords, especially on public facing services like Wi-Fi. Such a password has been laboriously entered into several household IoT devices. Connecting them to a dedicated, ISP-independent cloister Wi-Fi access point ensures a reliable IoT with zero re-configuration.
Using Starlink's add-on Ethernet interface allowed its Wi-Fi to be disabled, reducing the Wi-Fi clutter in the campground ether.
The Think Penguin access point is transparent, trustworthy hardware that has earned a Respects Your Freedom certification (see https://ryf.fsf.org/).
And most importantly, a dedicated and trustworthy cloister Wi-Fi keeps at least our local network traffic out of view of our ISPs.
5.3. Alternate ISPs
The abbey used to use a cell phone on a USB tether to get Internet
service. At that time, Gate's /etc/netplan/60-isp.yaml file was the
following.
network:
  ethernets:
    tether:
      match:
        name: usb0
      set-name: isp
      dhcp4: true
      dhcp4-overrides:
        use-dns: false
The abbey has occasionally used a campground Wi-Fi for Internet
service, using a 60-isp.yaml file similar to the lines below.
network:
  wifis:
    tether:
      match:
        name: wlan0
      set-name: isp
      dhcp4: true
      dhcp4-overrides:
        use-dns: false
      access-points:
        "AP with password":
          password: "password"
        "AP with no password": {}
6. The Abbey Cloister Role
Birchwood Abbey's cloister is a small institute campus. The campus
role configures all campus machines to trust the institute's CA,
sync with the campus time server, and forward email to Core. The
abbey-cloister role additionally configures cloistered machines to
use the cloister Apt cache, respond to Core's NAGIOS and Munin
network monitors, and to install Emacs. There are also a few OS
specific tasks, namely configuration required on Raspberry Pi OS
machines.

Wireless clients are issued keys for the cloister VPN by the ./abbey
client command, which is currently identical to the ./inst client
command (described in The Client Command). The wireless, cloistered
hosts never roam, are not associated with a member, and so are
"campus" clients, issued keys with commands like this:
./abbey client campus new-host-name
6.1. Use Cloister Apt Cache
The Apt-Cacher:TNG program does not work well on the frontier, so it
is not a common part of a small institute. But it is helpful even
for a cloister with fewer than a dozen hosts (especially a
homogeneous cloister using many of the same packages), so it is
tolerable to the abbey's monks. Monks are patient enough to re-run
failed scans repeatedly until few or no incomplete or damaged files
are found. Depending on the quality of the Internet connection, this
may take a while.
Again, https repositories are contacted directly, cached only on the
local host.
roles_t/abbey-cloister/tasks/main.yml
---
- name: Use the local Apt package cache.
  become: yes
  copy:
    content: >
      Acquire::http::Proxy "http://apt-cacher.birchwood.private.:3142";
      Acquire::https::Proxy "DIRECT";
    dest: /etc/apt/apt.conf.d/01proxy
    mode: u=rw,g=r,o=r
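The effect of the drop-in can be checked with apt-config, which is
part of Apt itself. The sketch below (an illustration, not part of
the role) exercises a scratch copy of the file rather than the
installed /etc/apt/apt.conf.d/01proxy.

```shell
# Write a scratch copy of the drop-in, then ask apt-config which
# proxy Apt would use for http.  APT_CONFIG points apt-config at the
# scratch file instead of /etc/apt.
cat >01proxy <<'EOF'
Acquire::http::Proxy "http://apt-cacher.birchwood.private.:3142";
Acquire::https::Proxy "DIRECT";
EOF
if command -v apt-config >/dev/null; then
    APT_CONFIG=$PWD/01proxy apt-config dump Acquire::http::Proxy
fi
```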
6.2. Configure Cloister NRPE
Each cloistered host is a small institute campus host and thus is
already running an NRPE server (a NAGIOS Remote Plugin Executor
server) with a custom inst_sensors monitor (described in Configure
NRPE of A Small Institute). The abbey adds one complication: yet
another check_sensors variant, abbey_pisensors, installed on
Raspberry Pis (architecture aarch64) only.
roles_t/abbey-cloister/tasks/main.yml
- name: Install abbey_pisensors NAGIOS plugin.
  become: yes
  copy:
    src: ../abbey-core/files/abbey_pisensors
    dest: /usr/local/sbin/abbey_pisensors
    mode: u=rwx,g=rx,o=rx
  when: ansible_architecture == 'aarch64'

- name: Configure NAGIOS command.
  become: yes
  copy:
    content: |
      command[abbey_pisensors]=/usr/local/sbin/abbey_pisensors
    dest: /etc/nagios/nrpe.d/abbey.cfg
  when: ansible_architecture == 'aarch64'
  notify: Reload NRPE server.
roles_t/abbey-cloister/handlers/main.yml
- name: Reload NRPE server.
  become: yes
  systemd:
    service: nagios-nrpe-server
    state: reloaded
6.3. Install Munin Node
Each cloistered host is a Munin node.
roles_t/abbey-cloister/tasks/main.yml
- name: Install Munin Node.
  become: yes
  apt: pkg=munin-node

- name: Add {{ ansible_user }} to munin group.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: munin
Again, one of our cloistered hosts has sensors producing unfortunate
measurements. The next task configures Anoat's libsensors to ignore
them.
roles_t/abbey-cloister/tasks/main.yml
- name: Configure {{ inventory_hostname }} sensors(1).
  become: yes
  copy:
    content: |
      chip "iwlwifi_1-virtual-0"
          ignore temp1
      chip "acpitz-acpi-0"
          ignore temp1
    dest: /etc/sensors.d/site.conf
  when: inventory_hostname == 'anoat'
6.4. Install Emacs
The monks of the abbey are masters of the staff and Emacs.
roles_t/abbey-cloister/tasks/main.yml
- name: Install monastic software.
become: yes
apt: pkg=emacs
7. The Abbey Weather Role
Birchwood Abbey now uses Home Assistant to record and display weather data from an Ecowitt GW2001 IoT gateway connecting wirelessly to a WS90 (7 function weather station) and a couple WN31s (temp/humidity sensors).
The configuration of the GW2001 IoT hub involved turning off the Wi-Fi access point, and disabling unused channels. The hub reports the data from all sensors in range, anyone's sensors. These new data sources are noticed and recorded by Home Assistant automatically as similarly equipped campers come and go. Disabling unused channels helps avoid these distractions.
The configuration of Home Assistant involved installing the Ecowitt "integration". This was accomplished by choosing "Settings", then "Devices & services", then "Add Integration", and searching for "Ecowitt". Once installed, the integration created dozens of weather entities which were organized into an "Abbey" dashboard.
8. The Abbey DVR Role
The abbey uses AgentDVR to record video from PoE IP HD security cameras. It is installed and configured as described here.
8.1. AgentDVR Installation
AgentDVR is installed at the abbey according to the iSpy web site's latest(?) instructions. The "download" button on iSpy's Download page (https://www.ispyconnect.com/download), when "Agent DVR - Linux/ macOS/ RPi" is chosen, suggests the following command lines (the second of which is broken across three lines).
sudo apt-get install curl
bash <(curl -s "https://raw.githubusercontent.com/\
ispysoftware/agent-install-scripts/main/v2/\
install.sh")
Before executing these commands, Ansible is enlisted to make certain preparations.
8.1.1. AgentDVR Installation Preparation
AgentDVR runs in the abbey as a system user, agentdvr, which
installs and runs the service. Though a system user, the account
gets a home directory, /home/agentdvr/, in which to install
AgentDVR, and a login shell, /bin/bash. This much Ansible can do in
preparation.
./abbey config dvrs
After the agentdvr account is created, it is temporarily authorized
to run a handful of system commands (as root!). This small set is
sufficient if the offer to create the system service is declined.
The following commands create this authorization in ~/01agentdvr,
then validate and install it in /etc/sudoers.d/01agentdvr. Such
caution is taken because a syntax error anywhere in /etc/sudoers.d/
can make the sudo command inoperative, cutting off access to all
elevated privileges until a "rescue" (involving a reboot) is
performed.
echo "agentdvr ALL=(root) NOPASSWD: /bin/systemctl,/bin/apt-get,\
/sbin/adduser,/sbin/usermod" >~/01agentdvr
sudo chown root:root ~/01agentdvr
sudo chmod 440 ~/01agentdvr
sudo visudo --check --owner --perms ~/01agentdvr
sudo mv ~/01agentdvr /etc/sudoers.d/
8.1.2. AgentDVR Installation Execution
With the above preparations, the system administrator can get a
shell session under the agentdvr account to run iSpy's installation
script in the empty /home/agentdvr/ directory.
sudo apt-get install curl
sudo -u agentdvr bash <(curl -s "https:.../install.sh")
The script creates the /home/agentdvr/AgentDVR/ directory, and
offers to install a system service. The offer is declined. Instead,
Ansible is run again.
8.1.3. AgentDVR Installation Completion
When Ansible is run a second time, after the installation script, it
sees the new /home/agentdvr/AgentDVR/ directory and creates (and
starts) the new system service.
./abbey config dvrs
Also after the installation, the system administrator revokes the
agentdvr account's authorization to modify packages and accounts.
sudo rm /etc/sudoers.d/01agentdvr
8.2. Create User agentdvr
AgentDVR runs as the system user agentdvr, which is created here.
roles_t/abbey-dvr/tasks/main.yml
---
- name: Create agentdvr.
  become: yes
  user:
    name: agentdvr
    system: yes
    home: /home/agentdvr
    shell: /bin/bash
    append: yes
    groups: video

- name: Add {{ ansible_user }} to agentdvr group.
  become: yes
  user:
    name: "{{ ansible_user }}"
    append: yes
    groups: agentdvr

- name: Create /home/agentdvr/.
  become: yes
  file:
    path: /home/agentdvr
    state: directory
    owner: agentdvr
    group: agentdvr
    mode: u=rwx,g=rwxs,o=rx
8.3. Test For AgentDVR/
The following task probes for the /home/agentdvr/AgentDVR/
directory, to detect that the build/install process has completed.
It registers the result in the agentdvr variable. Several of the
remaining installation steps are skipped unless
agentdvr.stat.exists.
roles_t/abbey-dvr/tasks/main.yml
- name: Test for AgentDVR directory.
stat:
path: /home/agentdvr/AgentDVR
register: agentdvr
- debug:
msg: "/home/agentdvr/AgentDVR/ does not yet exist"
when: not agentdvr.stat.exists
8.4. Create AgentDVR Service
This service definition came from the template downloaded (from
here) by the installer, specifically the linux_setup2.sh script
downloaded by install.sh.
roles_t/abbey-dvr/tasks/main.yml
- name: Install AgentDVR.service.
  become: yes
  copy:
    content: |
      [Unit]
      Description=AgentDVR

      [Service]
      WorkingDirectory=/home/agentdvr/AgentDVR
      ExecStart=/home/agentdvr/AgentDVR/Agent

      # fix memory management issue with dotnet core
      Environment="MALLOC_TRIM_THRESHOLD_=100000"

      # to query logs using journalctl, set a logical name here
      SyslogIdentifier=AgentDVR

      User=agentdvr

      # ensure the service automatically restarts
      Restart=always

      # amount of time to wait before restarting the service
      RestartSec=5

      [Install]
      WantedBy=multi-user.target
    dest: /etc/systemd/system/AgentDVR.service
  when: agentdvr.stat.exists

- name: Enable/Start AgentDVR.service.
  become: yes
  systemd:
    service: AgentDVR
    enabled: yes
    state: started
  when: agentdvr.stat.exists
8.5. Create AgentDVR Storage
The abbey uses a separate volume to store surveillance recordings,
lest the DVR program fill the root file system. The volume is
mounted at /DVR/. The following tasks create /DVR/AgentDVR/video/
(whether a large volume is mounted there or not!) with appropriate
permissions so that the instructions for configuring a default
storage location do not fail.
roles_t/abbey-dvr/tasks/main.yml
- name: Create /DVR/AgentDVR/.
  become: yes
  file:
    state: directory
    path: /DVR/AgentDVR
    owner: agentdvr
    group: agentdvr
    mode: u=rwx,g=rxs,o=

- name: Create /DVR/AgentDVR/video/.
  become: yes
  file:
    state: directory
    path: /DVR/AgentDVR/video
    owner: agentdvr
    group: agentdvr
    mode: u=rwx,g=rxs,o=
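The g=rxs mode above sets the setgid bit, so files created under
/DVR/AgentDVR/ stay in the agentdvr group. A minimal sketch of what
that mode produces, using a scratch directory in place of the real
one:

```shell
# Apply the task's mode to a scratch directory and show the result.
# drwxr-s--- : owner rwx, group r-x with setgid, no access for other.
mkdir -p scratch
chmod u=rwx,g=rxs,o= scratch
ls -ld scratch | cut -c1-10
```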
8.6. Configure IP Cameras
A new security camera is set up as described in Cloistering, after
which the camera should be accessible by name on the abbey networks.
Assuming ping -c1 new works, the camera's web interface will be
accessible at http://new/.
The administrator uses this to make the following changes.
- Set a password on the administrative account.
- Create an unprivileged user with a short password, e.g. user:blah.
- Set the frame rate to 5fps.  The abbey prefers HD resolution and
  long duration logs, thus fewer frames per second.
8.7. Configure AgentDVR's Cameras
After Ansible has configured and started the AgentDVR service, its
web UI will be available at http://core:8090/. The initial Live View
will be empty, overlaid with instructions to click the edit button.
The wizard will ask for each device's general configuration parameters. The abbey uses SV3C IP cameras with a full HD stream as well as a standard definition "vice stream". AgentDVR wants both.
- General:
- On: yes
- Name: Outside
- Source Type: Network Camera
- Username: user
- Password: blah
- Live URL: rtsp://new.birchwood.private:554/12
- Record URL: rtsp://new.birchwood.private:554/11
Additional cameras are added via the "New Device" item in the Server Menu. This step is completed when all cameras are streaming to AgentDVR's Live View.
8.8. Configure AgentDVR's Default Storage
AgentDVR's web interface is also used to configure a default storage
location. From the Server Menu (upper left), the administrator chooses
Configuration Settings, the Storage tab, the Configure button, and the
add (plus) button. The storage location is set to /DVR/AgentDVR/
and the "default" toggle is set. Several OK buttons then need to be
pressed before the task is complete.
8.9. Configure AgentDVR's Recordings
After a default storage location has been configured, AgentDVR's cameras can begin recording. The "Edit Devices" dialog lists (via the "Edit Devices" item in the Server Menu) the configured cameras. The edit buttons lead to the device settings where the following parameters are set (in the Recording and Storage tabs).
- Recording:
- Mode: Constant
- Encoder: Raw Record Stream
- Max record time: 900 (15 minutes)
- Storage:
- Location: DVR/AgentDVR
- Folder: Outside
- Storage Management:
- On: yes
- Max Size: 0 (unlimited)
- Max Age: 168 (7 days)
9. The Abbey TVR Role
The abbey has a few TV tuners and a subscription to Schedules Direct for North American TV broadcast schedules. It uses one (master) MythTV server and its MythWeb interface to make and serve recordings of area broadcasts.
The Abbey TVR Role installs the MythTV backend and the MythWeb web
interface on the master server. It configures the Apache web server
to serve MythWeb pages at e.g. http://new/mythweb/.
9.1. Building MythTV and MythWeb
Neither Debian nor the MythTV project provide binary packages of MythTV and MythWeb. The project recommends building from source according to their Build from Source wiki page. To do this, the target host will need several dozen "developer" packages installed. Thus the abbey's TVR role proceeds in two phases.
In the first phase, the MythTV project's Ansible code, in
mythtv-ansible/, is used to assemble a list of packages needed
during the build. The packages are installed and the rest of the
role's tasks are skipped. This allows the administrator to manually
build and install MythTV, creating /usr/local/bin/mythtv-setup. The
administrator will also download and install MythWeb before running
the TVR role again for its second phase. The administrator will not
be able to run mythtv-setup before completing the second phase.
In the second phase, the role finds mythtv-setup has been installed
on the target host and so proceeds with the "Post-installation
tasks" section of the wiki page. This still leaves a number of
manual steps to be performed with the mythtv-setup program, e.g.
configuring a video source and capture card, after which the backend
can be started.
9.2. TVR Machine Setup
A new TVR machine needs only Cloistering to prepare it for Ansible.
As part of that process, it should be added to the tvrs group in the
hosts file. An existing server can become a TVR machine simply by
adding it to the tvrs group.
9.3. Include Abbey Variables
Private variables in private/vars-abbey.yml are needed, as in the
abbey-core role. The file path is relative to the playbook's
directory, playbooks/.
roles_t/abbey-tvr/tasks/main.yml
---
- name: Include private abbey variables.
  include_vars: ../private/vars-abbey.yml
9.4. Install MythTV Build Requisites
A number of developer packages are needed to build MythTV. The wiki
page recommends Ansible playbooks to assemble the appropriate list
of package names (several dozen of them) depending on the target OS
version. The playbooks are in https://github.com/MythTV/ansible
which contains a README.md.
The instructions in the README.md are to clone the repository and
run sudo ansible-playbook -i hosts qt5.yml on the build machine.
However the abbey prefers to keep the Ansible code on an
administrator's machine with the rest of the abbey's roles. The
following commands were used to create a mythtv-ansible/
subdirectory. (A git pull origin command in this subdirectory might
be appropriate to download updates.)
git clone https://github.com/MythTV/ansible mythtv-ansible
cd mythtv-ansible
git checkout fixes/32
The abbey-tvr role uses a couple of tasks files in mythtv-ansible/
directly, bypassing the inventories, playbooks and roles, after
"fixing" the final apt tasks by adding become: yes. After making
these edits, the git diff command should produce something like the
following.
diff --git a/roles/mythtv-deb/tasks/main.yml b/roles/mythtv-deb/tasks/main.yml
index 868c9b7..3dcf115 100644
--- a/roles/mythtv-deb/tasks/main.yml
+++ b/roles/mythtv-deb/tasks/main.yml
@@ -366,6 +366,7 @@
         '{{ lookup("flattened", deb_pkg_lst) }}'
 
 - name: install packages
+  become: yes
   apt:
     name: '{{ lookup("flattened", deb_pkg_lst ) }}'
diff --git a/roles/qt5/tasks/qt5-deb.yml b/roles/qt5/tasks/qt5-deb.yml
index 7a1a0bc..26ba782 100644
--- a/roles/qt5/tasks/qt5-deb.yml
+++ b/roles/qt5/tasks/qt5-deb.yml
@@ -25,6 +25,7 @@
         '{{ lookup("flattened", deb_pkg_lst) }}'
 
 - name: install deb qt5 packages
+  become: yes
   apt:
     name: '{{ lookup("flattened", deb_pkg_lst ) }}'
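The two become: yes edits can also be scripted rather than made by
hand. A minimal sketch (assuming GNU sed), using a scratch file in
place of the two tasks files named in the diff:

```shell
# A scratch stand-in for roles/mythtv-deb/tasks/main.yml.
cat >tasks.yml <<'EOF'
- name: install packages
  apt:
    name: '{{ lookup("flattened", deb_pkg_lst ) }}'
EOF
# Append "  become: yes" after each "- name: install ..." task line.
# The backslash after "a" preserves the leading spaces (GNU sed).
sed -i '/^- name: install/a\  become: yes' tasks.yml
cat tasks.yml
```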
roles_t/abbey-tvr/tasks/main.yml
- name: Install MythTV runtime requisites.
become: yes
apt:
pkg: [ mariadb-server, xmltv ]
- name: Install MythTV build requisites.
include_tasks: "{{ item }}"
loop:
- ../mythtv-ansible/roles/mythtv-deb/tasks/main.yml
- ../mythtv-ansible/roles/qt5/tasks/qt5-deb.yml
The tasks above install runtime and compile-time requisites during
the "first" run of e.g. ./abbey config new. The "first" run can be
repeated until successful. The remaining tasks are skipped until
MythTV is built and installed.
9.5. Build and Install MythTV
After a successful "first" run of e.g. ./abbey config new, the
target machine is prepared to build (and install) MythTV. The
following commands are used.
cd /usr/local/src/
git clone https://github.com/MythTV/mythtv
cd mythtv/
git checkout fixes/32
cd mythtv/
./configure
make
sudo make install
The make install command does not need to be run as root if bin/,
lib/, include/ and share/ in /usr/local/ and dist-packages/ in
/usr/local/lib/python3.9/ on the target machine are writable by the
builder.
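Whether an unprivileged make install would succeed can be checked
with a quick writability probe of those trees. A sketch (the output
file name install-dirs.txt is arbitrary):

```shell
# Report, for each /usr/local/ tree "make install" writes, whether
# the current user can write to it.
for d in /usr/local/bin /usr/local/lib /usr/local/include \
         /usr/local/share; do
    if [ -w "$d" ]; then
        echo "$d: writable"
    else
        echo "$d: root needed"
    fi
done | tee install-dirs.txt
```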
The following task probes for the mythtv-setup program, installed in
/usr/local/bin/, to detect that the build/install process has
completed. It registers the result in the mythtv variable. Several
of the remaining installation steps are skipped unless
mythtv.stat.exists.
roles_t/abbey-tvr/tasks/main.yml
- name: Test for MythTV binary packages.
stat:
path: /usr/local/bin/mythtv-setup
register: mythtv
- debug:
msg: "/usr/local/bin/mythtv-setup does not yet exist"
when: not mythtv.stat.exists
9.6. Create MythTV User
MythTV Backend needs to run as its own user: mythtv.
roles_t/abbey-tvr/tasks/main.yml
- name: Create mythtv.
  become: yes
  user:
    name: mythtv
    system: yes
9.7. Create MythTV DB
MythTV's MariaDB database would be created by the following task, if
the mysql_db Ansible module supported check_implicit_admin.
- name: Create MythTV DB.
  become: yes
  mysql_db:
    check_implicit_admin: yes
    name: mythconverg
    collation: utf8mb4_general_ci
    encoding: utf8mb4
Unfortunately it does not currently, yet the institute prefers the
more secure Unix socket authentication method. Rather than create a
privileged DB user, the mythconverg database is created manually
(below).
9.8. Create MythTV DB User
The DB user's password is taken from the mythtv_dbpass variable,
kept in private/vars-abbey.yml, and generated e.g. with the
apg -n 1 -x 12 -m 12 command.
private_ex/vars-abbey.yml
mythtv_dbpass: daJkibpoJkag
The following task would create the DB user (mysql_user supports
check_implicit_admin) but the mythconverg database was not created
above.
- name: Create MythTV DB user.
  become: yes
  mysql_user:
    check_implicit_admin: yes
    name: mythtv
    password: "{{ mythtv_dbpass }}"
    priv: "mythconverg.*:all"
9.9. Manually Create MythTV DB and DB User
The MythTV database and database user are created manually with the
following SQL (with the mythtv_dbpass spliced in). The SQL commands
are entered at the SQL prompt of the sudo mysql command, or perhaps
piped into the command.
create database mythconverg
    character set utf8mb4 collate utf8mb4_general_ci;
create user 'mythtv'@'%'
    identified by '{{ mythtv_dbpass }}';
create user 'mythtv'@'localhost'
    identified by '{{ mythtv_dbpass }}';
grant all privileges on mythconverg.* to 'mythtv'@'%'
    with grant option;
grant all privileges on mythconverg.* to 'mythtv'@'localhost'
    with grant option;
flush privileges;
exit;
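One way to do the splicing is with sed, writing the finished SQL to
a file that can then be piped into sudo mysql. A sketch, using the
example password from private_ex/vars-abbey.yml (not a real secret):

```shell
# Substitute the password for the {{ mythtv_dbpass }} placeholder,
# writing the ready-to-run SQL to mythconverg.sql.
MYTHTV_DBPASS=daJkibpoJkag
sed "s/{{ mythtv_dbpass }}/$MYTHTV_DBPASS/g" >mythconverg.sql <<'EOF'
create database mythconverg
    character set utf8mb4 collate utf8mb4_general_ci;
create user 'mythtv'@'%' identified by '{{ mythtv_dbpass }}';
create user 'mythtv'@'localhost' identified by '{{ mythtv_dbpass }}';
EOF
# sudo mysql <mythconverg.sql    # run this on the TVR machine
```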
9.10. Load DB Timezone Info
Starting with MythTV version 0.26, the time zone tables must be
loaded into MySQL. The MariaDB installed by Debian 11 seems to need
this too. The test SQL produced NULL.
SELECT CONVERT_TZ(NOW(), 'SYSTEM', 'Etc/UTC');
After running the following command line, the test SQL produced
e.g. 2022-09-13 20:15:41.
mysql_tzinfo_to_sql /usr/share/zoneinfo | sudo mysql mysql
9.11. Create MythTV Backend Service
This task installs the mythtv-backend.service file.
roles_t/abbey-tvr/tasks/main.yml
- name: Create mythtv-backend service.
  become: yes
  copy:
    content: |
      [Unit]
      Description=MythTV Backend
      Documentation=https://www.mythtv.org/wiki/Mythbackend
      After=mysql.service network.target

      [Service]
      User=mythtv
      ExecStartPre=/bin/sleep 30
      #TimeoutStartSec=infinity
      ExecStart=/usr/local/bin/mythbackend --quiet --syslog local7
      StartLimitBurst=10
      StartLimitInterval=10m
      Restart=on-failure
      RestartSec=1

      [Install]
      WantedBy=multi-user.target
    dest: /etc/systemd/system/mythtv-backend.service
  when: mythtv.stat.exists
  notify: Reload Systemd.
roles_t/abbey-tvr/handlers/main.yml
---
- name: Reload Systemd.
  become: yes
  command: systemctl daemon-reload
9.12. Set PHP Timezone
This task sets PHP's timezone. If it is unset, MythTV's backend logs bitter complaints.
roles_t/abbey-tvr/tasks/main.yml
- name: Configure PHP date.timezone.
  become: yes
  lineinfile:
    regexp: date.timezone ?=
    line: date.timezone = {{ lookup('file', '/etc/timezone') }}
    path: "{{ item }}"
  loop:
  - /etc/php/8.2/cli/php.ini
  - /etc/php/8.2/apache2/php.ini
  when: mythtv.stat.exists
  notify: Restart Apache2.
roles_t/abbey-tvr/handlers/main.yml
- name: Restart Apache2.
  become: yes
  systemd:
    service: apache2
    state: restarted
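For reference, the edit the lineinfile task makes can be sketched
with sed on a scratch php.ini (Etc/UTC standing in for the contents
of /etc/timezone):

```shell
# A scratch php.ini with the stock commented-out setting, as shipped
# by Debian, stands in for /etc/php/8.2/cli/php.ini.
printf ';date.timezone =\n' >php.ini
# Replace the whole line, as lineinfile does when its regexp matches.
sed -i 's|^;*date.timezone *=.*|date.timezone = Etc/UTC|' php.ini
cat php.ini
```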
9.13. Create MythTV Storage Area
The backend does not have a default storage area for its recordings.
A path to an appropriate directory must be set with the mythtv-setup
program (as described below). The abbey uses
/home/mythtv/Recordings/ for MythTV's default storage. This task
creates that directory and ensures it has appropriate permissions.
roles_t/abbey-tvr/tasks/main.yml
- name: Create MythTV storage area.
  become: yes
  file:
    state: directory
    dest: /home/mythtv/Recordings
    owner: mythtv
    group: mythtv
    mode: u=rwx,g=rwx,o=rx
9.14. Configure MythTV Backend
With MythTV built and installed, and the post-installation tasks
addressed, MythTV Setup (the mythtv-setup program) can be run. It
must be run by the mythtv user, whose home directory will contain
the MythTV (and XMLTV) configuration files. The program is best run
remotely (unless there is a graphical desktop on the server) by a
command like ssh -X mythtv@new mythtv-setup.

Patience is required. The mythtv-setup program was not written for
X11 and the X11 adapter has a difficult job. It is often hard to
determine which button is selected or how to proceed (sometimes
simply with ESC!). Sticking to the arrow, enter and escape keys best
emulates a TV remote (for which the interface was designed).
In MythTV Setup:
- In the initial MythTV Startup Status ("Unable to connect to
  Database."), use the "Setup" button to get to "Database
  Configuration".  Leave the default hostname (localhost), port
  (3306), database name (mythconverg) and user (mythtv).  Enter the
  value of mythtv_dbpass (in private/vars-abbey.yml) for the
  password.  Leave the rest of the settings at their default values.
  Leave "Database Configuration" by pressing Escape and confirming
  "Save and Exit".
- Once in MythTV Setup proper, you will see the main menu.  Scroll
  down and choose "Storage Directories".  In the Local Storage
  Groups dialog, add to the "Local 'Default' Storage Group
  Directories" a new directory: /home/mythtv/Recordings.
9.15. Configure Tuner
The abbey has a Silicon Dust Homerun HDTV Duo (with two tuners). It
is set up as described in Cloistering, after which the tuner is
accessible by name (e.g. new
) on the cloister network. Assuming
ping -c1 new
works, the tuner should be accessible via the
hdhomerun_config_gui
command, a graphical interface contributed to
Debian by Silicon Dust and found in the hdhomerun-config-gui
package. The program, run with the command hdhomerun_config_gui,
will broadcast on the localnet to find any Homeruns there, but the new
tuner's domain name or IP address can also be entered.
9.16. Add HDHomerun and Mr.Antenna
In MythTV Setup:
- Choose "Capture cards".
- Choose "(Add Capture Card)", then the "New Capture Card".
- Choose Card Type and select "HDHomeRun Networked Tuner".
- Press the right arrow key to see card type parameters. Choose the tuner's address, which should be listed assuming the tuner and TVR are on the same subnet (e.g. the private Ethernet).
- Save and Exit (via Escape key).
- Choose "Video sources".
- Choose "(New Video Source)", then the "New Video Source".
- Enter video source name "Mr.Antenna".
- Choose listings grabber "Schedules Direct JSON API (xmltv)".
- Save and Exit.
- Choose "Input Connections".
- Choose the HDHomeRun.
- Choose video source "Mr.Antenna".
- Save and Exit.
- Choose "Capture cards".
- Add a second HDHomeRun as above.
- Save and Exit.
- Choose "Input connections".
- Connect the second HDHomeRun to Mr.Antenna as above.
- Save and Exit.
- Exit MythTV Setup or continue directly to Scan for New Channels. In
  any case, do not run mythfilldatabase.
9.17. Scan for New Channels
In MythTV Setup:
- Choose "Channel Editor".
- Navigate to the "Delete" button, leaving Video Source All (right and down and down, or left six times, or sump'n). Confirm deletion of all channels.
- Choose video source Mr.Antenna, then Channel Scan. Scroll down to the "scan" button and choose it (select and Enter).
- Choose "Insert All" when the scan is complete and the count of channels is presented. Delete All unused transports.
- Save and Exit from the scan. Exit from the channel editor.
- Exit MythTV Setup. Do not run mythfilldatabase.
9.18. Configure XMLTV
The xmltv
package, specifically its tv_grab_zz_sdjson
program, is
used to download broadcast listings from Schedules Direct. The
program is run by the mythtv
user (like mythtv-setup
) and is
initially configured (the first time) using its --configure
option.
tv_grab_zz_sdjson --configure
cp ~/.xmltv/tv_grab_zz_sdjson.conf ~/.mythtv/Mr.Antenna.xmltv
The --configure
command above prompts with many questions and
creates ~/.xmltv/tv_grab_zz_sdjson.conf
, which is copied to
~/.mythtv/Mr.Antenna.xmltv
where mythfilldatabase
will find it.
Afterwards any re-configuration should use the following command.
tv_grab_zz_sdjson --configure \
--config-file ~/.mythtv/Mr.Antenna.xmltv
Here is a transcript of a session with tv_grab_zz_sdjson
. Note that
the list of "inputs" available in a postal code typically ends with
the OTA (over the air) broadcasts.
$ tv_grab_zz_sdjson --configure --config-file .mythtv/Mr.Antenna.xml
Cache file for lineups, schedules and programs.
Cache file: [/home/mythtv/.xmltv/tv_grab_zz_sdjson.cache]
If you are migrating from a different grabber selecting an alternate
channel ID format can make the migration easier.
Select channel ID format:
0: Default Format (eg: I12345.json.schedulesdirect.org)
1: tv_grab_na_dd Format (eg: I12345.labs.zap2it.com)
2: MythTV Internal DD Grabber Format (eg: 12345)
Select one: [0,1,2 (default=0)]
As the JSON data only includes the previously shown date normally th
XML output should only have the date. However some programs such as
older versions of MythTV also need a time.
Select previously shown format:
0: Date Only
1: Date And Time
Select one: [0,1 (default=0)]
Schedules Direct username.
Username: USERNAME
Schedules Direct password.
Password: PASSWORD
** POST https://json.schedulesdirect.org/20141201/token ==> 200 OK
** GET https://json.schedulesdirect.org/20141201/status ==> 200 OK (
** GET https://json.schedulesdirect.org/20141201/lineups ==> 200 OK
This step configures the lineups enabled for your Schedules Direct
account. It impacts all other configurations and programs using the
JSON API with your account. A maximum of 4 lineups can by added to
your account. In a later step you will choose which lineups or
channels to actually use for this configuration.
Current lineups enabled for your Schedules Direct account:
#. Lineup ID     | Name                         | Location | Transport
1. USA-OTA-57719 | Local Over the Air Broadcast | 57719    | Antenna
Edit account lineups: [continue,add,delete (default=continue)]
Choose whether you want to include complete lineups or individual
channels for this configuration.
Select mode: [lineups,channels (default=lineups)]
** GET https://json.schedulesdirect.org/20141201/lineups ==> 200 OK
Choose lineups to use for this configuration.
USA-OTA-57719 [yes,no,all,none (default=no)] all
Once configured, the mythfilldatabase
program should be able to use
tv_grab_zz_sdjson
to connect to Schedules Direct and download the
chosen line-up. However mythfilldatabase
is happiest when the
backend is running, so it is not run until then.
9.19. Debug XMLTV
If the mythfilldatabase
command fails or expected listings do not
appear, more information is available by adding the --verbose
option. The --help
option also reveals much, including a --manual
option for "interactive configuration".
sudo -H -u mythtv mythfilldatabase --verbose
The command might, for example, show that it is failing to run a
tv_grab_zz_sdjson
command like the following.
nice tv_grab_zz_sdjson \
    --config-file '/home/mythtv/.mythtv/Mr.Antenna.xmltv' \
    --output /tmp/myths5Sq35 --quiet
Running a similar command (without --quiet
) might be more revealing.
sudo -H -u mythtv \
    tv_grab_zz_sdjson \
    --config-file '/home/mythtv/.mythtv/Mr.Antenna.xmltv' \
    --output /tmp/mythFUBAR
9.20. Configure MythTV Backend Logging
The abbey directs MythTV log messages to /var/log/mythtv.log
(and
away from /var/log/syslog
) and rotates the log file.
roles_t/abbey-tvr/tasks/main.yml
- name: Install /etc/rsyslog.d/40-mythtv.conf.
  become: yes
  copy:
    content: |
      :msg,startswith," myth" -/var/log/mythtv.log
      & stop
    dest: /etc/rsyslog.d/40-mythtv.conf

- name: Install /etc/logrotate.d/mythtv.
  become: yes
  copy:
    content: |
      /var/log/mythtv.log {
          daily
          size=10M
          rotate 7
          notifempty
          copytruncate
          missingok
          postrotate
              reload rsyslog >/dev/null 2>&1 || true
          endscript
      }
    dest: /etc/logrotate.d/mythtv
9.21. Start MythTV Backend
After configuring with mythtv-setup
as discussed above, start and
enable (at boot time) the mythtv-backend
service.
sudo systemctl enable mythtv-backend
sudo systemctl start mythtv-backend
systemctl status -l mythtv-backend
sudo -u mythtv mythfilldatabase
9.22. Install MythWeb
MythWeb, like MythTV, is installed from a Git repository. The
following commands create /usr/local/share/mythtv/mythweb/
by
cloning the MythWeb repository in /usr/local/src/mythweb/
, checking
out the appropriate branch, and copying the appropriate portion.
cd /usr/local/src/
git clone https://github.com/MythTV/mythweb
( cd mythweb/; git checkout fixes/32 )
rsync -rC mythweb /usr/local/share/mythtv/
The following tasks take care of the rest of the installation.
roles_t/abbey-tvr/tasks/main.yml
- name: Install MythWeb requisites.
become: yes
apt:
pkg: [ apache2, php, php-mysql ]
- name: Install MythWeb in web server DocumentRoot.
  become: yes
  file:
    state: link
    src: /usr/local/share/mythtv/mythweb
    dest: /var/www/html/mythweb

- name: Configure MythWeb data directory.
  become: yes
  file:
    state: directory
    dest: /var/www/html/mythweb/data
    group: www-data
    mode: u=rwx,g+rwx,o=rx
- name: Install MythWeb configuration.
become: yes
template:
src: mythweb.conf.j2
dest: /etc/apache2/sites-available/mythweb.conf
notify: Restart Apache2.
- name: Enable MythWeb configuration.
become: yes
command:
cmd: a2ensite -q mythweb
creates: /etc/apache2/sites-enabled/mythweb.conf
notify: Restart Apache2.
roles_t/abbey-tvr/templates/mythweb.conf.j2
#
# Apache configuration directives for MythWeb.
#
# Note that this file is maintained by the network administration.

<Directory "/var/www/html/mythweb/data">
    # For Apache 2.2
    #Options -All +FollowSymLinks +IncludesNoExec
    # For Apache 2.4+
    Options +FollowSymLinks +IncludesNoExec
</Directory>

<Directory "/var/www/html/mythweb" >

    <Files mythweb.*>
        setenv db_server "127.0.0.1"
        setenv db_name "mythconverg"
        setenv db_login "mythtv"
        setenv db_password "{{ mythtv_dbpass }}"
    </Files>

    <Files *.php>
        php_value file_uploads 0
        php_value allow_url_fopen On
        php_value zlib.output_handler Off
        php_value memory_limit 64M
        php_value max_execution_time 30
        php_value display_startup_errors On
        php_value display_errors On
    </Files>

    RewriteEngine on
    RewriteRule \
        ^(css|data|images|js|themes|skins|README|INSTALL|[a-z_]+\.(php|pl))(/|$) \
        - [L]
    RewriteRule ^(pl(/.*)?)$ mythweb.pl/$1 [QSA,L]
    RewriteRule ^(.+)$ mythweb.php/$1 [QSA,L]
    RewriteRule ^(.*)$ mythweb.php [QSA,L]

    AllowOverride All
    Options FollowSymLinks
    AddType video/nuppelvideo .nuv
    AddType image/x-icon .ico

    <IfModule deflate_module>
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE application/x-javascript
    </IfModule>

    <IfModule headers_module>
        Header append Vary User-Agent env=!dont-vary
    </IfModule>

    <Files *.pl>
        SetHandler cgi-script
        Options +ExecCGI
    </Files>

</Directory>
9.23. Change Broadcast Area
The abbey changes location almost weekly, so its HDTV broadcast area changes frequently. At the start of a long stay the administrator uses the MythTV Setup program to scan for the new area's channels, as described in Scan for New Channels.
To change MythTV's "listings", the administrator needs the new area's
postal code and the username and password of the abbey's Schedules
Direct account. The administrator then runs the tv_grab_zz_sdjson
program as user mythtv
.
tv_grab_zz_sdjson --configure \
--config-file ~/.mythtv/Mr.Antenna.xmltv
The program will prompt for the zip code and offer a list of "inputs" available in that area, as described in Configure XMLTV.
Then the administrator can re-start the backend.
sudo systemctl start mythtv-backend
And the mythtv
account can run mythfilldatabase
.
mythfilldatabase
10. The Ansible Configuration
The abbey's Ansible configuration, like that of A Small Institute, is
kept on an administrator's notebook. The private SSH key that allows
remote access to privileged accounts on all abbey servers is kept on
an encrypted, off-line volume plugged into the administrator's
notebook only when running ./abbey
commands.
The small institute provided examples of both public and private
variables. This document includes the abbey's actual public
variables, and examples of the private variables. As in A Small
Institute, this document's roles tangle into roles_t/
, separate from
the running (and perhaps recently debugged!) code in roles/
.
The configuration of a small institute is included as a git sub-module
in Institute/
. Its roles are included in the roles_path
setting
in ansible.cfg
. Its example hosts
inventory, and public/
and
private/
directories are not included, and are replaced by abbey
specific versions.
NOTE: if you have not read at least the Overview of A Small Institute, you are lost.
The Ansible configuration:
ansible.cfg
- The Ansible configuration file.
hosts
- The inventory of hosts.
playbooks/site.yml
- The play that assigns roles to hosts.
public/
- Variables, certificates.
public/vars.yml
- The institutional variables.
private/
- Sensitive variables, files, templates.
private/vars.yml
- Sensitive institutional variables.
private/vars-abbey.yml
- Sensitive liturgical variables.
roles/
- The running copy of roles_t/.
roles_t/
- The liturgical roles as tangled from this document.
Institute/roles/
- The running copy of Institute/roles_t/.
Institute/roles_t/
- The institutional roles as tangled from Institute/README.org.
The first three files in the list are included in this chapter. The
rest are built up piecemeal by (tangled from) this document,
README.org
, and Institute/README.org
.
10.1. ansible.cfg
This is much like the example (test) institutional configuration file,
except the roles are found in Institute/roles/
as well as roles/
.
ansible.cfg
[defaults]
interpreter_python=/usr/bin/python3
vault_password_file=Secret/vault-password
inventory=hosts
roles_path=roles:Institute/roles
10.2. hosts
hosts
all:
  vars:
    ansible_user: sysadm
    ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
  hosts:
    # The Main Servers: Front, Gate and Core.
    droplet:
      ansible_host: 159.65.75.60
      ansible_become_password: "{{ become_droplet }}"
    anoat:
      ansible_become_password: "{{ become_anoat }}"
    dantooine:
      ansible_become_password: "{{ become_dantooine }}"
    # Campus
    kamino:
      ansible_become_password: "{{ become_kamino }}"
    kessel:
      ansible_become_password: "{{ become_kessel }}"
    ord-mantell:
      ansible_become_password: "{{ become_ord_mantell }}"
    # Notebooks
    endor:
      ansible_become_password: "{{ become_endor }}"
    sullust:
      ansible_host: 127.0.0.1
      ansible_become_password: "{{ become_sullust }}"
      postfix_mydestination: >-
        sullust.birchwood.private sullust sullust.localdomain
        localhost.localdomain localhost
  children:
    front:
      hosts:
        droplet:
    gate:
      hosts:
        anoat:
    core:
      hosts:
        dantooine:
    campus:
      hosts:
        anoat:
        kamino:
        kessel:
        ord-mantell:
    dvrs:
      hosts:
        dantooine:
    tvrs:
      hosts:
        dantooine:
    webtvs:
      hosts:
        kamino:
        kessel:
        ord-mantell:
    notebooks:
      hosts:
        endor:
        sullust:
    builders:
      hosts:
        sullust:
        kamino:
10.3. playbooks/site.yml
This playbook provisions the entire network by applying first the institutional roles, then the liturgical roles.
playbooks/site.yml
---
- name: Configure All
  hosts: all
  roles: [ all ]

- name: Configure Front
  hosts: front
  roles: [ front, abbey-front ]

- name: Configure Gate
  hosts: gate
  roles: [ gate ]

- name: Configure Core
  hosts: core
  roles: [ core, abbey-core ]

- name: Configure Cloister
  hosts: campus
  roles: [ campus, abbey-cloister ]

- name: Configure DVRs
  hosts: dvrs
  roles: [ abbey-dvr ]

- name: Configure TVRs
  hosts: tvrs
  roles: [ abbey-tvr ]
11. The Abbey Commands
The ./abbey
script encodes the abbey's canonical procedures. It
includes The Institute Commands and adds a few abbey-specific
sub-commands.
11.1. Abbey Command Overview
Institutional sub-commands:
- config
- Check/Set the configuration of one or all hosts.
- new
- Create system accounts for a new member.
- old
- Disable system accounts for a former member.
- pass
- Set the password of a current member.
- client
- Produce an OpenVPN configuration (.ovpn) file for a member's device.
Liturgical sub-commands:
- tz
- Run timedatectl set-timezone on cloister servers.
- upgrade
- Run apt update; apt full-upgrade --autoremove on all hosts.
- reboots
- Look for /run/reboot* on all hosts.
- versions
- Report ansible_distribution, _distribution_version, and
  _architecture for all hosts.
11.2. Abbey Command Script
The script begins with the following prefix and trampolines.
abbey
#!/usr/bin/perl -w
#
# DO NOT EDIT.  This file was tangled from README.org.

use strict;

if (grep { $_ eq $ARGV[0] } qw(CA config new old pass client)) {
    exec "./Institute/inst", @ARGV;
}
The small institute's ./inst
command expects to be running in
Institute/
, not ./
, but it only references public/
, private/
,
Secret/
and playbooks/check-inst-vars.yml
, and will find the abbey
specific versions of these. The roles_path
setting in ansible.cfg
effectively merges the institutional roles into the distinctly named
abbey specific roles. The roles likewise reference files with
relative names, and will find the abbey specific private/
directory (named ../private/
relative to playbooks/
).
Ansible does not implement a playbooks_path
key, so the following
code block "duplicates" the action of the institute's
check-inst-vars.yml
.
playbooks/check-inst-vars.yml
- import_playbook: ../Institute/playbooks/check-inst-vars.yml
11.3. The Upgrade Command
The script implements an upgrade
sub-command that runs apt update
and apt full-upgrade --autoremove
on all abbey managed machines. It
recognizes an optional -n
flag indicating that the upgrade tasks
should only be checked. Any other (single, optional) argument must be
a limit pattern. For example:
./abbey upgrade
./abbey upgrade -n
./abbey upgrade core
./abbey upgrade -n core
./abbey upgrade '!front'
abbey
if ($ARGV[0] eq "upgrade") {
    shift;
    my @args = ( "-e", "\@Secret/become.yml" );
    if (defined $ARGV[0] && $ARGV[0] eq "-n") {
        shift;
        push @args, "--check", "--diff";
    }
    if (defined $ARGV[0]) {
        my $limit = $ARGV[0]; shift;
        die "illegal characters: $limit"
            if $limit !~ /^!?[a-z][-a-z0-9,!]+$/;
        push @args, "-l", $limit;
    }
    exec ("ansible-playbook", @args, "playbooks/upgrade.yml");
}
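The limit-pattern check in the script can be exercised with grep -E,
which accepts the same character-class syntax as the Perl regex; the
sample patterns below are illustrative:

```shell
# The same pattern the abbey script uses to vet ansible-playbook -l limits.
ok='^!?[a-z][-a-z0-9,!]+$'
for limit in core '!front' 'core,gate' 'rm -rf /'; do
    if printf '%s\n' "$limit" | grep -Eq "$ok"
    then echo "accept: $limit"
    else echo "reject: $limit"
    fi
done
# accepts core, !front and core,gate; rejects rm -rf /
```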
playbooks/upgrade.yml
- hosts: all
  tasks:
  - name: Upgrade packages.
    become: yes
    apt:
      update_cache: yes
      upgrade: full
      autoremove: yes
      purge: yes
      autoclean: yes
  - name: Check for /run/reboot-required.
    stat:
      path: /run/reboot-required
    no_log: true
    register: st
  - debug:
      msg: Reboot required.
    when: st.stat.exists
11.4. The Reboots Command
The script implements a reboots
sub-command that looks for
/run/reboot-required
on all abbey managed machines.
abbey
if ($ARGV[0] eq "reboots") {
    exec ("ansible-playbook", "-e", "\@Secret/become.yml",
          "playbooks/reboots.yml");
}
playbooks/reboots.yml
---
- hosts: all
  tasks:
  - stat:
      path: /run/reboot-required
    register: st
  - debug:
      msg: Reboot required.
    when: st.stat.exists
11.5. The Versions Command
The script implements a versions
sub-command that reports the
operating system version of all abbey managed machines.
abbey
if ($ARGV[0] eq "versions") {
    exec ("ansible-playbook", "-e", "\@Secret/become.yml",
          "playbooks/versarch.yml");
}
playbooks/versarch.yml
- hosts: all
  tasks:
  - debug:
      msg: >-
        {{ ansible_distribution }}
        {{ ansible_distribution_version }}
        {{ ansible_architecture }}
11.6. The TZ Command
The abbey changes location almost weekly, so its timezone changes occasionally. Droplet does not move. Gate and other simple servers are kept in UTC. Core, the DVRs, TVRs, Home Assistant and the desktops all want updating to the current local timezone. Home Assistant and the desktops are managed manually, but the rest can all be updated using Ansible.
The tz
sub-command runs the timezone.yml
playbook, which uses the
current timezone/city on the administrator's notebook and updates
Core, the DVRs and TVRs. Each runs timedatectl set-timezone
and
restarts the affected services.
This is an experimental playbook until it is used/tested with separate
machines hosting the DVR and TVR services.  It assumes each host sees
the new_tz result it registered in a previous play, not the result
registered by the last host in that play.
abbey
if ($ARGV[0] eq "tz") {
    my $city = `cat /etc/timezone`; chomp $city;
    my $zone = `date +%Z`; chomp $zone;
    print "Setting timezones to $city.\n";
    exec ("ansible-playbook", "-e", "\@Secret/become.yml",
          "-e", "zone=$zone", "-e", "city=$city",
          "playbooks/timezone.yml");
}
playbooks/timezone.yml
---
- hosts: core, dvrs, tvrs, webtvs
  tasks:
  - name: Update timezone.
    become: yes
    command: timedatectl set-timezone {{ city }}
    when: ansible_date_time.tz != zone
    register: new_tz

- hosts: dvrs
  tasks:
  - name: Restart AgentDVR.
    become: yes
    systemd:
      service: AgentDVR
      state: restarted
    when: new_tz.changed

- hosts: tvrs
  tasks:
  - name: Restart MythTV.
    become: yes
    systemd:
      service: "{{ item }}"
      state: restarted
    loop: [ mysql, mythtv-backend ]
    when: new_tz.changed

- hosts: core
  tasks:
  - name: Update PHP date.timezone.
    become: yes
    lineinfile:
      regexp: date.timezone ?=
      line: date.timezone = {{ city }}
      path: "{{ item }}"
    loop:
    - /etc/php/8.2/cli/php.ini
    - /etc/php/8.2/apache2/php.ini
    notify: Restart Apache2.
  handlers:
  - name: Restart Apache2.
    become: yes
    systemd:
      service: apache2
      state: restarted
11.7. Abbey Command Help
abbey
my $ops = "config,new,old,pass,client,upgrade,reboots,versions,tz";
die "usage: $0 [$ops]\n";
12. Cloistering
This is how a new machine is brought into the cloister. The process is initially quite different depending on the device type but then narrows down to the common preparation of all machines administered by Ansible.
12.1. IoT Devices
A wireless IoT device (smart TV, Blu-ray deck, etc.) cannot install Debian nor even an OpenVPN app from F-Droid. And it shouldn't. As an untrustworthy bit of kit, it should have no access to the cloister, merely the Internet. It need not appear in the Ansible inventory.
IoT devices trusted enough to be patched to the cloister Ethernet (IP
cameras, TV Tuners, etc.) are added to /etc/dhcp/dhcpd.conf
and
given a private domain name as described in the following steps.
Wireless IoT devices are manually configured with the cloister Wi-Fi password and may be given a private domain name, as described in the last step.
12.2. Raspberry Pis
The abbey's Raspberry Pi runs the Raspberry Pi OS desktop off an external, USB3.0 SSD. A fresh install should go something like this:
- Write the disk image, 2023-12-05-raspios-bookworm-arm64.img.xz, to
  the SSD and plug it into the Pi. Leave the µSD card socket empty.
- Attach an HDMI monitor, a USB keyboard/mouse, and the cloister
  Ethernet, and power up.
- Answer first-boot installation questions:
- Language: English (USA)
- Keyboard: English (USA)
- new username: sysadm
- new password: fubar
- Add to Core DHCP
- Create Wired Domain Name
- Log in as sysadm on the console.
- Run sudo raspi-config and use the following menu items.
  - S4 Hostname (Set name for this computer on a network): new
- I1 SSH (Enable/disable remote command line access using SSH): enable
- A1 Expand Filesystem (Ensures that all of the SD card is available)
- Update From Cloister Apt Cache
- Authorize Remote Administration
- Configure with Ansible
If the Pi is going to operate wirelessly, the following additional steps are taken.
12.3. PCs
Most of the abbey's machines, like Core and Gate, are general-purpose PCs running Debian. The process of cloistering these machines follows.
- Write the disk image, e.g. debian-12.2.0-amd64-netinst.iso, to a
  USB drive and insert it in the PC.
- Attach an HDMI monitor, a USB keyboard/mouse, and the cloister
  Ethernet, and power up. Choose to boot from the USB drive.
- Answer first-boot installation questions:
- Language: English (USA)
- Keyboard: English (USA)
- new username: sysadm
- new password: fubar
- Add to Core DHCP
- Create Wired Domain Name
- Log in as sysadm on the console.
- Update From Cloister Apt Cache
- Install OpenSSH. Plain Debian does not come with OpenSSH installed.

  sudo apt install openssh-server
- Authorize Remote Administration
- Configure with Ansible
If the PC is going to operate wirelessly, the following additional steps are taken.
12.4. Add to Core DHCP
When a new machine is connected to the cloister Ethernet, its MAC address must be added to Core's DHCP configuration. Core does not provide network addresses to new devices automatically.
IoT devices (IP cameras, HDTV tuners, etc.) often have their MAC
address printed on their case or mentioned in a configuration page.
The MAC address must also appear in the device's DHCP Discover
broadcasts, which are logged to /var/log/daemon.log
on Core. As a
last (or first!) resort, the following command line should reveal the
new device's MAC.
tail -100 /var/log/daemon.log | grep DISCOVER
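A sed one-liner can pull just the MACs out of those lines; the sample
log lines below are fabricated for illustration:

```shell
# Extract MAC addresses from DHCPDISCOVER log lines.
printf '%s\n' \
  'Jan  2 03:04:05 core dhcpd[971]: DHCPDISCOVER from 08:00:27:f3:41:66 via enp2s0' \
  'Jan  2 03:04:06 core dhcpd[971]: DHCPACK on 192.168.56.2 to 08:00:27:aa:bb:cc' \
| sed -n 's/.*DHCPDISCOVER from \([0-9a-f:]*\).*/\1/p'
# prints 08:00:27:f3:41:66
```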
With the new device's Ethernet MAC in hand, a stanza like the
following is added to the bottom of private/core-dhcpd.conf
. The IP
address must be unique. Typically the next host number after the last
entry is chosen.
host new {
    hardware ethernet 08:00:27:f3:41:66;
    fixed-address 192.168.56.4;
}
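Choosing "the next host number" can be scripted: this sketch scans a
copy of the configuration for the highest fixed-address already
assigned. The stanzas and the temporary file are made up for
illustration.

```shell
# Find the highest 192.168.56.x host number in a dhcpd.conf.
f=$(mktemp)
cat >"$f" <<'EOF'
host printer { hardware ethernet 08:00:27:aa:bb:cc; fixed-address 192.168.56.2; }
host camera  { hardware ethernet 08:00:27:dd:ee:ff; fixed-address 192.168.56.3; }
EOF
sed -n 's/.*fixed-address 192\.168\.56\.\([0-9]*\);.*/\1/p' "$f" | sort -n | tail -1
# prints 3, so the new device would get 192.168.56.4
rm "$f"
```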
The DHCP service is then restarted (not reloaded).
sudo systemctl restart isc-dhcp-server
Soon after this the device should be offered a lease for its IP
address, 192.168.56.4
. It might be power cycled to speed up the
process.
When successful, the following command shows the device is accessible,
reporting 1 packets transmitted, 1 received, 0% packet loss....
ping -c1 192.168.56.4
12.5. Create Wired Domain Name
A wired device is assigned an IP address when it is added to Core's
DHCP configuration (as in Add to Core DHCP). A private domain name is
then associated with this address. If the device is intended to
operate wirelessly, the name for its address is modified with a -w
suffix. Thus new-w.small.private
would be the name of the new
device while it is temporarily connected to the cloister Ethernet, and
new.small.private
would be its "normal" name used when it is on the
cloister Wi-Fi.
The private domain name is created by adding a line like the following
to private/db.domain
and incrementing the serial number at the top
of the file.
new-w IN A 192.168.56.4
The reverse mapping is also created by adding a line like the
following to private/db.private
and incrementing the serial number
at the top of that file.
4 IN PTR new-w.small.private.
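Incrementing the serial is easy to forget. An awk one-liner can do it,
assuming the serial line carries a "; Serial" comment as in common
BIND zone-file layouts; the sample SOA below is hypothetical.

```shell
# Bump the zone serial on the line marked "; Serial".
# Note: awk re-joins fields with single spaces, losing column alignment.
f=$(mktemp)
cat >"$f" <<'EOF'
@ IN SOA core.small.private. root.small.private. (
        9       ; Serial
        3600 )  ; Refresh
EOF
awk '/; Serial/ { $1 = $1 + 1 } { print }' "$f"
rm "$f"
```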
After ./abbey config core
updates Core, resolution of the new-w
name can be tested.
resolvectl query new-w.small.private.
resolvectl query 192.168.56.4
12.6. Update From Cloister Apt Cache
- Log in as sysadm on the console.
- Create /etc/apt/apt.conf.d/01proxy.

  D=apt-cacher.small.private
  echo "Acquire::http::Proxy \"http://$D:3142\";" \
      | sudo tee /etc/apt/apt.conf.d/01proxy

- Update the system and reboot.

  sudo apt update
  sudo apt full-upgrade --autoremove
  sudo reboot
12.7. Authorize Remote Administration
To remotely administer new-w
, Ansible must be authorized to login as
sysadm@new-w
without a login password, using an SSH key pair. This is
accomplished by copying Ansible's SSH public key to new-w
.
scp Secret/ssh_admin/id_rsa.pub sysadm@new-w:admin_key
Then on new-w
(logged in as sysadm
) the public key is installed in
~sysadm/.ssh/authorized_keys
.
( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
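The umask 077 is not decoration: with StrictModes (sshd's default),
sshd ignores an authorized_keys file that is group- or
world-accessible. A quick experiment shows the mode it produces:

```shell
# Show the mode umask 077 gives a freshly created .ssh directory.
d=$(mktemp -d)
( cd "$d"; umask 077; mkdir .ssh )
stat -c %a "$d/.ssh"    # prints 700
rmdir "$d/.ssh" "$d"
```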
Now the administrator can test access to new-w
using Ansible's SSH
key.
ssh -i Secret/ssh_admin/id_rsa sysadm@new-w
12.8. Configure with Ansible
With remote administration authorized and tested (as in Authorize
Remote Administration), and the machine connected to the cloister
Ethernet, the configuration of new-w
can be completed by Ansible.
Note that if the machine is staying on the cloister Ethernet, its
domain name will be new
(having had no -w
suffix added).
First new-w
is added to Ansible's inventory in hosts
. A new-w
section is added to the list of all hosts, and an empty section of the
same name is added to the list of campus
hosts. If the machine uses the usual privileged account name,
sysadm, the ansible_user key is not needed.
hosts:
  ...
  new-w:
    ansible_user: pi
    ansible_become_password: "{{ become_new }}"
  ...
children:
  ...
  campus:
    hosts:
      ...
      new-w:
If the sudo
command on new-w
never prompts sysadm
for a
password, then the ansible_become_password
setting is also not
needed. Otherwise, the password is added to Secret/become.yml
as
shown below.
echo -n "become_new: " >>Secret/become.yml
ansible-vault encrypt_string PASSWORD >>Secret/become.yml
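The appended entry in Secret/become.yml then looks roughly like the
following (the ciphertext here is illustrative, not a real vault
blob):

```
become_new: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          62313365396662343061393464336163383764373764613633653634306231386433626436623361
          ...
```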
Finally the ./abbey config new-w
command is run. It will install
several additional software packages and change several more
configuration files.
./abbey config new-w
12.9. Connect to Cloister Wi-Fi
On an IoT device, or a Debian or Android "desktop", the cloister Wi-Fi
name and password are entered manually. Once the device is connected,
its Wi-Fi IP address may be discovered in its network settings, and
perhaps via the access point's local domain, e.g. as new.lan
on a
desktop connected to the cloister Wi-Fi.
Wireless Debian machines use ifupdown
configured with a short
/etc/network/interfaces.d/wifi
drop-in. In this example, the Wi-Fi
interface on new
is named wlan0
.
/etc/network/interfaces.d/wifi
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "Birchwood Abbey"
    wpa-psk "PASSWORD"
Once the sudo ifup wlan0
command is successful, the machine will get
an IP address on the access point's local network (revealed by the
command ip addr show dev wlan0
).
The new Wi-Fi IP address, e.g. 192.168.10.225
, should be tested on a
desktop connected to the Wi-Fi using the following ping
command.
ping -c1 192.168.10.225
12.10. Connect to Cloister VPN
Wireless devices (with the cloister Wi-Fi password) get an IP address and a default route to the Internet with no special configuration on either the device or the access point. Any Wi-Fi access point, e.g. as found in a cable modem, will do. The abbey's networks, however, are not accessible except via the cloister VPN.
Connections to the cloister VPN are authorized by OpenVPN
configuration (.ovpn
) files generated by the ./abbey client...
command (aka The Client Command). These are secret files, kept
readable only by their owners and are deleted after use. They are
copied to new OpenVPN clients using secure (ssh
) connections.
12.10.1. Debian Servers
Wireless Debian servers (without NetworkManager) are connected to the cloister VPN via the following process.
- Create a new client certificate and OpenVPN configuration for the new abbey server.
- Copy the campus.ovpn file to the new machine.
- On the new machine:
  - Install the openvpn-systemd-resolved package.
  - Copy campus.ovpn to /etc/openvpn/cloister.conf.
  - Start the OpenVPN service.
  - Check that the cloister VPN was connected.
- Logout and unplug the cloister Ethernet.
- Test the cloister VPN connection (and private name resolution)
  with ping -c1 core.
And these are the commands:
./abbey client campus new
scp campus.ovpn sysadm@new-w:
ssh sysadm@new-w
sudo apt install openvpn-systemd-resolved
sudo cp campus.ovpn /etc/openvpn/cloister.conf
sudo systemctl start openvpn@cloister
systemctl status openvpn@cloister
ping -c1 core
sudo systemctl enable openvpn@cloister
rm campus.ovpn
logout
rm campus.ovpn
It may be necessary to reboot before the final tests.
12.10.2. Debian Desktops
Wireless Debian desktops (with NetworkManager) include our 8GB Core i3 NUC (Intel®'s Next Unit of Computing) and our 8GB Raspberry Pi 4. They run the Pop!_OS and Raspberry Pi OS desktops respectively. They are connected to the cloister VPN via the following process.
- Create a new client certificate and OpenVPN configuration for the
  new abbey desktop, a campus.ovpn file.
- Create a wifi file that looks like this (assuming the wireless
  network device is named wlan0).

  auto wlan0
  iface wlan0 inet dhcp
      wpa-ssid "Birchwood Abbey"
      wpa-psk "PASSWORD"

- Copy the wifi and campus.ovpn files to the new machine.
- On the new machine:
  - Install the openvpn-systemd-resolved package.
  - Copy wifi to /etc/network/interfaces.d/.
  - Bring up the Wi-Fi interface.
  - Copy campus.ovpn to /etc/openvpn/cloister.conf.
  - Start the OpenVPN service.
  - Check that the cloister VPN was connected.
- Logout and unplug the cloister Ethernet.
- Test the cloister VPN connection (and private name resolution)
  with ping -c1 core.
And these are the commands:
```
./abbey client campus new
scp wifi campus.ovpn sysadm@new-w:
ssh sysadm@new-w
sudo apt install openvpn-systemd-resolved
sudo cp wifi /etc/network/interfaces.d/
sudo ifup wlan0
sudo cp campus.ovpn /etc/openvpn/cloister.conf
sudo systemctl start openvpn@cloister
systemctl status openvpn@cloister
ping -c1 core
sudo systemctl enable openvpn@cloister
rm wifi campus.ovpn
logout
rm wifi campus.ovpn
```
It may be necessary to reboot before the final tests.
As configured above, the wireless Debian desktops make automatic,
persistent connections to the cloister Wi-Fi and VPN, and so can be
used much like a wired desktop machine. They are typically connected
to a large TV and auto-login to an unprivileged account named `house`,
i.e. anyone in the house.
12.10.3. Private Desktops
Member notebooks are private machines not remotely administered by the abbey. These machines roam, and so are authorized to connect to the cloister VPN or the public VPN. This is how they are connected to the VPNs:
- Create a new client certificate and OpenVPN configurations for the
  new abbey desktop, `campus.ovpn` and `public.ovpn` files.
- Copy the `campus.ovpn` and `public.ovpn` files to the new machine.
- On the new machine:
  - Install the `openvpn-systemd-resolved` and
    `network-manager-openvpn-gnome` packages.
  - Open the desktop Settings > Network > VPN + > Import from
    file… and choose `~/campus.ovpn`.
  - Open the Routes dialogues for both IPv4 and IPv6 and choose "Use
    this connection only for resources on its network."
  - Save the new VPN.
  - Do the same with the `~/public.ovpn` file.
  - Connect the appropriate VPN and test it (and private name
    resolution) with `ping -c1 core`.
  - Expunge (delete and empty the trash) the `~/campus.ovpn` and
    `~/public.ovpn` files.
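Where clicking through the desktop Settings is inconvenient, the same
import and route restriction can be sketched with `nmcli`. This is an
alternative to the GUI steps above, not the abbey's documented method;
it assumes NetworkManager names the imported connection `campus` after
the file (adjust the name if yours differs).

```shell
# Import the VPN profile; NetworkManager derives the connection name
# (assumed "campus" here) from the file name.
nmcli connection import type openvpn file ~/campus.ovpn
# Equivalent of "Use this connection only for resources on its network":
nmcli connection modify campus ipv4.never-default yes ipv6.never-default yes
# Bring the VPN up.
nmcli connection up campus
```

Repeat the same three steps for `~/public.ovpn` to create the second
VPN.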
We assume the desktop is running NetworkManager, which is the case in all our Debian desktops from Pop!_OS and Ubuntu to Mint and Raspberry Pi OS.
Note that a new member's notebook does not need to be patched to the
cloister Ethernet nor connected to the cloister Wi-Fi. It can be
authorized "remotely" simply by copying the `.ovpn` files securely,
e.g. using `ssh` to any "known host" on the Internet.
The members of A Small Institute are peers, and enjoy complete, individual privacy. The administrator does not expect to have "root access" to members' machines, their desktops, personal diaries and photos. The monks of the abbey are brothers, and tolerate a little less than complete individual privacy (still expecting all necessary and appropriate privacy, being in a position to punish deviants).
Our private notebooks are included in the Ansible inventory, mainly so
they can be included in the weekly (or more frequent!) network
upgrades. The `campus` and `abbey-cloister` roles are not applied,
though their Postfix and other configurations are recommended. Remote
access by the administrator is authorized and the privileged account's
password is included in `Secret/become.yml`.
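As a purely hypothetical sketch of what that inventory entry might
look like (the group and host names below are invented for
illustration; the actual layout follows A Small Institute's
conventions):

```
notebooks:
  hosts:
    dicks-notebook:
      ansible_user: sysadm
```

Only the privileged account is listed; no abbey roles target the
group.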
12.10.4. Android
Android phones and tablets are connected to the cloister VPN via the
following process. Note that they are not configured by Ansible: they
appear neither in the set of `campus` hosts nor in the host inventory.
- Create a new client certificate and campus/public OpenVPN
  configurations for the new abbey Android.
- Copy the `campus.ovpn` and `public.ovpn` files to a USB drive.
- On the Android machine:
  - Connect to the cloister Wi-Fi.
  - Install F-Droid and use it to install OpenVPN.
  - Connect the USB drive, perhaps with an OTG (On The Go) adapter,
    and open the `campus.ovpn` file. The file should be opened with
    the OpenVPN app, which will ask for confirmation before creating
    the new VPN.
  - Open the `public.ovpn` file and create a second VPN.
The `.ovpn` files must be transferred to the Android via a secure
medium: the `scp` command, a USB drive, a cloud download, or perhaps
an encrypted email. In the following commands, the files are copied
to a USB drive labeled `Transfers`. After insertion into the Android,
its "storage" is viewed with the Files app, which should launch
OpenVPN when a `.ovpn` file is opened.

```
./abbey client android dicks-tablet dick
cp campus.ovpn public.ovpn /media/sysadm/Transfers/
rm campus.ovpn public.ovpn
```
12.11. Create Wireless Domain Name
A wireless machine is assigned a Wi-Fi address when it connects to the
cloister Wi-Fi, and a "VPN address" when it connects to Gate's OpenVPN
server. The VPN address can be discovered by running `ip addr show
dev ovpn` on the machine, or inspecting `/etc/openvpn/ipp.txt` on
Gate. Once discovered, a private domain name,
e.g. `new.small.private`, can be associated with the VPN address,
e.g. `10.84.138.7`. The administrator adds a line like the following
to `private/db.domain` and increments the serial number at the top of
the file.
```
new IN A 10.84.138.7
```
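The discovery step on Gate can be sketched in shell. The sample
`ipp.txt` contents below are hypothetical; OpenVPN's persistent pool
file holds one "common-name,address" pair per line.

```shell
# Hypothetical sample of Gate's /etc/openvpn/ipp.txt.
cat >ipp.txt <<'EOF'
dick,10.84.138.6
new,10.84.138.7
EOF

# Look up the VPN address OpenVPN assigned to the "new" client.
addr=$(awk -F, '$1 == "new" { print $2 }' ipp.txt)
echo "$addr"
```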
The administrator also creates the reverse mapping by adding a line
like the following to `private/db.campus_vpn` and incrementing the
serial number at the top of that file.
```
7 IN PTR new.small.private.
```
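The two record lines can be generated from the name and address; here
is a small sketch using the example values from the text. Note the
PTR record's owner is just the final octet of the VPN address (the
rest of the reverse name comes from the zone's origin).

```shell
# Example name and VPN address from the text.
host=new
addr=10.84.138.7

# Forward (A) record line for private/db.domain:
echo "$host IN A $addr"
# Reverse (PTR) record line for private/db.campus_vpn; ${addr##*.}
# strips everything up to the last dot, leaving the final octet.
echo "${addr##*.} IN PTR $host.small.private."
```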
After `./abbey config core` updates Core, the administrator can test
resolution of the new name.
```
resolvectl query new.small.private.
resolvectl query 10.84.138.7
```
A wireless device with no Ethernet interface and unable to run OpenVPN
gets just a Wi-Fi address. It can be given a private domain name
(e.g. `new.small.private`) associated with the Wi-Fi address
(e.g. `192.168.10.225`), but a reverse lookup on a machine connected
to the Wi-Fi may yield a name like `new.lan` (provided by the access
point) while elsewhere (e.g. on the cloister Ethernet) the IP address
will not resolve at all. (There is no "reverse mapping" to be added
to `private/db.campus_vpn`.)