From: Matt Birkholz Date: Sun, 17 Dec 2023 23:20:55 +0000 (-0700) Subject: Initial version. X-Git-Url: https://birchwood-abbey.net/git?a=commitdiff_plain;h=e23b88ab267abf73db7fcc5d678ac26e4829eb26;p=Institute Initial version. --- e23b88ab267abf73db7fcc5d678ac26e4829eb26 diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..122f718 --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ +/roles/ +/private/vars.pl diff --git a/Backup/WWW/campus/index.html b/Backup/WWW/campus/index.html new file mode 100644 index 0000000..ed6a9b0 --- /dev/null +++ b/Backup/WWW/campus/index.html @@ -0,0 +1,13 @@ + + + Campus + + + +

+ This is an example top-level HTML document simulating a small + institute's campus home page. +

+ + + diff --git a/Backup/WWW/live/index.html b/Backup/WWW/live/index.html new file mode 100644 index 0000000..bb630ef --- /dev/null +++ b/Backup/WWW/live/index.html @@ -0,0 +1,13 @@ + + + Live + + + +

+ This is an example top-level HTML document simulating the public + home page of a small institution. +

+ + + diff --git a/Backup/WWW/test/index.html b/Backup/WWW/test/index.html new file mode 100644 index 0000000..e0464fd --- /dev/null +++ b/Backup/WWW/test/index.html @@ -0,0 +1,13 @@ + + + Test + + + +

+ This is an example top-level HTML document simulating a draft of a + new public home page for a small institution. +

+ + + diff --git a/README.html b/README.html new file mode 100644 index 0000000..e8d92c4 --- /dev/null +++ b/README.html @@ -0,0 +1,9705 @@ + + + + + + + +A Small Institute + + + + + + + +
+

A Small Institute

+

+The Ansible scripts herein configure a small institute's hosts +according to their roles in the institute's network of public and +private servers. The network topology allows the institute to present +an expendable public face (easily wiped clean) while maintaining a +secure and private campus that can function with or without the +Internet. +

+
+

1. Overview

+
+

+This small institute has a public server on the Internet, Front, that +handles the institute's email, web site, and cloud. Front is small, +cheap, and expendable, contains only public information, and functions +mostly as a VPN server relaying to a campus network. +

+ +

+The campus network is one or more machines physically connected via +Ethernet (or a similarly secure medium) for private, un-encrypted +communication in a core locality. One of the machines on this +Ethernet is Core, the small institute's main server. Core provides a +number of essential localnet services (DHCP, DNS, NTP), and a private, +campus web site. It is also the home of the institute cloud and is +where all of the institute's data actually reside. When the campus +ISP (Internet Service Provider) is connected, a separate host, Gate, +routes campus traffic to the ISP (via NAT). Through Gate, Core +connects to Front making the institute email, cloud, etc. available to +members off campus. +

+ +
+                =                                                   
+              _|||_                                                 
+        =-The-Institute-=                                           
+          =   =   =   =                                             
+          =   =   =   =                                             
+        =====-Front-=====                                           
+                |                                                   
+        -----------------                                           
+      (                   )                                         
+     (   The Internet(s)   )----(Hotel Wi-Fi)                       
+      (                   )        |                                
+        -----------------          +----Member's notebook off campus
+                |                                                   
+=============== | ==================================================
+                |                                           Premises
+          (Campus ISP)                                              
+                |            +----Member's notebook on campus       
+                |            |                                      
+                | +----(Campus Wi-Fi)                               
+                | |                                                 
+============== Gate ================================================
+                |                                            Private
+                +----Ethernet switch                                
+                        |                                           
+                        +----Core                                   
+                        +----Servers (NAS, DVR, etc.)               
+
+ +

+Members of the institute use commodity notebooks and open source
+desktops. When off campus, members access institute resources via the
+VPN on Front (via hotel Wi-Fi). When on campus, members can use the
+much faster and always available (despite Internet connectivity
+issues) VPN on Gate (via campus Wi-Fi). Members' Android phones and
+other devices can use the same Wi-Fi networks, VPNs (via the OpenVPN
+app) and services. On a desktop or by phone, at home or abroad,
+members can access their email and the institute's private web and
+cloud.

+ +

+The institute email service reliably delivers messages in seconds, so +it is the main mode of communication amongst the membership, which +uses OpenPGP encryption to secure message content. +

+
+
+
+

2. Caveats

+
+

+This small institute prizes its privacy, so there is little or no +accommodation for spyware (aka commercial software). The members of +the institute are dedicated to refining good tools, making the best +use of software that does not need nor want our hearts, our money, nor +even our attention. +

+ +

+Unlike a commercial cloud service with redundant hardware and multiple +ISPs, Gate is a real choke point. When Gate cannot reach the +Internet, members abroad will not be able to reach Core, their email +folders, nor the institute cloud. They can chat privately with +other members abroad or consult the public web site on Front. Members +on campus will have their email and cloud, but no Internet and thus +no new email and no chat with members abroad. Keeping our data on +campus means we can keep operating without the Internet if we are on +campus. +

+ +

+Keeping your data secure on campus, not on the Internet, means when +your campus goes up in smoke, so does your data, unless you made +an off-site (or at least fire-safe!) backup copy. +

+ +

+Security and privacy are the focus of the network architecture and +configuration, not anonymity. There is no support for Tor. The +VPNs do not funnel all Internet traffic through anonymizing +services. They do not try to defeat geo-fencing. +

+ +

+This is not a showcase of the latest technologies. It is not expected +to change except slowly. +

+ +

+The services are intended for the SOHO (small office, home office, 4-H
+chapter, medical clinic, gun-running biker gang, etc.) with a small,
+fairly static membership. Front can be small and cheap (10 USD per
+month) because of this assumption.

+
+
+
+

3. The Services

+
+

+The small institute's network is designed to provide a number of +services. An understanding of how institute hosts co-operate is +essential to understanding the configuration of specific hosts. This +chapter covers institute services from a network wide perspective, and +gets right down in its subsections to the Ansible code that enforces +its policies. On first reading, those subsections should be skipped; +they reference particulars first introduced in the following chapter. +

+
+
+

3.1. The Name Service

+
+

+The institute has a public domain, e.g. small.example.org, and a +private domain, e.g. small.private. The public has access only to +the former and, as currently configured, to only one address (A +record): Front's public IP address. Members connected to the campus, +via wire or VPN, use the campus name server which can resolve +institute private domain names like core.small.private. If +small.private is also used as a search domain, members can use short +names like core. +

+
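For illustration, a campus client using the campus name server and search domain might end up with resolver settings like the following sketch. The name server address here is a hypothetical Core address; in practice these values arrive via DHCP or the VPN rather than being written by hand.

```
# /etc/resolv.conf (illustrative; normally managed by DHCP or the VPN)
search small.private
nameserver 192.168.56.1
```

With the search domain in place, `ssh core` resolves to `core.small.private`.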
+
+
+

3.2. The Email Service

+
+

+Front provides the public SMTP (Simple Mail Transfer Protocol) service +that accepts email from the Internet, delivering messages addressed to +the institute's domain name, e.g. to postmaster@small.example.org. +Its Postfix server accepts email for member accounts and any public +aliases (e.g. postmaster). Messages are delivered to member +~/Maildir/ directories via Dovecot. +

+ +

+If the campus is connected to the Internet, the new messages are +quickly picked up by Core and stored in member ~/Maildir/ +directories there. Securely stored on Core, members can decrypt and +sort their email using common, IMAP-based tools. (Most mail apps can +use IMAP, the Internet Message Access Protocol.) +

+ +

+Core transfers messages from Front using Fetchmail's --idle option, +which instructs Fetchmail to maintain a connection to Front so that it +can (with good campus connectivity) get notifications to pick up new +email. Members of the institute typically employ email apps that work +similarly, alerting them to new email on Core. Thus members enjoy +email messages that arrive as fast as text messages (but with the +option of real, end-to-end encryption). +

+ +
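The Fetchmail arrangement described above might be sketched as the following run-control file, assuming a hypothetical member account named member. Only the idle behavior is specified by the text; the remaining options (IMAP over SSL, fetching all messages) are assumptions.

```
# ~/.fetchmailrc on Core (sketch; Fetchmail requires mode 600)
poll small.example.org protocol imap
    user "member" ssl idle fetchall
```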

+If the campus loses connectivity to the Internet, new email +accumulates in ~/Maildir/ directories on Front. If a member is +abroad, with Internet access, their new emails can be accessed via +Front's IMAPS (IMAP Secured [with SSL/TLS]) service, available at the +institute domain name. When the campus regains Internet connectivity, +Core will collect the new email. +

+ +

+Core is the campus mail hub, securely storing members' incoming +emails, and relaying their outgoing emails. It is the "smarthost" for +the campus. Campus machines send all outgoing email to Core, and +Core's Postfix server accepts messages from any of the institute's +networks. +

+ +

+Core delivers messages addressed to internal host names locally. For +example webmaster@test.small.private is delivered to webmaster on +Core. Core relays other messages to its smarthost, Front, which is +declared by the institute's SPF (Sender Policy Framework) DNS record +to be the only legitimate sender of institute emails. Thus the +Internet sees the institute's outgoing email coming from a server at +an address matching the domain's SPF record. The institute does not +sign outgoing emails per DKIM (Domain Keys Identified Mail), yet. +

+ +
+Example Small Institute SPF Record
TXT    v=spf1 ip4:159.65.75.60 -all
+
+
+ +

+There are a number of configuration settings that, for
+interoperability, should be in agreement on the Postfix servers and
+the campus clients. Policy also requires certain settings on both
+Postfix servers or on both Dovecot servers. To ensure that the same
+settings are applied on both, the shared settings are defined here and
+included via noweb reference in the server configurations. For
+example the Postfix setting for the maximum message size is given in a
+code block labeled postfix-message-size below and then included in
+both Postfix configurations wherever <<postfix-message-size>> appears.

+
+
+

3.2.1. The Postfix Configurations

+
+

+The institute aims to accommodate encrypted email containing short
+videos, messages that can quickly exceed the default limit of
+9.77 MiB, so the institute uses a limit roughly 10 times greater than
+the default: 100 MiB. Front should always have several gigabytes free
+to spool a modest number (several tens) of maximally sized messages.
+Furthermore a maxi-message's time in the spool is nominally a few
+seconds, after which it moves on to Core (the big disks). This
+Postfix setting should be the same throughout the institute, so that
+all hosts can handle maxi-messages.

+ +
+postfix-message-size
- { p: message_size_limit, v: 104857600 }
+
+
+ +
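The value in the setting above is just 100 MiB expressed in bytes, which can be checked with shell arithmetic:

```shell
# 100 MiB in bytes; compare with Postfix's default
# message_size_limit of 10240000 bytes (~9.77 MiB).
echo $((100 * 1024 * 1024))    # prints 104857600
```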

+Queue warning and bounce times were shortened at the institute. Email +should be delivered in seconds. If it cannot be delivered in an hour, +the recipient has been cut off, and a warning is appropriate. If it +cannot be delivered in 4 hours, the information in the message is +probably stale and further attempts to deliver it have limited and +diminishing value. The sender should decide whether to continue by +re-sending the bounce (or just grabbing the go-bag!). +

+ +
+postfix-queue-times
- { p: delay_warning_time, v: 1h }
+- { p: maximal_queue_lifetime, v: 4h }
+- { p: bounce_queue_lifetime, v: 4h }
+
+
+ +

+The Debian default Postfix configuration enables SASL authenticated +relaying and opportunistic TLS with a self-signed, "snake oil" +certificate. The institute substitutes its own certificates and +disables relaying (other than for the local networks). +

+ +
+postfix-relaying
- p: smtpd_relay_restrictions
+  v: permit_mynetworks reject_unauth_destination
+
+
+ +

+Dovecot is configured to store emails in each member's ~/Maildir/. +The same instruction is given to Postfix for the belt-and-suspenders +effect. +

+ +
+postfix-maildir
- { p: home_mailbox, v: Maildir/ }
+
+
+ +

+The complete Postfix configurations for Front and Core use these +common settings as well as several host-specific settings as discussed +in the respective roles below. +

+
+
+
+

3.2.2. The Dovecot Configurations

+
+

+The Dovecot settings on both Front and Core disable POP and require +TLS. +

+ +

+The official documentation for Dovecot once was a Wiki but now is +https://doc.dovecot.org, yet the Wiki is still distributed in +/usr/share/doc/dovecot-core/wiki/. +

+ +
+dovecot-tls
protocols = imap
+ssl = required
+
+
+ +

+Both servers should accept only IMAPS connections. The following +configuration keeps them from even listening at the IMAP port +(e.g. for STARTTLS commands). +

+ +
+dovecot-ports
service imap-login {
+  inet_listener imap {
+    port = 0
+  }
+}
+
+
+ +

+Both Dovecot servers store member email in members' local ~/Maildir/ +directories. +

+ +
+dovecot-maildir
mail_location = maildir:~/Maildir
+
+
+ +

+The complete Dovecot configurations for Front and Core use these +common settings with host specific settings for ssl_cert and +ssl_key. +

+
+
+
+
+

3.3. The Web Services

+
+

+Front provides the public HTTP service that serves institute web pages +at e.g. https://small.example.org/. The small institute initially +runs with a self-signed, "snake oil" server certificate, causing +browsers to warn of possible fraud, but this certificate is easily +replaced by one signed by a recognized authority, as discussed in The +Front Role. +

+ +

+The Apache2 server finds its web pages in the /home/www/ directory +tree. Pages can also come from member home directories. For +example the HTML for https://small.example.org/~member would come +from the /home/member/Public/HTML/index.html file. +

+ +

+The server does not run CGI scripts. This keeps Front's CPU +requirements cheap. CGI scripts can be used on Core. Indeed +Nextcloud on Core uses PHP and the whole LAMP (Linux, Apache, MySQL, +PHP) stack. +

+ +

+Core provides a campus HTTP service with several virtual hosts. +These web sites can only be accessed via the campus Ethernet or an +institute VPN. In either situation Core's many private domain names +become available, e.g. www.small.private. In many cases these +domain names can be shortened e.g. to www. Thus the campus home +page is accessible in a dozen keystrokes: http://www/ (plus Enter). +

+ +

+Core's web sites: +

+ +
+
http://www/
is the small institute's campus web site. It +serves files from the staff-writable /WWW/campus/ directory +tree.
+
http://live/
is a local copy of the institute's public web +site. It serves the files in the /WWW/live/ directory tree, +which is mirrored to Front.
+
http://test/
is a test copy of the institute's public web +site. It tests new web designs in the /WWW/test/ directory +tree. Changes here are merged into the live tree, /WWW/live/, +once they are complete and tested.
+
http://core/
is the Debian default site. The institute does +not munge this site, to avoid conflicts with Debian-packaged web +services (e.g. Nextcloud, Zoneminder, MythTV's MythWeb).
+
+ +

+Core runs a cron job under a system account named monkey that +mirrors /WWW/live/ to Front's /home/www/ every 15 minutes. +Vandalism on Front should not be possible, but if it happens Monkey +will automatically wipe it within 15 minutes. +

+
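Monkey's mirror job might look something like the following crontab entry. The 15 minute schedule and the two directory trees come from the text above; the exact rsync flags and the SSH destination are assumptions.

```
# monkey's crontab on Core (sketch)
*/15 * * * *  rsync -az --delete /WWW/live/ monkey@small.example.org:/home/www/
```

The `--delete` flag is what makes the mirror self-healing: any vandalism added to Front's copy disappears on the next pass.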
+
+
+

3.4. The Cloud Service

+
+

+Core runs Nextcloud to provide a private institute cloud at +http://core.small.private/nextcloud/. It is managed manually per +The Nextcloud Server Administration Guide. The code and data, +including especially database dumps, are stored in /Nextcloud/ which +is included in Core's backup procedure as described in Backups. The +default Apache2 configuration expects to find the web scripts in +/var/www/nextcloud/, so the institute symbolically links this to +/Nextcloud/nextcloud/. +

+ +

+Note that authenticating to a non-HTTPS URL like +http://core.small.private/ is often called out as insecure, but the +domain name is private and the service is on a directly connected +private network. +

+
+
+
+

3.5. The VPN Services

+
+

+The institute's public and campus VPNs have many common configuration +options that are discussed here. These are included, with example +certificates and network addresses, in the complete server +configurations of The Front Role and The Gate Role, as well as the +matching client configurations in The Core Role and the .ovpn files +generated by The Client Command. The configurations are based on the +documentation for OpenVPN v2.4: the openvpn(8) manual page and this +web page. +

+
+
+

3.5.1. The VPN Configuration Options

+
+

+The institute VPNs use UDP on a subnet topology (rather than +point-to-point) with "split tunneling". The UDP support accommodates +real-time, connection-less protocols. The split tunneling is for +efficiency with frontier bandwidth. The subnet topology, with the +client-to-client option, allows members to "talk" to each other on +the VPN subnets using any (experimental) protocol. +

+ +
+openvpn-dev-mode
dev-type tun
+dev ovpn
+topology subnet
+client-to-client
+
+
+ +

+A keepalive option is included on the servers so that clients detect
+an unreachable server and reset the TLS session. The commonly
+suggested 60 second timeout is doubled to 2 minutes out of respect
+for frontier service interruptions.

+ +
+openvpn-keepalive
keepalive 10 120
+
+
+ +

+As mentioned in The Name Service, the institute uses a campus name +server. OpenVPN is instructed to push its address and the campus +search domain. +

+ +
+openvpn-dns
push "dhcp-option DOMAIN {{ domain_priv }}"
+push "dhcp-option DNS {{ core_addr }}"
+
+
+ +

+The institute does not put the OpenVPN server in a chroot jail, but +it does drop privileges to run as user nobody:nobody. The +persist- options are needed because nobody cannot open the tunnel +device nor the key files. +

+ +
+openvpn-drop-priv
user nobody
+group nogroup
+persist-key
+persist-tun
+
+
+ +

+The institute does a little additional hardening, sacrificing some +compatibility with out-of-date clients. Such clients are generally +frowned upon at the institute. Here cipher is set to AES-256-GCM, +the default for OpenVPN v2.4, and auth is upped to SHA256 from +SHA1. +

+ +
+openvpn-crypt
cipher AES-256-GCM
+auth SHA256
+
+
+ +

+Finally, a max-clients limit was chosen to frustrate flooding while
+accommodating a few members with a handful of devices each.

+ +
+openvpn-max
max-clients 20
+
+
+ +

+The institute's servers are lightly loaded so a few debugging options +are appropriate. To help recognize host addresses in the logs, and +support direct client-to-client communication, host IP addresses are +made "persistent" in the ipp.txt file. The server's status is +periodically written to the openvpn-status.log and verbosity is +raised from the default level 1 to level 3 (just short of a deluge). +

+ +
+openvpn-debug
ifconfig-pool-persist ipp.txt
+status openvpn-status.log
+verb 3
+
+
+
+
+
+
+

3.6. Accounts

+
+

+A small institute has just a handful of members. For simplicity (and +thus security) static configuration files are preferred over complex +account management systems, LDAP, Active Directory, and the like. The +Ansible scripts configure the same set of user accounts on Core and +Front. The Institute Commands (e.g. ./inst new dick) capture the +processes of enrolling, modifying and retiring members of the +institute. They update the administrator's membership roll, and run +Ansible to create (and disable) accounts on Core, Front, Nextcloud, +etc. +

+ +

+The small institute does not use disk quotas nor access control lists. +It relies on Unix group membership and permissions. It is Debian +based and thus uses "user groups" by default. Sharing is typically +accomplished via the campus cloud and the resulting desktop files can +all be private (readable and writable only by the owner) by default. +

+
+
+

3.6.1. The Administration Accounts

+
+

+The institute avoids the use of the root account (uid 0) because +it is exempt from the normal Unix permissions checking. The sudo +command is used to consciously (conscientiously!) run specific scripts +and programs as root. When installation of a Debian OS leaves the +host with no user accounts, just the root account, the next step is +to create a system administrator's account named sysadm and to give +it permission to use the sudo command (e.g. as described in The +Front Machine). When installation prompts for the name of an +initial, privileged user account the same name is given (e.g. as +described in The Core Machine). Installation may not prompt and +still create an initial user account with a distribution specific name +(e.g. pi). Any name can be used as long as it is provided as the +value of ansible_user in hosts. Its password is specified by a +vault-encrypted variable in the Secret/become.yml file. (The +hosts and Secret/become.yml files are described in The Ansible +Configuration.) +

+
+
+
+

3.6.2. The Monkey Accounts

+
+

+The institute's Core uses a special account named monkey to run +background jobs with limited privileges. One of those jobs is to keep +the public web site mirror up-to-date, so a corresponding monkey +account is created on Front as well. +

+
+
+
+
+

3.7. Keys

+
+

+The institute keeps its "master secrets" in an encrypted +volume on an off-line hard drive, e.g. a LUKS (Linux Unified Key +Setup) format partition on a USB pen/stick. The Secret/ +sub-directory is actually a symbolic link to this partition's +automatic mount point, e.g. /media/sysadm/ADE7-F866/. Unless this +volume is mounted (unlocked) at Secret/, none of the ./inst +commands will work. +

+ +
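Creating such an encrypted volume on a USB stick might look like the following sketch. The device name /dev/sdb1 is hypothetical, and luksFormat destroys whatever the partition previously held.

```
sudo cryptsetup luksFormat /dev/sdb1         # initialize LUKS; prompts for a passphrase
sudo cryptsetup luksOpen /dev/sdb1 secret    # unlock as /dev/mapper/secret
sudo mkfs.ext4 -L Secret /dev/mapper/secret  # create the filesystem
sudo cryptsetup luksClose secret             # lock; desktops auto-mount on plug-in
```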

+Chief among the institute's master secrets is the SSH key to the +privileged accounts on all of the institute servers. It is stored +in Secret/ssh_admin/id_rsa. The institute uses several more SSH +keys listed here: +

+ +
+
Secret/ssh_admin/
The SSH key pair for A Small Institute +Administrator.
+
Secret/ssh_monkey/
The key pair used by Monkey to update the +website on Front (and other unprivileged tasks).
+
Secret/ssh_front/
The host key pair used by Front to +authenticate itself.
+
+ +

+The institute uses a number of X.509 certificates to authenticate VPN +clients and servers. They are created by the EasyRSA Certificate +Authority stored in Secret/CA/. +

+ +
+
Secret/CA/pki/ca.crt
The institute CA (certificate +authority).
+ +
Secret/CA/pki/issued/small.example.org.crt
The public Apache, +Postfix, and OpenVPN servers on Front.
+ +
Secret/CA/pki/issued/gate.small.private.crt
The campus +OpenVPN server on Gate.
+ +
Secret/CA/pki/issued/core.small.private.crt
The campus +Apache (thus Nextcloud), and Dovecot-IMAPd servers.
+ +
Secret/CA/pki/issued/core.crt
Core's client certificate by +which it authenticates to Front.
+
+ +

+The ./inst client command creates client certificates and keys, and
+can generate OpenVPN configuration (.ovpn) files for Android and
+Debian. Given a member's username, the command updates the institute
+membership roll, keeping a list of the member's clients (in case all
+of their authorizations need to be revoked quickly). The list of
+client certificates that have been revoked is stored along with the
+membership roll (in private/members.yml as the value of revoked).

+ +

+Finally, the institute uses an OpenPGP key to secure sensitive emails +(containing passwords or private keys) to Core. +

+ +
+
Secret/root.gnupg/
The "home directory" used to create the +public/secret key pair.
+
Secret/root-pub.pem
The ASCII armored OpenPGP public key for +e.g. root@core.small.private.
+
Secret/root-sec.pem
The ASCII armored OpenPGP secret key.
+
+ +

+When The CA Command sees an empty Secret/CA/ directory, as +though just created by running the EasyRSA make-cadir command in +Secret/ (a new, encrypted volume), the ./inst CA command creates +all of the certificates and keys mentioned above. It may prompt for +the institute's full name. +

+ +
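Done by hand, the certificates listed above correspond roughly to these standard EasyRSA 3 commands, a sketch of what ./inst CA automates (exact options, and whether keys are protected by passphrases, may differ):

```
make-cadir Secret/CA && cd Secret/CA
./easyrsa init-pki
./easyrsa build-ca nopass
./easyrsa build-server-full small.example.org nopass
./easyrsa build-server-full gate.small.private nopass
./easyrsa build-server-full core.small.private nopass
./easyrsa build-client-full core nopass
```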

+The institute administrator updates a couple of encrypted copies of
+this drive after enrolling new members, changing a password, issuing
+VPN credentials, etc.

+ +
+rsync -a Secret/ Secret2/
+rsync -a Secret/ Secret3/
+
+ + +

+This is out of consideration for the fragility of USB drives and the
+importance of a certain SSH private key. Without that key, the
+administrator must log in with a password (hopefully stored in the
+administrator's password keep) in order to install a new SSH key.

+
+
+
+

3.8. Backups

+
+

+The small institute backs up its data, but not so much so that nothing
+can be deleted. It actually mirrors user directories (/home/), the
+web sites (/WWW/), Nextcloud (/Nextcloud/), and any capitalized
+root directory entry, to a large off-line disk. Where incremental
+backups are desired, a version control system like Git is used.

+ +

+Off-site backups are not a priority due to cost and trust issues, and +the low return on the investment given the minuscule risk of a +catastrophe big enough to obliterate all local copies. And the +institute's public contributions are typically replicated in public +code repositories like GitHub and GNU Savannah. +

+ +

+The following example /usr/local/sbin/backup script pauses +Nextcloud, dumps its database, rsyncs /home/, /WWW/ and +/Nextcloud/ to a /backup/ volume (mounting and unmounting +/backup/ if necessary), then continues Nextcloud. The script +assumes the backup volume is labeled Backup and formatted per LUKS +version 2. +

+ +

+Given the -n flag, the script does a "pre-sync" which does not pause +Nextcloud nor dump its DB. A pre-sync gets the big file (video) +copies done while Nextcloud continues to run. A follow-up sudo +backup (without -n) produces the complete copy (with all the +files mentioned in the Nextcloud database dump). +

+ +
+private/backup
#!/bin/bash -e
+#
+# DO NOT EDIT.  Maintained (will be replaced) by Ansible.
+#
+# sudo backup [-n]
+
+if [ `id -u` != "0" ]
+then
+    echo "This script must be run as root."
+    exit 1
+fi
+
+if [ "$1" = "-n" ]
+then
+    presync=yes
+    shift
+fi
+
+if [ "$#" != "0" ]
+then
+    echo "usage: $0 [-n]"
+    exit 2
+fi
+
+function cleanup () {
+    sleep 2
+    finish
+}
+
+trap cleanup SIGHUP SIGINT SIGQUIT SIGPIPE SIGTERM
+
+function start () {
+
+    if ! mountpoint -q /backup/
+    then
+        echo "Mounting /backup/."
+        cryptsetup luksOpen /dev/disk/by-partlabel/Backup backup
+        mount /dev/mapper/backup /backup
+        mounted=indeed
+    else
+        echo "Found /backup/ already mounted."
+        mounted=
+    fi
+
+    if [ ! -d /backup/home ]
+    then
+        echo "The backup device should be mounted at /backup/"
+        echo "yet there is no /backup/home/ directory."
+        exit 2
+    fi
+
+    if [ ! $presync ]
+    then
+        echo "Putting nextcloud into maintenance mode."
+        ( cd /Nextcloud/nextcloud/
+          sudo -u www-data php occ maintenance:mode --on &>/dev/null )
+
+        echo "Dumping nextcloud database."
+        ( cd /Nextcloud/
+          umask 07
+          BAK=`date +"%Y%m%d"`-dbbackup.bak.gz
+          CNF=/Nextcloud/dbbackup.cnf
+          mysqldump --defaults-file=$CNF nextcloud | gzip > $BAK
+          chmod 440 $BAK )
+    fi
+
+}
+
+function finish () {
+
+    if [ ! $presync ]
+    then
+        echo "Putting nextcloud back into service."
+        ( cd /Nextcloud/nextcloud/
+          sudo -u www-data php occ maintenance:mode --off &>/dev/null )
+    fi
+
+    if [ $mounted ]
+    then
+        echo "Unmounting /backup/."
+        umount /backup
+        cryptsetup luksClose backup
+        mounted=
+    fi
+    echo "Done."
+    echo "The backup device can be safely disconnected."
+
+}
+
+start
+
+for D in /home /[A-Z]*; do
+    echo "Updating /backup$D/."
+    ionice --class Idle --ignore \
+        rsync -av --delete --exclude=.NoBackups $D/ /backup$D/
+done
+
+finish
+
+
+
+
+
+
+

4. The Particulars

+
+

+This chapter introduces Ansible variables intended to simplify +changes, like customization for another institute's particulars. The +variables are separated into public information (e.g. an institute's +name) or private information (e.g. a network interface address), and +stored in separate files: public/vars.yml and private/vars.yml. +

+ +

+The example settings in this document configure VirtualBox VMs as +described in the Testing chapter. For more information about how a +small institute turns the example Ansible code into a working Ansible +configuration, see chapter The Ansible Configuration. +

+
+
+

4.1. Generic Particulars

+
+

+The small institute's domain name is used quite frequently in the +Ansible code. The example used here is small.example.org. The +following line sets domain_name to that value. (Ansible will then +replace {{ domain_name }} in the code with small.example.org.) +

+ +
+public/vars.yml
---
+domain_name: small.example.org
+domain_priv: small.private
+
+
+ +

+The private version of the institute's domain name should end with one +of the top-level domains expected for this purpose: .intranet, +.internal, .private, .corp, .home or .lan.1 +

+
+
+
+

4.2. Subnets

+
+

+The small institute uses a private Ethernet, two VPNs, and an +untrusted Ethernet (for the campus Wi-Fi access point). Each must +have a unique private network address. Hosts using the VPNs are also +using foreign private networks, e.g. a notebook on a hotel Wi-Fi. To +better the chances that all of these networks get unique addresses, +the small institute uses addresses in the IANA's (Internet Assigned +Numbers Authority's) private network address ranges except the +192.168 address range already in widespread use. This still leaves +69,632 8 bit networks (each addressing up to 254 hosts) from which to +choose. The following table lists their CIDRs (subnet numbers in +Classless Inter-Domain Routing notation) in abbreviated form (eliding +69,624 rows). +

+ + + + +++ ++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1: IANA Private 8bit Subnetwork CIDRs
Subnet CIDRHost Addresses
10.0.0.0/2410.0.0.1 – 10.0.0.254
10.0.1.0/2410.0.1.1 – 10.0.1.254
10.0.2.0/2410.0.2.1 – 10.0.2.254
10.255.255.0/2410.255.255.1 – 10.255.255.254
172.16.0.0/24172.16.0.1 – 172.16.0.254
172.16.1.0/24172.16.1.1 – 172.16.1.254
172.16.2.0/24172.16.2.1 – 172.16.2.254
172.31.255.0/24172.31.255.1 – 172.31.255.254
+ +
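The 69,632 figure quoted above is just the number of /24 subnets in the two remaining IANA private ranges, 10.0.0.0/8 and 172.16.0.0/12:

```shell
# /24s in 10.0.0.0/8 (2^16) plus /24s in 172.16.0.0/12 (2^12)
echo $(( 2**16 + 2**12 ))    # prints 69632
```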

+The following Emacs Lisp randomly chooses one of these 8 bit subnets. +The small institute used it to pick its four private subnets. An +example result follows the code. +

+ +
+
(let ((bytes
+         (let ((i (random (+ 256 16))))
+           (if (< i 256)
+               (list 10        i         (1+ (random 254)))
+             (list  172 (+ 16 (- i 256)) (1+ (random 254)))))))
+  (format "%d.%d.%d.0/24" (car bytes) (cadr bytes) (caddr bytes)))
+
+
+ +

+The four private networks are named and given example CIDRs in the +code block below. The small institute treats these addresses as +sensitive information so the code block below "tangles" into +private/vars.yml rather than public/vars.yml. Two of the +addresses are in 192.168 subnets because they are part of a test +configuration using mostly-default VirtualBoxes (described here). +

+ +
+private/vars.yml
---
+private_net_cidr:           192.168.56.0/24
+public_vpn_net_cidr:        10.177.86.0/24
+campus_vpn_net_cidr:        10.84.138.0/24
+gate_wifi_net_cidr:         192.168.57.0/24
+
+
+ +

+The network addresses are needed in several additional formats, e.g. +network address and subnet mask (10.84.138.0 255.255.255.0). The +following boilerplate uses Ansible's ipaddr filter to set several +corresponding variables, each with an appropriate suffix, +e.g. _net_and_mask rather than _net_cidr. +

+ +
+private/vars.yml
private_net:             "{{ private_net_cidr | ipaddr('network') }}"
+private_net_mask:        "{{ private_net_cidr | ipaddr('netmask') }}"
+private_net_and_mask:      "{{ private_net }} {{ private_net_mask }}"
+public_vpn_net:       "{{ public_vpn_net_cidr | ipaddr('network') }}"
+public_vpn_net_mask:  "{{ public_vpn_net_cidr | ipaddr('netmask') }}"
+public_vpn_net_and_mask:
+                     "{{ public_vpn_net }} {{ public_vpn_net_mask }}"
+campus_vpn_net:       "{{ campus_vpn_net_cidr | ipaddr('network') }}"
+campus_vpn_net_mask:  "{{ campus_vpn_net_cidr | ipaddr('netmask') }}"
+campus_vpn_net_and_mask:
+                     "{{ campus_vpn_net }} {{ campus_vpn_net_mask }}"
+gate_wifi_net:         "{{ gate_wifi_net_cidr | ipaddr('network') }}"
+gate_wifi_net_mask:    "{{ gate_wifi_net_cidr | ipaddr('netmask') }}"
+gate_wifi_net_and_mask:
+                       "{{ gate_wifi_net }} {{ gate_wifi_net_mask }}"
+gate_wifi_broadcast: "{{ gate_wifi_net_cidr | ipaddr('broadcast') }}"
+
+
+ +
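The effect of the ipaddr('network') and ipaddr('netmask') filters can be approximated with Python's standard ipaddress module (a rough analogue for checking values, not the Ansible filter itself):

```python
import ipaddress

def net_and_mask(cidr: str) -> str:
    """Render a CIDR as 'network netmask', the format held by the
    *_net_and_mask variables, e.g. '10.84.138.0 255.255.255.0'."""
    net = ipaddress.ip_network(cidr)
    return f"{net.network_address} {net.netmask}"

print(net_and_mask("10.84.138.0/24"))
```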

+The institute prefers to configure its services with IP addresses +rather than domain names, and one of the most important for secure and +reliable operation is Front's public IP address known to the world by +the institute's Internet domain name. +

+ +
+public/vars.yml
front_addr: 192.168.15.5
+
+
+ +

+The example address is a private network address because the example +configuration is intended to run in a test jig made up of VirtualBox +virtual machines and networks, and the VirtualBox user manual uses +192.168.15.0 in its example configuration of a "NAT Network" +(simulating Front's ISP's network). +

+ +

+Finally, five host addresses are needed frequently in the Ansible +code. The first two are Core's and Gate's addresses on the private +Ethernet. The next two are Gate's and the campus Wi-Fi's addresses on +the Gate-WiFi subnet, the tiny Ethernet (gate_wifi_net) between Gate +and the (untrusted) campus Wi-Fi access point. The last is Front's +address on the public VPN, perversely called front_private_addr. +The following code block picks the obvious IP addresses for Core +(host 1) and Gate (host 2). +

+ +
+private/vars.yml
core_addr_cidr:             "{{ private_net_cidr | ipaddr('1') }}"
+gate_addr_cidr:             "{{ private_net_cidr | ipaddr('2') }}"
+gate_wifi_addr_cidr:        "{{ gate_wifi_net_cidr | ipaddr('1') }}"
+wifi_wan_addr_cidr:         "{{ gate_wifi_net_cidr | ipaddr('2') }}"
+front_private_addr_cidr:    "{{ public_vpn_net_cidr | ipaddr('1') }}"
+
+core_addr:                 "{{ core_addr_cidr | ipaddr('address') }}"
+gate_addr:                 "{{ gate_addr_cidr | ipaddr('address') }}"
+gate_wifi_addr:       "{{ gate_wifi_addr_cidr | ipaddr('address') }}"
+wifi_wan_addr:         "{{ wifi_wan_addr_cidr | ipaddr('address') }}"
+front_private_addr:
+                  "{{ front_private_addr_cidr | ipaddr('address') }}"
+
+
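The ipaddr('1') and ipaddr('2') lookups above index host addresses within a subnet. A small Python check of the resulting example values, using the test-configuration CIDRs from private/vars.yml (a sketch of the arithmetic, not the filter):

```python
import ipaddress

def nth_host(cidr: str, n: int) -> str:
    # The nth address within the subnet, as ipaddr('n') followed by
    # ipaddr('address') would yield.
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + n)

core_addr = nth_host("192.168.56.0/24", 1)          # Core, host 1
gate_addr = nth_host("192.168.56.0/24", 2)          # Gate, host 2
front_private_addr = nth_host("10.177.86.0/24", 1)  # Front on the VPN
print(core_addr, gate_addr, front_private_addr)
```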
+
+
+
+
+

5. The Hardware

+
+

+The small institute's network was built by its system administrator +using Ansible on a trusted notebook. The Ansible configuration and +scripts were generated by "tangling" the Ansible code included here. +(The Ansible Configuration describes how to do this.) The following +sections describe how Front, Gate and Core were prepared for Ansible. +

+
+
+

5.1. The Front Machine

+
+

+Front is the small institute's public-facing server, a virtual machine +on the Internets.  It needs only as much disk as required by the +institute's public web site.  Often the cheapest offering (4GB RAM, 1 +core, 20GB disk) is sufficient.  The provider should make it easy and +fast to (re)initialize the machine to a factory fresh Debian Server, +and install additional Debian software packages.  Indeed it should be +possible to quickly re-provision a new Front machine from a frontier +Internet café using just the administrator's notebook.

+
+
+

5.1.1. A Digital Ocean Droplet

+
+

+The following example prepared a new front on a Digital Ocean droplet. +The institute administrator opened an account at Digital Ocean, +registered an ssh key, and used a Digital Ocean control panel to +create a new machine (again, one of the cheapest, smallest available) +with Ubuntu Server 20.04 LTS installed.  Once created, the machine and +its IP address (159.65.75.60) appeared on the panel.  Using that +address, the administrator logged into the new machine with ssh.

+ +

+On the administrator's notebook (in a terminal): +

+ +
+notebook$ ssh root@159.65.75.60
+root@ubuntu# 
+
+ + +

+The freshly created Digital Ocean droplet came with just one account, +root, but the small institute avoids remote access to the "super +user" account (per the policy in The Administration Accounts), so the +administrator created a sysadm account with the ability to request +escalated privileges via the sudo command. +

+ +
+root@ubuntu# adduser sysadm
+...
+New password: givitysticangout
+Retype new password: givitysticangout
+...
+        Full Name []: System Administrator
+...
+Is the information correct? [Y/n] 
+root@ubuntu# adduser sysadm sudo
+root@ubuntu# logout
+notebook$
+
+ + +

+The password was generated by gpw, saved in the administrator's +password keep, and later added to Secret/become.yml as shown below. +(Producing a working Ansible configuration with a Secret/become.yml +file is described in The Ansible Configuration.)

+ +
+notebook$ gpw 1 16
+givitysticangout
+notebook$ echo -n "become_front: " >>Secret/become.yml
+notebook$ ansible-vault encrypt_string givitysticangout \
+notebook_     >>Secret/become.yml
+
+ + +
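gpw prints pronounceable random passwords.  Where gpw is unavailable, a crude stand-in (not gpw's trigraph algorithm) can be improvised with Python's secrets module:

```python
import secrets

# Alternate consonants and vowels for a 16-character password that is
# merely pronounceable-ish; gpw's letter statistics are not reproduced.
CONSONANTS = "bcdfghjklmnprstv"
VOWELS = "aeiou"

def pronounceable(n: int = 16) -> str:
    return "".join(
        secrets.choice(CONSONANTS if i % 2 == 0 else VOWELS)
        for i in range(n))

pw = pronounceable()
print(pw)  # 16 characters, e.g. in the style of 'givitysticangout'
```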

+After creating the sysadm account on the droplet, the administrator +concatenated a personal public ssh key and the key found in +Secret/ssh_admin/ (created by The CA Command) into an admin_keys +file, copied it to the droplet, and installed it as the +authorized_keys for sysadm. +

+ +
+notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
+notebook_     > admin_keys
+notebook$ rsync admin_keys sysadm@159.65.75.60:
+The authenticity of host '159.65.75.60' can't be established.
+....
+Are you sure you want to continue connecting (...)? yes
+...
+sysadm@159.65.75.60's password: givitysticangout
+notebook$ ssh sysadm@159.65.75.60
+sysadm@159.65.75.60's password: givitysticangout
+sysadm@ubuntu$ ( umask 077; mkdir .ssh; \
+sysadm@ubuntu_   cp admin_keys .ssh/authorized_keys; \
+sysadm@ubuntu_   rm admin_keys )
+sysadm@ubuntu$ logout
+notebook$ rm admin_keys
+notebook$
+
+ + +

+The administrator then tested the password-less ssh login as well as +the privilege escalation command. +

+ +
+notebook$ ssh sysadm@159.65.75.60
+sysadm@ubuntu$ sudo head -1 /etc/shadow
+[sudo] password for sysadm:
+root:*:18355:0:99999:7:::
+
+ + +

+After passing the above test, the administrator disabled root logins +on the droplet. The last command below tested that root logins were +indeed denied. +

+ +
+sysadm@ubuntu$ sudo rm -r /root/.ssh
+sysadm@ubuntu$ logout
+notebook$ ssh root@159.65.75.60
+root@159.65.75.60: Permission denied (publickey).
+notebook$ 
+
+ + +

+At this point the droplet was ready for configuration by Ansible. +Later, once the droplet had been provisioned with all of Front's +services and tested, the institute's DNS records were updated to point +the domain name at its new address, 159.65.75.60.

+
+
+
+
+

5.2. The Core Machine

+
+

+Core is the small institute's private file, email, cloud and whatnot +server.  It should have some serious horsepower (RAM, cores, GHz) and +storage (hundreds of gigabytes).  An old desktop system might be +sufficient, and if it later proves otherwise, moving Core to new +hardware is "easy" and good practice.  It is also straightforward to +move the heaviest workloads (storage, cloud, internal web sites) to +additional machines.

+ +

+Core need not have a desktop, and will probably be more reliable if it +is not also playing games.  It will run continuously 24/7 and will +benefit from a UPS (uninterruptible power supply).  Its file system +and services are critical.

+ +

+The following example prepared a new core on a PC with Debian 11 +freshly installed. During installation, the machine was named core, +no desktop or server software was installed, no root password was set, +and a privileged account named sysadm was created (per the policy in +The Administration Accounts). +

+ +
+New password: oingstramextedil
+Retype new password: oingstramextedil
+...
+        Full Name []: System Administrator
+...
+Is the information correct? [Y/n] 
+
+ + +

+The password was generated by gpw, saved in the administrator's +password keep, and later added to Secret/become.yml as shown below. +(Producing a working Ansible configuration with a Secret/become.yml +file is described in The Ansible Configuration.)

+ +
+notebook$ gpw 1 16
+oingstramextedil
+notebook$ echo -n "become_core: " >>Secret/become.yml
+notebook$ ansible-vault encrypt_string oingstramextedil \
+notebook_     >>Secret/become.yml
+
+ + +

+With Debian freshly installed, Core needed several additional software +packages. The administrator temporarily plugged Core into a cable +modem and installed them as shown below. +

+ +
+$ sudo apt install openssh-server rsync isc-dhcp-server netplan.io \
+_                  bind9 fetchmail openvpn apache2
+
+ + +

+The Nextcloud configuration requires Apache2, MariaDB and a number of +PHP modules. Installing them while Core was on a cable modem sped up +final configuration "in position" (on a frontier). +

+ +
+$ sudo apt install mariadb-server php php-{bcmath,curl,gd,gmp,json}\
+_                  php-{mysql,mbstring,intl,imagick,xml,zip} \
+_                  libapache2-mod-php
+
+ + +

+Next, the administrator concatenated a personal public ssh key and the +key found in Secret/ssh_admin/ (created by The CA Command) into an +admin_keys file, copied it to Core, and installed it as the +authorized_keys for sysadm. +

+ +
+notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
+notebook_     > admin_keys
+notebook$ rsync admin_keys sysadm@core.lan:
+The authenticity of host 'core.lan' can't be established.
+....
+Are you sure you want to continue connecting (...)? yes
+...
+sysadm@core.lan's password: oingstramextedil
+notebook$ ssh sysadm@core.lan
+sysadm@core.lan's password: oingstramextedil
+sysadm@core$ ( umask 077; mkdir .ssh; \
+sysadm@core_   cp admin_keys .ssh/authorized_keys )
+sysadm@core$ rm admin_keys
+sysadm@core$ logout
+notebook$ rm admin_keys
+notebook$
+
+ + +

+Note that the name core.lan should be known to the cable modem's DNS +service.  An IP address might be used instead, discovered with an ip +a command on Core.

+ +

+Now Core no longer needed the Internets so it was disconnected from +the cable modem and connected to the campus Ethernet switch. Its +primary Ethernet interface was temporarily (manually) configured with +a new, private IP address and a default route. +

+ +

+In the example command lines below, the address 10.227.248.1 was +generated by the random subnet address picking procedure described in +Subnets, and is named core_addr in the Ansible code. The second +address, 10.227.248.2, is the corresponding address for Gate's +Ethernet interface, and is named gate_addr in the Ansible +code. +

+ +
+sysadm@core$ sudo ip address add 10.227.248.1/24 dev enp82s0
+sysadm@core$ sudo ip route add default via 10.227.248.2 dev enp82s0
+
+ + +
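Note that the address needs its /24 prefix length: a default route's gateway must be reachable on-link, i.e. inside the subnet configured on the interface.  A quick sanity check of the example addresses in Python:

```python
import ipaddress

# Core's interface address with its subnet, and Gate's address:
core_if = ipaddress.ip_interface("10.227.248.1/24")
gateway = ipaddress.ip_address("10.227.248.2")

# The default route via 10.227.248.2 only works if the gateway lies
# inside the subnet on Core's interface.
print(gateway in core_if.network)  # True
```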

+At this point Core was ready for provisioning with Ansible. +

+
+
+
+

5.3. The Gate Machine

+
+

+Gate is the small institute's route to the Internet, and the campus +Wi-Fi's route to the private Ethernet. It has three network +interfaces. +

+ +
    +
  1. lan is its main Ethernet interface, connected to the campus's +private Ethernet switch.
  2. wifi is its second Ethernet interface, connected to the campus +Wi-Fi access point's WAN Ethernet interface (with a cross-over +cable).
  3. isp is its third network interface, connected to the campus +ISP.  This could be an Ethernet device connected to a cable +modem.  It could be a USB port tethered to a phone, a +USB-Ethernet adapter, or a wireless adapter connected to a +campground Wi-Fi access point, etc.
+ +
+=============== | ==================================================
+                |                                           Premises
+          (Campus ISP)                                              
+                |            +----Member's notebook on campus       
+                |            |                                      
+                | +----(Campus Wi-Fi)                               
+                | |                                                 
+============== Gate ================================================
+                |                                            Private
+                +----Ethernet switch                                
+
+
+
+

5.3.1. Alternate Gate Topology

+
+

+While Gate and Core really need to be separate machines for security +reasons, the campus Wi-Fi and the ISP's Wi-Fi can be provided by the +same device.  This avoids the need for a second Wi-Fi access point +and leads to the following topology.

+ +
+=============== | ==================================================
+                |                                           Premises
+           (House ISP)                                              
+          (House Wi-Fi)-----------Member's notebook on campus       
+          (House Ethernet)                                          
+                |                                                   
+============== Gate ================================================
+                |                                            Private
+                +----Ethernet switch                                
+
+

+In this case Gate has two interfaces and there is no Gate-WiFi subnet. +

+ +

+Support for this "alternate" topology is planned but not yet +implemented. Like the original topology, it should require no +changes to a standard cable modem's default configuration (assuming +its Ethernet and Wi-Fi clients are allowed to communicate). +

+
+
+
+

5.3.2. Original Gate Topology

+
+

+The Ansible code in this document is somewhat dependent on the +physical network shown in the Overview wherein Gate has three network +interfaces. +

+ +

+The following example prepared a new gate on a PC with Debian 11 +freshly installed. During installation, the machine was named gate, +no desktop or server software was installed, no root password was set, +and a privileged account named sysadm was created (per the policy in +The Administration Accounts). +

+ +
+New password: icismassssadestm
+Retype new password: icismassssadestm
+...
+        Full Name []: System Administrator
+...
+Is the information correct? [Y/n] 
+
+ + +

+The password was generated by gpw, saved in the administrator's +password keep, and later added to Secret/become.yml as shown below. +(Producing a working Ansible configuration with a Secret/become.yml +file is described in The Ansible Configuration.)

+ +
+notebook$ gpw 1 16
+icismassssadestm
+notebook$ echo -n "become_gate: " >>Secret/become.yml
+notebook$ ansible-vault encrypt_string icismassssadestm \
+notebook_     >>Secret/become.yml
+
+ + +

+With Debian freshly installed, Gate needed a few additional software +packages.  The administrator temporarily plugged Gate into a cable +modem and installed them as shown below.

+ +
+$ sudo apt install openssh-server isc-dhcp-server netplan.io
+
+ + +

+Next, the administrator concatenated a personal public ssh key and the +key found in Secret/ssh_admin/ (created by The CA Command) into an +admin_keys file, copied it to Gate, and installed it as the +authorized_keys for sysadm. +

+ +
+notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
+notebook_     > admin_keys
+notebook$ rsync admin_keys sysadm@gate.lan:
+The authenticity of host 'gate.lan' can't be established.
+....
+Are you sure you want to continue connecting (...)? yes
+...
+sysadm@gate.lan's password: icismassssadestm
+notebook$ ssh sysadm@gate.lan
+sysadm@gate.lan's password: icismassssadestm
+sysadm@gate$ ( umask 077; mkdir .ssh; \
+sysadm@gate_   cp admin_keys .ssh/authorized_keys )
+sysadm@gate$ rm admin_keys
+sysadm@gate$ logout
+notebook$ rm admin_keys
+notebook$
+
+ + +

+Note that the name gate.lan should be known to the cable modem's DNS +service. An IP address might be used instead, discovered with an ip +a command on Gate. +

+ +

+Now Gate no longer needed the Internets so it was disconnected from +the cable modem and connected to the campus Ethernet switch. Its +primary Ethernet interface was temporarily (manually) configured with +a new, private IP address. +

+ +

+In the example command lines below, the address 10.227.248.2 was +generated by the random subnet address picking procedure described in +Subnets, and is named gate_addr in the Ansible code. +

+ +
+$ sudo ip address add 10.227.248.2/24 dev eth0
+
+ + +

+Gate was also connected to the USB Ethernet dongles cabled to the +campus Wi-Fi access point and the campus ISP. The three network +adapters are known by their MAC addresses, the values of the variables +gate_lan_mac, gate_wifi_mac, and gate_isp_mac. (For more +information, see the Gate role's Configure Netplan task.) +

+ +

+At this point Gate was ready for provisioning with Ansible. +

+
+
+
+
+
+

6. The Front Role

+
+

+The front role installs and configures the services expected on the +institute's publicly accessible "front door": email, web, VPN. The +virtual machine is prepared with an Ubuntu Server install and remote +access to a privileged, administrator's account. (For details, see +The Front Machine.) +

+ +

+Front initially presents the same self-signed, "snake oil" server +certificate for its HTTP, SMTP and IMAP services, created by the +institute's certificate authority but "snake oil" all the same +(assuming the small institute is not a well recognized CA). The HTTP, +SMTP and IMAP servers are configured to use the certificate (and +private key) in /etc/server.crt (and /etc/server.key), so +replacing the "snake oil" is as easy as replacing these two files, +perhaps with symbolic links to, for example, +/etc/letsencrypt/live/small.example.org/fullchain.pem. +

+ +

+Note that the OpenVPN server does not use /etc/server.crt. It +uses the institute's CA and server certificates, and expects client +certificates signed by the institute CA. +

+
+
+

6.1. Include Particulars

+
+

+The front role's tasks contain references to several common +institute particulars, variables in the public and private vars.yml +files and the institute membership roll in private/members.yml. The +first front role tasks are to include these files (described in The +Particulars and Account Management). +

+ +

+The code block below is the first to tangle into +roles_t/front/tasks/main.yml.

+ +
+roles_t/front/tasks/main.yml
---
+- name: Include public variables.
+  include_vars: ../public/vars.yml
+  tags: accounts
+
+- name: Include private variables.
+  include_vars: ../private/vars.yml
+  tags: accounts
+
+- name: Include members.
+  include_vars: "{{ lookup('first_found', membership_rolls) }}"
+  tags: accounts
+
+
+
+
+
+

6.2. Configure Hostname

+
+

+This task ensures that Front's /etc/hostname and /etc/mailname are +correct. The correct /etc/mailname is essential to proper email +delivery. +

+ +
+roles_t/front/tasks/main.yml
- name: Configure hostname.
+  become: yes
+  copy:
+    content: "{{ domain_name }}\n"
+    dest: "{{ item }}"
+  loop:
+  - /etc/hostname
+  - /etc/mailname
+  notify: Update hostname.
+
+
+ +
+roles_t/front/handlers/main.yml
---
+- name: Update hostname.
+  become: yes
+  command: hostname -F /etc/hostname
+
+
+
+
+
+

6.3. Enable Systemd Resolved

+
+

+The systemd-networkd and systemd-resolved service units are not +enabled by default in Debian, but are the default in Ubuntu, and +work with Netplan. The /usr/share/doc/systemd/README.Debian.gz file +recommends both services be enabled and /etc/resolv.conf be +replaced with a symbolic link to /run/systemd/resolve/resolv.conf. +The institute follows these recommendations (and not the suggestion +to enable "persistent logging", yet). In Debian 12 there is a +systemd-resolved package that symbolically links /etc/resolv.conf +(and provides /lib/systemd/systemd-resolved, formerly part of the +systemd package). +

+ +

+These tasks are included in all of the roles, and so are given in a +separate code block named enable-resolved.2 +

+ +
+roles_t/front/tasks/main.yml
+- name: Install systemd-resolved.
+  become: yes
+  apt: pkg=systemd-resolved
+  when:
+  - ansible_distribution == 'Debian'
+  - 11 < ansible_distribution_major_version|int
+
+- name: Enable/Start systemd-networkd.
+  become: yes
+  systemd:
+    service: systemd-networkd
+    enabled: yes
+    state: started
+
+- name: Enable/Start systemd-resolved.
+  become: yes
+  systemd:
+    service: systemd-resolved
+    enabled: yes
+    state: started
+
+- name: Link /etc/resolv.conf.
+  become: yes
+  file:
+    path: /etc/resolv.conf
+    src: /run/systemd/resolve/resolv.conf
+    state: link
+    force: yes
+  when:
+  - ansible_distribution == 'Debian'
+  - 12 > ansible_distribution_major_version|int
+
+
+ +
+enable-resolved
+- name: Install systemd-resolved.
+  become: yes
+  apt: pkg=systemd-resolved
+  when:
+  - ansible_distribution == 'Debian'
+  - 11 < ansible_distribution_major_version|int
+
+- name: Enable/Start systemd-networkd.
+  become: yes
+  systemd:
+    service: systemd-networkd
+    enabled: yes
+    state: started
+
+- name: Enable/Start systemd-resolved.
+  become: yes
+  systemd:
+    service: systemd-resolved
+    enabled: yes
+    state: started
+
+- name: Link /etc/resolv.conf.
+  become: yes
+  file:
+    path: /etc/resolv.conf
+    src: /run/systemd/resolve/resolv.conf
+    state: link
+    force: yes
+  when:
+  - ansible_distribution == 'Debian'
+  - 12 > ansible_distribution_major_version|int
+
+
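The when: clauses above gate tasks on the Debian major version.  Their combined effect can be tabulated with a small Python sketch (a hypothetical helper for illustration, not Ansible code):

```python
def resolved_plan(distro: str, major: int) -> dict:
    """Which conditional tasks run, per the 'when' clauses: the
    systemd-resolved package exists only from Debian 12 on, while
    the resolv.conf symlink is needed only before Debian 12."""
    debian = distro == "Debian"
    return {"install_systemd_resolved": debian and major > 11,
            "link_resolv_conf": debian and major < 12}

print(resolved_plan("Debian", 11))
print(resolved_plan("Debian", 12))
```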
+
+
+
+

6.4. Add Administrator to System Groups

+
+

+The administrator often needs to read (directories of) log files owned +by groups root and adm. Adding the administrator's account to +these groups speeds up debugging. +

+ +
+roles_t/front/tasks/main.yml
+- name: Add {{ ansible_user }} to system groups.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: root,adm
+
+
+
+
+
+

6.5. Configure SSH

+
+

+The SSH service on Front needs to be known to Monkey. The following +tasks ensure this by replacing the automatically generated keys with +those stored in Secret/ssh_front/etc/ssh/ and restarting the server. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install SSH host keys.
+  become: yes
+  copy:
+    src: ../Secret/ssh_front/etc/ssh/{{ item.name }}
+    dest: /etc/ssh/{{ item.name }}
+    mode: "{{ item.mode }}"
+  loop:
+  - { name: ssh_host_ecdsa_key,       mode: "u=rw,g=,o=" }
+  - { name: ssh_host_ecdsa_key.pub,   mode: "u=rw,g=r,o=r" }
+  - { name: ssh_host_ed25519_key,     mode: "u=rw,g=,o=" }
+  - { name: ssh_host_ed25519_key.pub, mode: "u=rw,g=r,o=r" }
+  - { name: ssh_host_rsa_key,         mode: "u=rw,g=,o=" }
+  - { name: ssh_host_rsa_key.pub,     mode: "u=rw,g=r,o=r" }
+  notify: Reload SSH server.
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Reload SSH server.
+  become: yes
+  systemd:
+    service: ssh
+    state: reloaded
+
+
+
+
+
+

6.6. Configure Monkey

+
+

+The small institute runs cron jobs and web scripts that generate +reports and perform checks.  The un-privileged jobs are run by a +system account named monkey.  One of Monkey's more important jobs on +Core is to run rsync to update the public web site on Front.  Monkey +on Core will login as monkey on Front to synchronize the files (as +described in Configure Apache2).  To do that without needing a +password, the monkey account on Front should authorize Monkey's SSH +key on Core.

+ +
+roles_t/front/tasks/main.yml
+- name: Create monkey.
+  become: yes
+  user:
+    name: monkey
+    system: yes
+
+- name: Authorize monkey@core.
+  become: yes
+  vars:
+    pubkeyfile: ../Secret/ssh_monkey/id_rsa.pub
+  authorized_key:
+    user: monkey
+    key: "{{ lookup('file', pubkeyfile) }}"
+    manage_dir: yes
+
+- name: Add {{ ansible_user }} to monkey group.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: monkey
+
+
+
+
+
+

6.7. Install Rsync

+
+

+Monkey uses Rsync to keep the institute's public web site up-to-date. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install rsync.
+  become: yes
+  apt: pkg=rsync
+
+
+
+
+
+

6.8. Install Unattended Upgrades

+
+

+The institute prefers to install security updates as soon as possible. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install basic software.
+  become: yes
+  apt: pkg=unattended-upgrades
+
+
+
+
+
+

6.9. Configure User Accounts

+
+

+User accounts are created immediately so that Postfix and Dovecot can +start delivering email immediately, without returning "no such +recipient" replies. The Account Management chapter describes the +members and usernames variables used below. +

+ +
+roles_t/front/tasks/main.yml
+- name: Create user accounts.
+  become: yes
+  user:
+    name: "{{ item }}"
+    password: "{{ members[item].password_front }}"
+    update_password: always
+    home: /home/{{ item }}
+  loop: "{{ usernames }}"
+  when: members[item].status == 'current'
+  tags: accounts
+
+- name: Disable former users.
+  become: yes
+  user:
+    name: "{{ item }}"
+    password: "!"
+  loop: "{{ usernames }}"
+  when: members[item].status != 'current'
+  tags: accounts
+
+- name: Revoke former user authorized_keys.
+  become: yes
+  file:
+    path: /home/{{ item }}/.ssh/authorized_keys
+    state: absent
+  loop: "{{ usernames }}"
+  when: members[item].status != 'current'
+  tags: accounts
+
+
+
+
+
+

6.10. Trust Institute Certificate Authority

+
+

+Front should recognize the institute's Certificate Authority as +trustworthy, so its certificate is added to Front's set of trusted +CAs. More information about how the small institute manages its +X.509 certificates is available in Keys. +

+ +
+roles_t/front/tasks/main.yml
+- name: Trust the institute CA.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/ca.crt
+    dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt
+    mode: u=r,g=r,o=r
+    owner: root
+    group: root
+  notify: Update CAs.
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Update CAs.
+  become: yes
+  command: update-ca-certificates
+
+
+
+
+
+

6.11. Install Server Certificate

+
+

+The servers on Front use the same certificate (and key) to +authenticate themselves to institute clients. They share the +/etc/server.crt and /etc/server.key files, the latter only +readable by root. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install server certificate/key.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
+    dest: /etc/server.{{ item.typ }}
+    mode: "{{ item.mode }}"
+    force: no
+  loop:
+  - { path: "issued/{{ domain_name }}", typ: crt,
+      mode: "u=r,g=r,o=r" }
+  - { path: "private/{{ domain_name }}", typ: key,
+      mode: "u=r,g=,o=" }
+  notify:
+  - Restart Postfix.
+  - Restart Dovecot.
+
+
+
+
+
+

6.12. Configure Postfix on Front

+
+

+Front uses Postfix to provide the institute's public SMTP service, and +uses the institute's domain name for its host name. The default +Debian configuration (for an "Internet Site") is nearly sufficient. +Manual installation may prompt for configuration type and mail name. +The appropriate answers are listed here but will be checked +(corrected) by Ansible tasks below. +

+ +
    +
  • General type of mail configuration: Internet Site
  • System mail name: small.example.org
+ +

+As discussed in The Email Service above, Front's Postfix configuration +includes site-wide support for larger message sizes, shorter queue +times, the relaying configuration, and the common path to incoming +emails.  These and a few Front-specific Postfix configuration +settings make up the complete configuration (below).

+ +

+Front relays messages from the institute's public VPN via which Core +relays messages from the campus. +

+ +
+postfix-front-networks
- p: mynetworks
+  v: >-
+     {{ public_vpn_net_cidr }}
+     127.0.0.0/8
+     [::ffff:127.0.0.0]/104
+     [::1]/128
+
+
+ +

+Front uses one recipient restriction to make things difficult for +spammers, with permit_mynetworks at the start so as not to make +things difficult for internal hosts, which do not have (public) +domain names.

+ +
+postfix-front-restrictions
- p: smtpd_recipient_restrictions
+  v: >-
+     permit_mynetworks
+     reject_unauth_pipelining
+     reject_unauth_destination
+     reject_unknown_sender_domain
+
+
+ +

+Front uses Postfix header checks to strip Received headers from +outgoing messages. These headers contain campus host and network +names and addresses in the clear (un-encrypted). Stripping them +improves network privacy and security. Front also strips User-Agent +headers just to make it harder to target the program(s) members use to +open their email. These headers should be stripped only from outgoing +messages; incoming messages are delivered locally, without +smtp_header_checks. +

+ +
+postfix-header-checks
- p: smtp_header_checks
+  v: regexp:/etc/postfix/header_checks.cf
+
+
+ +
+postfix-header-checks-content
/^Received:/    IGNORE
+/^User-Agent:/  IGNORE
+
+
+ +
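The IGNORE rules above can be illustrated with a few lines of Python showing which header lines survive (an illustration of the effect, not Postfix's own matcher):

```python
import re

# The two header_checks patterns, applied to an outgoing message's
# header lines; matching lines are dropped (IGNORE).
rules = [re.compile(r"^Received:"), re.compile(r"^User-Agent:")]
headers = [
    "From: member@small.example.org",
    "Received: from core.small.private (10.227.248.1) ...",
    "User-Agent: SomeMailer/1.0",
    "Subject: hello",
]
kept = [h for h in headers if not any(r.match(h) for r in rules)]
print(kept)  # only the From and Subject lines remain
```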

+The complete Postfix configuration for Front follows. In addition to +the options already discussed, it must override the loopback-only +Debian default for inet_interfaces. +

+ +
+postfix-front
- { p: smtpd_tls_cert_file, v: /etc/server.crt }
+- { p: smtpd_tls_key_file, v: /etc/server.key }
+- p: mynetworks
+  v: >-
+     {{ public_vpn_net_cidr }}
+     127.0.0.0/8
+     [::ffff:127.0.0.0]/104
+     [::1]/128
+- p: smtpd_recipient_restrictions
+  v: >-
+     permit_mynetworks
+     reject_unauth_pipelining
+     reject_unauth_destination
+     reject_unknown_sender_domain
+- p: smtpd_relay_restrictions
+  v: permit_mynetworks reject_unauth_destination
+- { p: message_size_limit, v: 104857600 }
+- { p: delay_warning_time, v: 1h }
+- { p: maximal_queue_lifetime, v: 4h }
+- { p: bounce_queue_lifetime, v: 4h }
+- { p: home_mailbox, v: Maildir/ }
+- p: smtp_header_checks
+  v: regexp:/etc/postfix/header_checks.cf
+
+
+ +

+The following Ansible tasks install Postfix, modify +/etc/postfix/main.cf according to the settings given above, and +start and enable the service. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install Postfix.
+  become: yes
+  apt: pkg=postfix
+
+- name: Configure Postfix.
+  become: yes
+  lineinfile:
+    path: /etc/postfix/main.cf
+    regexp: "^ *{{ item.p }} *="
+    line: "{{ item.p }} = {{ item.v }}"
+  loop:
+  - { p: smtpd_tls_cert_file, v: /etc/server.crt }
+  - { p: smtpd_tls_key_file, v: /etc/server.key }
+  - p: mynetworks
+    v: >-
+       {{ public_vpn_net_cidr }}
+       127.0.0.0/8
+       [::ffff:127.0.0.0]/104
+       [::1]/128
+  - p: smtpd_recipient_restrictions
+    v: >-
+       permit_mynetworks
+       reject_unauth_pipelining
+       reject_unauth_destination
+       reject_unknown_sender_domain
+  - p: smtpd_relay_restrictions
+    v: permit_mynetworks reject_unauth_destination
+  - { p: message_size_limit, v: 104857600 }
+  - { p: delay_warning_time, v: 1h }
+  - { p: maximal_queue_lifetime, v: 4h }
+  - { p: bounce_queue_lifetime, v: 4h }
+  - { p: home_mailbox, v: Maildir/ }
+  - p: smtp_header_checks
+    v: regexp:/etc/postfix/header_checks.cf
+  notify: Restart Postfix.
+
+- name: Install Postfix header_checks.
+  become: yes
+  copy:
+    content: |
+      /^Received:/      IGNORE
+      /^User-Agent:/    IGNORE
+    dest: /etc/postfix/header_checks.cf
+  notify: Postmap header checks.
+
+- name: Enable/Start Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Restart Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    state: restarted
+
+- name: Postmap header checks.
+  become: yes
+  command:
+    chdir: /etc/postfix/
+    cmd: postmap header_checks.cf
+  notify: Restart Postfix.
+
+
+
+
+
+

6.13. Configure Public Email Aliases

+
+

+The institute's Front needs to deliver email addressed to a number of
common aliases as well as those advertised on the web site.  System
daemons like cron(8) may also send email to system accounts like
monkey.  The following aliases make these customary mailboxes
available.  They are installed in /etc/aliases in a block with a
special marker so that additional blocks can be installed by other
Ansible roles.  Note that the postmaster alias already forwards to
root in the default Debian configuration, and that the block below
includes the crucial root alias, forwarding it to the administrator's
account (the Ansible user).
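+The resulting alias chains can be sketched as follows.  Here sysadm
is a hypothetical stand-in for the {{ ansible_user }} value; Postfix
expands chains like this when it delivers a message.

```python
# The institute's alias block, rendered as a Python dict.
aliases = {"abuse": "root", "webmaster": "root", "admin": "root",
           "root": "sysadm"}   # 'sysadm' stands in for ansible_user

def resolve(addr, aliases, limit=16):
    """Follow an alias chain to its final mailbox (limit guards
    against accidental alias loops)."""
    while addr in aliases and limit > 0:
        addr, limit = aliases[addr], limit - 1
    return addr

assert resolve("abuse", aliases) == "sysadm"
# monkey is not in this dict; in practice it forwards to Core's VPN address.
assert resolve("monkey", aliases) == "monkey"
```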

+ +
+roles_t/front/tasks/main.yml
+- name: Install institute email aliases.
+  become: yes
+  blockinfile:
+    block: |
+        abuse:          root
+        webmaster:      root
+        admin:          root
+        monkey:         monkey@{{ front_private_addr }}
+        root:           {{ ansible_user }}
+    path: /etc/aliases
+    marker: "# {mark} INSTITUTE MANAGED BLOCK"
+  notify: New aliases.
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: New aliases.
+  become: yes
+  command: newaliases
+
+
+
+
+
+

6.14. Configure Dovecot IMAPd

+
+

+Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to +pick up messages. Front's Dovecot configuration is largely the Debian +default with POP and IMAP (without TLS) support disabled. This is a +bit "over the top" given that Core accesses Front via VPN, but helps +to ensure privacy even when members must, in extremis, access recent +email directly from their accounts on Front. For more information +about Front's role in the institute's email services, see The Email +Service. +

+ +

+The institute follows the recommendation in the package +README.Debian (in /usr/share/dovecot-core/). Note that the +default "snake oil" certificate can be replaced with one signed by a +recognized authority (e.g. Let's Encrypt) so that email apps will not +ask about trusting the self-signed certificate. +

+ +

+The following Ansible tasks install Dovecot's IMAP daemon and its
/etc/dovecot/local.conf configuration file, then start the service
and enable it to start at every reboot.

+ +
+roles_t/front/tasks/main.yml
+- name: Install Dovecot IMAPd.
+  become: yes
+  apt: pkg=dovecot-imapd
+
+- name: Configure Dovecot IMAPd.
+  become: yes
+  copy:
+    content: |
+      protocols = imap
+      ssl = required
+      ssl_cert = </etc/server.crt
+      ssl_key = </etc/server.key
+      service imap-login {
+        inet_listener imap {
+          port = 0
+        }
+      }
+      mail_location = maildir:~/Maildir
+    dest: /etc/dovecot/local.conf
+  notify: Restart Dovecot.
+
+- name: Enable/Start Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Restart Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    state: restarted
+
+
+
+
+
+

6.15. Configure Apache2

+
+

+This is the small institute's public web site.  It is simple, static,
and thus (hopefully) difficult to subvert.  There are no server-side
scripts to run.  The standard Debian install runs the server under
the www-data account, which needs no special permissions.  It serves
only world-readable files.

+ +

+The server's document root, /home/www/, is separate from the Debian +default /var/www/html/ and (presumably) on the largest disk +partition. The directory tree, from the document root to the leaf +HTML files, should be owned by monkey, and only writable by its +owner. It should not be writable by the Apache2 server (running as +www-data). +

+ +

+The institute uses several SSL directives to trim protocol and cipher +suite compatibility down, eliminating old and insecure methods and +providing for forward secrecy. Along with an up-to-date Let's Encrypt +certificate, these settings win the institute's web site an A rating +from Qualys SSL Labs (https://www.ssllabs.com/). +

+ +

+The apache-ciphers block below is included last in the Apache2
configuration, so that its SSLCipherSuite directive can override
(narrow) any list of ciphers set earlier (e.g. by Let's Encrypt!).
The protocols and cipher suites specified here were taken from
https://www.ssllabs.com/projects/best-practices in 2022.
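+The Jinja expression keeps the long cipher list readable in the
source; the rendered directive is a single colon-separated value.
The same join, sketched in Python with a shortened, hypothetical
list:

```python
# A few entries from the cipher list; the real list is much longer.
ciphers = ["ECDHE-ECDSA-AES128-GCM-SHA256",
           "ECDHE-RSA-AES128-GCM-SHA256",
           "!aNULL", "!RC4"]

# Jinja's |join(":") renders the list as one directive value.
line = "SSLCipherSuite " + ":".join(ciphers)
assert line == ("SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:"
                "ECDHE-RSA-AES128-GCM-SHA256:!aNULL:!RC4")
```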

+ +
+apache-ciphers
+SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
+SSLHonorCipherOrder on
+SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
+                    'ECDHE-ECDSA-AES256-GCM-SHA384',
+                    'ECDHE-ECDSA-AES128-SHA',
+                    'ECDHE-ECDSA-AES256-SHA',
+                    'ECDHE-ECDSA-AES128-SHA256',
+                    'ECDHE-ECDSA-AES256-SHA384',
+                    'ECDHE-RSA-AES128-GCM-SHA256',
+                    'ECDHE-RSA-AES256-GCM-SHA384',
+                    'ECDHE-RSA-AES128-SHA',
+                    'ECDHE-RSA-AES256-SHA',
+                    'ECDHE-RSA-AES128-SHA256',
+                    'ECDHE-RSA-AES256-SHA384',
+                    'DHE-RSA-AES128-GCM-SHA256',
+                    'DHE-RSA-AES256-GCM-SHA384',
+                    'DHE-RSA-AES128-SHA',
+                    'DHE-RSA-AES256-SHA',
+                    'DHE-RSA-AES128-SHA256',
+                    'DHE-RSA-AES256-SHA256',
+                    '!aNULL',
+                    '!eNULL',
+                    '!LOW',
+                    '!3DES',
+                    '!MD5',
+                    '!EXP',
+                    '!PSK',
+                    '!SRP',
+                    '!DSS',
+                    '!RC4' ] |join(":") }}
+
+
+ +

+The institute supports public member (static) web pages. A member can +put an index.html file in their ~/Public/HTML/ directory on Front +and it will be served as https://small.example.org/~member/ (if the +member's account name is member and the file is world readable). +

+ +

+On Front, a member's web pages are available only when they appear in +/home/www-users/ (via a symbolic link), giving the administration +more control over what appears on the public web site. The tasks +below create or remove the symbolic links. +
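+The translation from a member's URL to the served file can be
sketched as below.  This is only an illustration: Apache's
mod_userdir performs the actual translation, and DirectoryIndex
supplies index.html; the function name and the member name are
hypothetical.

```python
import os.path

def userdir_file(url_path, userdir="/home/www-users"):
    """Map '/~member/pics/a.html' to
    '/home/www-users/member/pics/a.html' (illustrative sketch)."""
    assert url_path.startswith("/~")
    user, _, tail = url_path[2:].partition("/")
    # DirectoryIndex supplies index.html for a bare directory URL.
    return os.path.join(userdir, user, tail or "index.html")

assert userdir_file("/~member/") == "/home/www-users/member/index.html"
```

The symbolic link /home/www-users/member then carries the request on
to the member's actual ~/Public/HTML/ tree.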

+ +

+The following are the necessary Apache2 directives: a UserDir
directive naming /home/www-users/, a matching Directory block that
allows the server to follow the symbolic links, and a Directory
block that matches the user directories and includes the standard
Require and AllowOverride directives used on all of the institute's
static web sites (https://small.example.org/, http://live/, and
http://test/).

+ +
+apache-userdir-front
+UserDir /home/www-users
+<Directory /home/www-users/>
+        Require all granted
+        AllowOverride None
+</Directory>
+
+
+ +
+apache-userdir-directory
+Require all granted
+AllowOverride None
+
+
+ +

+The institute requires the use of HTTPS on Front, so its default HTTP +virtual host permanently redirects requests to their corresponding +HTTPS URLs. +

+ +
+apache-redirect-front
+<VirtualHost *:80>
+        Redirect permanent / https://{{ domain_name }}/
+</VirtualHost>
+
+
+ +

+The complete Apache2 configuration for Front is given below. It is +installed in /etc/apache2/sites-available/{{ domain_name }}.conf (as +expected by Let's Encrypt's Certbot). It includes the fragments +described above and adds a VirtualHost block for the HTTPS service +(also as expected by Certbot). The VirtualHost optionally includes +an additional configuration file to allow other Ansible roles to +specialize this configuration without disturbing the institute file. +

+ +

+The DocumentRoot directive is accompanied by a Directory block +that authorizes access to the tree, and ensures .htaccess files +within the tree are disabled for speed and security. This and most of +Front's Apache2 directives (below) are intended for the top level, not +inside a VirtualHost block, to apply globally. +

+ +
+apache-front
+ServerName {{ domain_name }}
+ServerAdmin webmaster@{{ domain_name }}
+
+DocumentRoot /home/www
+<Directory /home/www/>
+        Require all granted
+        AllowOverride None
+</Directory>
+
+UserDir /home/www-users
+<Directory /home/www-users/>
+        Require all granted
+        AllowOverride None
+</Directory>
+
+ErrorLog ${APACHE_LOG_DIR}/error.log
+CustomLog ${APACHE_LOG_DIR}/access.log combined
+
+<VirtualHost *:80>
+        Redirect permanent / https://{{ domain_name }}/
+</VirtualHost>
+
+<VirtualHost *:443>
+        SSLEngine on
+        SSLCertificateFile /etc/server.crt
+        SSLCertificateKeyFile /etc/server.key
+        IncludeOptional \
+            /etc/apache2/sites-available/{{ domain_name }}-vhost.conf
+</VirtualHost>
+
+SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
+SSLHonorCipherOrder on
+SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
+                    'ECDHE-ECDSA-AES256-GCM-SHA384',
+                    'ECDHE-ECDSA-AES128-SHA',
+                    'ECDHE-ECDSA-AES256-SHA',
+                    'ECDHE-ECDSA-AES128-SHA256',
+                    'ECDHE-ECDSA-AES256-SHA384',
+                    'ECDHE-RSA-AES128-GCM-SHA256',
+                    'ECDHE-RSA-AES256-GCM-SHA384',
+                    'ECDHE-RSA-AES128-SHA',
+                    'ECDHE-RSA-AES256-SHA',
+                    'ECDHE-RSA-AES128-SHA256',
+                    'ECDHE-RSA-AES256-SHA384',
+                    'DHE-RSA-AES128-GCM-SHA256',
+                    'DHE-RSA-AES256-GCM-SHA384',
+                    'DHE-RSA-AES128-SHA',
+                    'DHE-RSA-AES256-SHA',
+                    'DHE-RSA-AES128-SHA256',
+                    'DHE-RSA-AES256-SHA256',
+                    '!aNULL',
+                    '!eNULL',
+                    '!LOW',
+                    '!3DES',
+                    '!MD5',
+                    '!EXP',
+                    '!PSK',
+                    '!SRP',
+                    '!DSS',
+                    '!RC4' ] |join(":") }}
+
+
+ +

+Ansible installs the configuration above in +e.g. /etc/apache2/sites-available/small.example.org.conf and runs +a2ensite -q small.example.org to enable it. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install Apache2.
+  become: yes
+  apt: pkg=apache2
+
+- name: Enable Apache2 modules.
+  become: yes
+  apache2_module:
+    name: "{{ item }}"
+  loop: [ ssl, userdir ]
+  notify: Restart Apache2.
+
+- name: Create DocumentRoot.
+  become: yes
+  file:
+    path: /home/www
+    state: directory
+    owner: monkey
+    group: monkey
+
+- name: Configure web site.
+  become: yes
+  copy:
+    content: |
+      ServerName {{ domain_name }}
+      ServerAdmin webmaster@{{ domain_name }}
+
+      DocumentRoot /home/www
+      <Directory /home/www/>
+        Require all granted
+        AllowOverride None
+      </Directory>
+
+      UserDir /home/www-users
+      <Directory /home/www-users/>
+        Require all granted
+        AllowOverride None
+      </Directory>
+
+      ErrorLog ${APACHE_LOG_DIR}/error.log
+      CustomLog ${APACHE_LOG_DIR}/access.log combined
+
+      <VirtualHost *:80>
+        Redirect permanent / https://{{ domain_name }}/
+      </VirtualHost>
+
+      <VirtualHost *:443>
+        SSLEngine on
+        SSLCertificateFile /etc/server.crt
+        SSLCertificateKeyFile /etc/server.key
+        IncludeOptional \
+            /etc/apache2/sites-available/{{ domain_name }}-vhost.conf
+      </VirtualHost>
+
+      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
+      SSLHonorCipherOrder on
+      SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256',
+                          'ECDHE-ECDSA-AES256-GCM-SHA384',
+                          'ECDHE-ECDSA-AES128-SHA',
+                          'ECDHE-ECDSA-AES256-SHA',
+                          'ECDHE-ECDSA-AES128-SHA256',
+                          'ECDHE-ECDSA-AES256-SHA384',
+                          'ECDHE-RSA-AES128-GCM-SHA256',
+                          'ECDHE-RSA-AES256-GCM-SHA384',
+                          'ECDHE-RSA-AES128-SHA',
+                          'ECDHE-RSA-AES256-SHA',
+                          'ECDHE-RSA-AES128-SHA256',
+                          'ECDHE-RSA-AES256-SHA384',
+                          'DHE-RSA-AES128-GCM-SHA256',
+                          'DHE-RSA-AES256-GCM-SHA384',
+                          'DHE-RSA-AES128-SHA',
+                          'DHE-RSA-AES256-SHA',
+                          'DHE-RSA-AES128-SHA256',
+                          'DHE-RSA-AES256-SHA256',
+                          '!aNULL',
+                          '!eNULL',
+                          '!LOW',
+                          '!3DES',
+                          '!MD5',
+                          '!EXP',
+                          '!PSK',
+                          '!SRP',
+                          '!DSS',
+                          '!RC4' ] |join(":") }}
+    dest: /etc/apache2/sites-available/{{ domain_name }}.conf
+  notify: Restart Apache2.
+
+- name: Enable web site.
+  become: yes
+  command:
+    cmd: a2ensite -q {{ domain_name }}
+    creates: /etc/apache2/sites-enabled/{{ domain_name }}.conf
+  notify: Restart Apache2.
+
+- name: Enable/Start Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Restart Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    state: restarted
+
+
+ +

+Furthermore, the default web site and its HTTPS version are disabled
so that they do not interfere with their replacement.

+ +
+roles_t/front/tasks/main.yml
+- name: Disable default vhosts.
+  become: yes
+  file:
+    path: /etc/apache2/sites-enabled/{{ item }}
+    state: absent
+  loop: [ 000-default.conf, default-ssl.conf ]
+  notify: Restart Apache2.
+
+
+ +

+The redundant default other-vhosts-access-log configuration option +is also disabled. There are no other virtual hosts, and it stores the +same records as access.log. +

+ +
+roles_t/front/tasks/main.yml
+- name: Disable other-vhosts-access-log option.
+  become: yes
+  file:
+    path: /etc/apache2/conf-enabled/other-vhosts-access-log.conf
+    state: absent
+  notify: Restart Apache2.
+
+
+ +

+Finally, the UserDir is created and populated with symbolic links to +the users' ~/Public/HTML/ directories. +

+ +
+roles_t/front/tasks/main.yml
+- name: Create UserDir.
+  become: yes
+  file:
+    path: /home/www-users/
+    state: directory
+
+- name: Create UserDir links.
+  become: yes
+  file:
+    path: /home/www-users/{{ item }}
+    src: /home/{{ item }}/Public/HTML
+    state: link
+    force: yes
+  loop: "{{ usernames }}"
+  when: members[item].status == 'current'
+  tags: accounts
+
+- name: Disable former UserDir links.
+  become: yes
+  file:
+    path: /home/www-users/{{ item }}
+    state: absent
+  loop: "{{ usernames }}"
+  when: members[item].status != 'current'
+  tags: accounts
+
+
+
+
+
+

6.16. Configure OpenVPN

+
+

+Front uses OpenVPN to provide the institute's public VPN service. The +configuration is straightforward with one complication. OpenVPN needs +to know how to route to the campus VPN, which is only accessible when +Core is connected. OpenVPN supports these dynamic routes internally +with client-specific configuration files. The small institute uses +one of these, /etc/openvpn/ccd/core, so that OpenVPN will know to +route packets for the campus networks to Core. +

+ +
+openvpn-ccd-core
+iroute {{ private_net_and_mask }}
+iroute {{ campus_vpn_net_and_mask }}
+
+
+ +

+The VPN clients are not configured to route all of their traffic +through the VPN, so Front pushes routes to the other institute +networks. The clients thus know to route traffic for the private +Ethernet or campus VPN to Front on the public VPN. (If the clients +were configured to route all traffic through the VPN, the one +default route is all that would be needed.) Front itself is in the +same situation, outside the institute networks with a default route +through some ISP, and thus needs the same routes as the clients. +

+ +
+openvpn-front-routes
+route {{ private_net_and_mask }}
+route {{ campus_vpn_net_and_mask }}
+push "route {{ private_net_and_mask }}"
+push "route {{ campus_vpn_net_and_mask }}"
+
+
+ +

+The complete OpenVPN configuration for Front includes a server +option, the client-config-dir option, the routes mentioned above, +and the common options discussed in The VPN Service. +

+ +
+openvpn-front
+server {{ public_vpn_net_and_mask }}
+client-config-dir /etc/openvpn/ccd
+route {{ private_net_and_mask }}
+route {{ campus_vpn_net_and_mask }}
+push "route {{ private_net_and_mask }}"
+push "route {{ campus_vpn_net_and_mask }}"
+dev-type tun
+dev ovpn
+topology subnet
+client-to-client
+keepalive 10 120
+push "dhcp-option DOMAIN {{ domain_priv }}"
+push "dhcp-option DNS {{ core_addr }}"
+user nobody
+group nogroup
+persist-key
+persist-tun
+cipher AES-256-GCM
+auth SHA256
+max-clients 20
+ifconfig-pool-persist ipp.txt
+status openvpn-status.log
+verb 3
+ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
+cert server.crt
+key server.key
+dh dh2048.pem
+tls-auth ta.key 0
+
+
+ +

+Finally, here are the tasks (and handler) required to install and +configure the OpenVPN server on Front. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install OpenVPN.
+  become: yes
+  apt: pkg=openvpn
+
+- name: Enable IP forwarding.
+  become: yes
+  sysctl:
+    name: net.ipv4.ip_forward
+    value: "1"
+    state: present
+
+- name: Create OpenVPN client configuration directory.
+  become: yes
+  file:
+    path: /etc/openvpn/ccd
+    state: directory
+  notify: Restart OpenVPN.
+
+- name: Install OpenVPN client configuration for Core.
+  become: yes
+  copy:
+    content: |
+      iroute {{ private_net_and_mask }}
+      iroute {{ campus_vpn_net_and_mask }}
+    dest: /etc/openvpn/ccd/core
+  notify: Restart OpenVPN.
+
+- name: Disable former VPN clients.
+  become: yes
+  copy:
+    content: "disable\n"
+    dest: /etc/openvpn/ccd/{{ item }}
+  loop: "{{ revoked }}"
+  tags: accounts
+
+- name: Install OpenVPN server certificate/key.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
+    dest: /etc/openvpn/server.{{ item.typ }}
+    mode: "{{ item.mode }}"
+  loop:
+  - { path: "issued/{{ domain_name }}", typ: crt,
+      mode: "u=r,g=r,o=r" }
+  - { path: "private/{{ domain_name }}", typ: key,
+      mode: "u=r,g=,o=" }
+  notify: Restart OpenVPN.
+
+- name: Install OpenVPN secrets.
+  become: yes
+  copy:
+    src: ../Secret/{{ item.src }}
+    dest: /etc/openvpn/{{ item.dest }}
+    mode: u=r,g=,o=
+  loop:
+  - { src: front-dh2048.pem, dest: dh2048.pem }
+  - { src: front-ta.key, dest: ta.key }
+  notify: Restart OpenVPN.
+
+- name: Configure OpenVPN.
+  become: yes
+  copy:
+    content: |
+      server {{ public_vpn_net_and_mask }}
+      client-config-dir /etc/openvpn/ccd
+      route {{ private_net_and_mask }}
+      route {{ campus_vpn_net_and_mask }}
+      push "route {{ private_net_and_mask }}"
+      push "route {{ campus_vpn_net_and_mask }}"
+      dev-type tun
+      dev ovpn
+      topology subnet
+      client-to-client
+      keepalive 10 120
+      push "dhcp-option DOMAIN {{ domain_priv }}"
+      push "dhcp-option DNS {{ core_addr }}"
+      user nobody
+      group nogroup
+      persist-key
+      persist-tun
+      cipher AES-256-GCM
+      auth SHA256
+      max-clients 20
+      ifconfig-pool-persist ipp.txt
+      status openvpn-status.log
+      verb 3
+      ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
+      cert server.crt
+      key server.key
+      dh dh2048.pem
+      tls-auth ta.key 0
+    dest: /etc/openvpn/server.conf
+    mode: u=r,g=r,o=
+  notify: Restart OpenVPN.
+
+- name: Enable/Start OpenVPN.
+  become: yes
+  systemd:
+    service: openvpn@server
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Restart OpenVPN.
+  become: yes
+  systemd:
+    service: openvpn@server
+    state: restarted
+
+
+
+
+
+

6.17. Configure Kamailio

+
+

+Front uses Kamailio to provide a SIP service on the public VPN so that +members abroad can chat privately. This is a connection-less UDP +service that can be used with or without encryption. The VPN's +encryption can be relied upon or an extra layer can be used when +necessary. (Apps cannot tell if a network is secure and often assume +the luser is an idiot, so they insist on doing some encryption.) +

+ +

+Kamailio listens on all network interfaces by default, but the +institute expects its SIP traffic to be aggregated and encrypted via +the public VPN. To enforce this expectation, Kamailio is instructed +to listen only on Front's public VPN. The private name +sip.small.private resolves to this address for the convenience +of members configuring SIP clients. The server configuration +specifies the actual IP, known here as front_private_addr. +

+ +
+kamailio
+listen=udp:{{ front_private_addr }}:5060
+
+
+ +

+The Ansible tasks that install and configure Kamailio follow, but +before Kamailio is configured (thus started), the service is tweaked +by a configuration drop (which must notify Systemd before the service +starts). +

+ +

+The first step is to install Kamailio. +

+ +
+roles_t/front/tasks/main.yml
+- name: Install Kamailio.
+  become: yes
+  apt: pkg=kamailio
+
+
+ +

+The configuration drop concerns the network device on which Kamailio
will be listening: the tun device created by OpenVPN.  The added
configuration settings inform Systemd that Kamailio should not be
started before the tun device has appeared.
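+The device unit name used in the drop is derived from the interface's
sysfs path, /sys/devices/virtual/net/ovpn, by systemd's path escaping
(see systemd-escape --path).  A minimal sketch of the rule, adequate
for paths without special characters:

```python
def path_to_device_unit(path):
    """Sketch of systemd path escaping: strip the leading '/', then
    replace each '/' with '-' (characters needing full escaping are
    not handled here)."""
    return path.strip("/").replace("/", "-") + ".device"

assert (path_to_device_unit("/sys/devices/virtual/net/ovpn")
        == "sys-devices-virtual-net-ovpn.device")
```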

+ +
+roles_t/front/tasks/main.yml
+- name: Create Kamailio/Systemd configuration drop.
+  become: yes
+  file:
+    path: /etc/systemd/system/kamailio.service.d
+    state: directory
+
+- name: Create Kamailio dependence on OpenVPN server.
+  become: yes
+  copy:
+    content: |
+      [Unit]
+      Requires=sys-devices-virtual-net-ovpn.device
+      After=sys-devices-virtual-net-ovpn.device
+    dest: /etc/systemd/system/kamailio.service.d/depend.conf
+  notify: Reload Systemd.
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Reload Systemd.
+  become: yes
+  command: systemctl daemon-reload
+
+
+ +

+Finally, Kamailio can be configured and started. +

+ +
+roles_t/front/tasks/main.yml
+- name: Configure Kamailio.
+  become: yes
+  copy:
+    content: |
+      listen=udp:{{ front_private_addr }}:5060
+    dest: /etc/kamailio/kamailio-local.cfg
+  notify: Restart Kamailio.
+
+- name: Enable/Start Kamailio.
+  become: yes
+  systemd:
+    service: kamailio
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/front/handlers/main.yml
+- name: Restart Kamailio.
+  become: yes
+  systemd:
+    service: kamailio
+    state: restarted
+
+
+
+
+
+
+

7. The Core Role

+
+

+The core role configures many essential campus network services as
well as the institute's private cloud, so the core machine has
horsepower (CPUs and RAM) and large disks and is prepared with a
Debian install and remote access to a privileged administrator's
account.  (For details, see The Core Machine.)

+
+
+

7.1. Include Particulars

+
+

+The first task, as in The Front Role, is to include the institute +particulars and membership roll. +

+ +
+roles_t/core/tasks/main.yml
+---
+- name: Include public variables.
+  include_vars: ../public/vars.yml
+  tags: accounts
+- name: Include private variables.
+  include_vars: ../private/vars.yml
+  tags: accounts
+- name: Include members.
+  include_vars: "{{ lookup('first_found', membership_rolls) }}"
+  tags: accounts
+
+
+
+
+
+

7.2. Configure Hostname

+
+

+This task ensures that Core's /etc/hostname and /etc/mailname are +correct. Core accepts email addressed to the institute's public or +private domain names, e.g. to dick@small.example.org as well as +dick@small.private. The correct /etc/mailname is essential to +proper email delivery. +

+ +
+roles_t/core/tasks/main.yml
+- name: Configure hostname.
+  become: yes
+  copy:
+    content: "{{ item.name }}\n"
+    dest: "{{ item.file }}"
+  loop:
+  - { name: "core.{{ domain_priv }}", file: /etc/mailname }
+  - { name: "{{ inventory_hostname }}", file: /etc/hostname }
+  notify: Update hostname.
+
+
+ +
+roles_t/core/handlers/main.yml
+---
+- name: Update hostname.
+  become: yes
+  command: hostname -F /etc/hostname
+
+
+
+
+
+

7.3. Enable Systemd Resolved

+
+

+Core starts the systemd-networkd and systemd-resolved service +units on boot. See Enable Systemd Resolved. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install systemd-resolved.
+  become: yes
+  apt: pkg=systemd-resolved
+  when:
+  - ansible_distribution == 'Debian'
+  - 11 < ansible_distribution_major_version|int
+
+- name: Enable/Start systemd-networkd.
+  become: yes
+  systemd:
+    service: systemd-networkd
+    enabled: yes
+    state: started
+
+- name: Enable/Start systemd-resolved.
+  become: yes
+  systemd:
+    service: systemd-resolved
+    enabled: yes
+    state: started
+
+- name: Link /etc/resolv.conf.
+  become: yes
+  file:
+    path: /etc/resolv.conf
+    src: /run/systemd/resolve/resolv.conf
+    state: link
+    force: yes
+  when:
+  - ansible_distribution == 'Debian'
+  - 12 > ansible_distribution_major_version|int
+
+
+
+
+
+

7.4. Configure Systemd Resolved

+
+

+Core runs the campus name server, so Resolved is configured to use it +(or dns.google), to include the institute's domain in its search +list, and to disable its cache and stub listener. +

+ +
+roles_t/core/tasks/main.yml
+- name: Configure resolved.
+  become: yes
+  lineinfile:
+    path: /etc/systemd/resolved.conf
+    regexp: "{{ item.regexp }}"
+    line: "{{ item.line }}"
+  loop:
+  - { regexp: '^ *DNS *=', line: "DNS=127.0.0.1" }
+  - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" }
+  - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" }
+  - { regexp: '^ *Cache *=', line: "Cache=no" }
+  - { regexp: '^ *DNSStubListener *=', line: "DNSStubListener=no" }
+  notify:
+  - Reload Systemd.
+  - Restart Systemd resolved.
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Reload Systemd.
+  become: yes
+  command: systemctl daemon-reload
+
+- name: Restart Systemd resolved.
+  become: yes
+  systemd:
+    service: systemd-resolved
+    state: restarted
+
+
+
+
+
+

7.5. Configure Netplan

+
+

+Core's network interface is statically configured using Netplan and an +/etc/netplan/60-core.yaml file. That file provides Core's address +on the private Ethernet, the campus name server and search domain, and +the default route through Gate to the campus ISP. A second route, +through Core itself to Front, is advertised to other hosts, but is not +created here. It is created by OpenVPN when Core connects to Front's +VPN. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install netplan.
+  become: yes
+  apt: pkg=netplan.io
+
+- name: Configure netplan.
+  become: yes
+  copy:
+    content: |
+      network:
+        renderer: networkd
+        ethernets:
+          {{ ansible_default_ipv4.interface }}:
+            dhcp4: false
+            addresses: [ {{ core_addr_cidr }} ]
+            nameservers:
+              search: [ {{ domain_priv }} ]
+              addresses: [ {{ core_addr }} ]
+            gateway4: {{ gate_addr }}
+    dest: /etc/netplan/60-core.yaml
+    mode: u=rw,g=r,o=
+  notify: Apply netplan.
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Apply netplan.
+  become: yes
+  command: netplan apply
+
+
+
+
+
+

7.6. Configure DHCP For the Private Ethernet

+
+

+Core speaks DHCP (Dynamic Host Configuration Protocol) using the +Internet Software Consortium's DHCP server. The server assigns unique +network addresses to hosts plugged into the private Ethernet as well +as advertising local net services, especially the local Domain Name +Service. +

+ +

+The example configuration file, private/core-dhcpd.conf, uses +RFC3442's extension to encode a second (non-default) static route. +The default route is through the campus ISP at Gate. A second route +directs campus traffic to the Front VPN through Core. This is just an +example file. The administrator adds and removes actual machines from +the actual private/core-dhcpd.conf file. +

+ +
+private/core-dhcpd.conf
+option domain-name "small.private";
+option domain-name-servers 192.168.56.1;
+
+default-lease-time 3600;
+max-lease-time 7200;
+
+ddns-update-style none;
+
+authoritative;
+
+log-facility daemon;
+
+option rfc3442-routes code 121 = array of integer 8;
+
+subnet 192.168.56.0 netmask 255.255.255.0 {
+  option subnet-mask 255.255.255.0;
+  option broadcast-address 192.168.56.255;
+  option routers 192.168.56.2;
+  option ntp-servers 192.168.56.1;
+  option rfc3442-routes 24, 10,177,86, 192,168,56,1, 0, 192,168,56,2;
+}
+
+host core {
+  hardware ethernet 08:00:27:45:3b:a2; fixed-address 192.168.56.1; }
+host gate {
+  hardware ethernet 08:00:27:e0:79:ab; fixed-address 192.168.56.2; }
+host server {
+  hardware ethernet 08:00:27:f3:41:66; fixed-address 192.168.56.3; }
+
+
+ +
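+The rfc3442-routes value above can be checked with a short encoder:
each route is a prefix length, the significant octets of the
destination, then the four octets of the gateway.  This is an
illustrative sketch of the option-121 layout, not a DHCP
implementation.

```python
def encode_rfc3442(routes):
    """Encode (CIDR, gateway) pairs as an RFC 3442 classless static
    routes option value (a list of integer octets)."""
    out = []
    for cidr, gateway in routes:
        net, plen = cidr.split("/")
        plen = int(plen)
        # Only ceil(plen/8) destination octets are significant.
        dest = [int(o) for o in net.split(".")][: (plen + 7) // 8]
        out += [plen] + dest + [int(o) for o in gateway.split(".")]
    return out

# The route to the Front VPN through Core, then the default via Gate.
assert encode_rfc3442([("10.177.86.0/24", "192.168.56.1"),
                       ("0.0.0.0/0", "192.168.56.2")]) == \
       [24, 10, 177, 86, 192, 168, 56, 1, 0, 192, 168, 56, 2]
```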

+The following tasks install the ISC's DHCP server and configure it +with the real private/core-dhcpd.conf (not the example above). +

+ +
+roles_t/core/tasks/main.yml
+- name: Install DHCP server.
+  become: yes
+  apt: pkg=isc-dhcp-server
+
+- name: Configure DHCP interface.
+  become: yes
+  lineinfile:
+    path: /etc/default/isc-dhcp-server
+    line: INTERFACESv4="{{ ansible_default_ipv4.interface }}"
+    regexp: ^INTERFACESv4=
+  notify: Restart DHCP server.
+
+- name: Configure DHCP subnet.
+  become: yes
+  copy:
+    src: ../private/core-dhcpd.conf
+    dest: /etc/dhcp/dhcpd.conf
+  notify: Restart DHCP server.
+
+- name: Enable/Start DHCP server.
+  become: yes
+  systemd:
+    service: isc-dhcp-server
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Restart DHCP server.
+  become: yes
+  systemd:
+    service: isc-dhcp-server
+    state: restarted
+
+
+
+
+
+

7.7. Configure BIND9

+
+

+Core uses BIND9 to provide a private-view name service for the +institute as described in The Name Service. The configuration +supports reverse name lookups, resolving many private network +addresses to private domain names. +
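+The reverse zone names in named.conf.local below are derived from
the network CIDRs: Jinja's ipaddr('revdns') filter on a /24 network
address yields e.g. 0.56.168.192.in-addr.arpa, and the regex_replace
strips the leading host octet.  The same derivation in Python, using
the example private network 192.168.56.0/24 (a sketch that assumes a
/24):

```python
import ipaddress

def reverse_zone(cidr):
    """'192.168.56.0/24' -> '56.168.192.in-addr.arpa' (for /24s)."""
    net = ipaddress.ip_network(cidr)
    ptr = net.network_address.reverse_pointer  # '0.56.168.192.in-addr.arpa'
    return ptr.split(".", 1)[1]                # drop the leading '0.'

assert reverse_zone("192.168.56.0/24") == "56.168.192.in-addr.arpa"
```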

+ +

+The following tasks install and configure BIND9 on Core. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install BIND9.
+  become: yes
+  apt: pkg=bind9
+
+- name: Configure BIND9 with named.conf.options.
+  become: yes
+  copy:
+    content: |
+      acl "trusted" {
+          {{ private_net_cidr }};
+          {{ public_vpn_net_cidr }};
+          {{ campus_vpn_net_cidr }};
+          {{ gate_wifi_net_cidr }};
+          localhost;
+      };
+
+      options {
+        directory "/var/cache/bind";
+
+        forwarders {
+                8.8.4.4;
+                8.8.8.8;
+        };
+
+        allow-query { any; };
+        allow-recursion { trusted; };
+        allow-query-cache { trusted; };
+
+        //============================================================
+        // If BIND logs error messages about the root key being
+        // expired, you will need to update your keys.
+        // See https://www.isc.org/bind-keys
+        //============================================================
+        //dnssec-validation auto;
+        // If Secure DNS is too much of a headache...
+        dnssec-enable no;
+        dnssec-validation no;
+
+        auth-nxdomain no;    # conform to RFC1035
+        //listen-on-v6 { any; };
+        listen-on { {{ core_addr }}; };
+      };
+    dest: /etc/bind/named.conf.options
+  notify: Reload BIND9.
+
+- name: Configure BIND9 with named.conf.local.
+  become: yes
+  copy:
+    content: |
+      include "/etc/bind/zones.rfc1918";
+
+      zone "{{ domain_priv }}." {
+        type master;
+        file "/etc/bind/db.domain";
+      };
+
+      zone "{{ private_net_cidr | ipaddr('revdns')
+               | regex_replace('^0\.','') }}" {
+        type master;
+        file "/etc/bind/db.private";
+      };
+
+      zone "{{ public_vpn_net_cidr | ipaddr('revdns')
+               | regex_replace('^0\.','') }}" {
+        type master;
+        file "/etc/bind/db.public_vpn";
+      };
+
+      zone "{{ campus_vpn_net_cidr | ipaddr('revdns')
+               | regex_replace('^0\.','') }}" {
+        type master;
+        file "/etc/bind/db.campus_vpn";
+      };
+    dest: /etc/bind/named.conf.local
+  notify: Reload BIND9.
+
+- name: Install BIND9 zonefiles.
+  become: yes
+  copy:
+    src: ../private/db.{{ item }}
+    dest: /etc/bind/db.{{ item }}
+  loop: [ domain, private, public_vpn, campus_vpn ]
+  notify: Reload BIND9.
+
+- name: Enable/Start BIND9.
+  become: yes
+  systemd:
+    service: bind9
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Reload BIND9.
+  become: yes
+  systemd:
+    service: bind9
+    state: reloaded
+
+
+ +

+Examples of the necessary zone files, for the "Install BIND9 +zonefiles." task above, are given below. If the campus ISP provided +one or more IP addresses for stable name servers, those should +probably be used as forwarders rather than Google. And SecureDNS just +craps up /var/log/ and the Systemd journal. +

+ +
+bind-options
acl "trusted" {
+    {{ private_net_cidr }};
+    {{ public_vpn_net_cidr }};
+    {{ campus_vpn_net_cidr }};
+    {{ gate_wifi_net_cidr }};
+    localhost;
+};
+
+options {
+        directory "/var/cache/bind";
+
+        forwarders {
+                8.8.4.4;
+                8.8.8.8;
+        };
+
+        allow-query { any; };
+        allow-recursion { trusted; };
+        allow-query-cache { trusted; };
+
+        //============================================================
+        // If BIND logs error messages about the root key being
+        // expired, you will need to update your keys.
+        // See https://www.isc.org/bind-keys
+        //============================================================
+        //dnssec-validation auto;
+        // If Secure DNS is too much of a headache...
+        dnssec-enable no;
+        dnssec-validation no;
+
+        auth-nxdomain no;    # conform to RFC1035
+        //listen-on-v6 { any; };
+        listen-on { {{ core_addr }}; };
+};
+
+
+ +
+bind-local
include "/etc/bind/zones.rfc1918";
+
+zone "{{ domain_priv }}." {
+        type master;
+        file "/etc/bind/db.domain";
+};
+
+zone "{{ private_net_cidr | ipaddr('revdns')
+         | regex_replace('^0\.','') }}" {
+        type master;
+        file "/etc/bind/db.private";
+};
+
+zone "{{ public_vpn_net_cidr | ipaddr('revdns')
+         | regex_replace('^0\.','') }}" {
+        type master;
+        file "/etc/bind/db.public_vpn";
+};
+
+zone "{{ campus_vpn_net_cidr | ipaddr('revdns')
+         | regex_replace('^0\.','') }}" {
+        type master;
+        file "/etc/bind/db.campus_vpn";
+};
+
+
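The `ipaddr('revdns') | regex_replace('^0\.','')` expression in the zone declarations above derives a reverse zone name from a network CIDR; for the institute's /24 networks this amounts to reversing the first three octets. A sketch of the reduction, assuming /24 networks only:

```python
def revdns_zone(cidr):
    """Reverse zone name for a /24 network, mirroring Ansible's
    ipaddr('revdns') filter with the leading '0.' (the host octet)
    stripped off, e.g. 192.168.56.0/24 -> 56.168.192.in-addr.arpa."""
    net, width = cidr.split("/")
    assert int(width) == 24, "this sketch handles /24 networks only"
    a, b, c, _ = net.split(".")
    return f"{c}.{b}.{a}.in-addr.arpa."

print(revdns_zone("192.168.56.0/24"))   # → 56.168.192.in-addr.arpa.
```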
+ +
+private/db.domain
;
+; BIND data file for a small institute's PRIVATE domain names.
+;
+$TTL    604800
+@       IN      SOA     small.private. root.small.private. (
+                              1         ; Serial
+                         604800         ; Refresh
+                          86400         ; Retry
+                        2419200         ; Expire
+                         604800 )       ; Negative Cache TTL
+;
+@       IN      NS      core.small.private.
+$TTL    7200
+mail    IN      CNAME   core.small.private.
+smtp    IN      CNAME   core.small.private.
+ns      IN      CNAME   core.small.private.
+www     IN      CNAME   core.small.private.
+test    IN      CNAME   core.small.private.
+live    IN      CNAME   core.small.private.
+ntp     IN      CNAME   core.small.private.
+sip     IN      A       10.177.86.1
+;
+core    IN      A       192.168.56.1
+gate    IN      A       192.168.56.2
+
+
+ +
+private/db.private
;
+; BIND reverse data file for a small institute's private Ethernet.
+;
+$TTL    604800
+@       IN      SOA     small.private. root.small.private. (
+                              1         ; Serial
+                         604800         ; Refresh
+                          86400         ; Retry
+                        2419200         ; Expire
+                         604800 )       ; Negative Cache TTL
+;
+@       IN      NS      core.small.private.
+$TTL    7200
+1       IN      PTR     core.small.private.
+2       IN      PTR     gate.small.private.
+
+
+ +
+private/db.public_vpn
;
+; BIND reverse data file for a small institute's public VPN.
+;
+$TTL    604800
+@       IN      SOA     small.private. root.small.private. (
+                              1         ; Serial
+                         604800         ; Refresh
+                          86400         ; Retry
+                        2419200         ; Expire
+                         604800 )       ; Negative Cache TTL
+;
+@       IN      NS      core.small.private.
+$TTL    7200
+1       IN      PTR     front-p.small.private.
+2       IN      PTR     core-p.small.private.
+
+
+ +
+private/db.campus_vpn
;
+; BIND reverse data file for a small institute's campus VPN.
+;
+$TTL    604800
+@       IN      SOA     small.private. root.small.private. (
+                              1         ; Serial
+                         604800         ; Refresh
+                          86400         ; Retry
+                        2419200         ; Expire
+                         604800 )       ; Negative Cache TTL
+;
+@       IN      NS      core.small.private.
+$TTL    7200
+1       IN      PTR     gate-c.small.private.
+
+
+
+
+
+

7.8. Add Administrator to System Groups

+
+

+The administrator often needs to read (directories of) log files owned +by groups root and adm. Adding the administrator's account to +these groups speeds up debugging. +

+ +
+roles_t/core/tasks/main.yml
+- name: Add {{ ansible_user }} to system groups.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: root,adm
+
+
+
+
+
+

7.9. Configure Monkey

+
+

+The small institute runs cron jobs and web scripts that generate +reports and perform checks. The un-privileged jobs are run by a +system account named monkey. One of Monkey's more important jobs on +Core is to run rsync to update the public web site on Front (as +described in Configure Apache2). +

+ +
+roles_t/core/tasks/main.yml
+- name: Create monkey.
+  become: yes
+  user:
+    name: monkey
+    system: yes
+    append: yes
+    groups: staff
+
+- name: Add {{ ansible_user }} to staff groups.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: monkey,staff
+
+- name: Create /home/monkey/.ssh/.
+  become: yes
+  file:
+    path: /home/monkey/.ssh
+    state: directory
+    mode: u=rwx,g=,o=
+    owner: monkey
+    group: monkey
+
+- name: Configure monkey@core.
+  become: yes
+  copy:
+    src: ../Secret/ssh_monkey/{{ item.name }}
+    dest: /home/monkey/.ssh/{{ item.name }}
+    mode: "{{ item.mode }}"
+    owner: monkey
+    group: monkey
+  loop:
+  - { name: config,      mode: "u=rw,g=r,o=" }
+  - { name: id_rsa.pub,  mode: "u=rw,g=r,o=r" }
+  - { name: id_rsa,      mode: "u=rw,g=,o=" }
+
+- name: Configure Monkey SSH known hosts.
+  become: yes
+  vars:
+    pubkeypath: ../Secret/ssh_front/etc/ssh
+    pubkeyfile: "{{ pubkeypath }}/ssh_host_ecdsa_key.pub"
+    pubkey: "{{ lookup('file', pubkeyfile) }}"
+  lineinfile:
+    regexp: "^{{ domain_name }}"
+    line: "{{ domain_name }},{{ front_addr }} {{ pubkey }}"
+    path: /home/monkey/.ssh/known_hosts
+    create: yes
+    owner: monkey
+    group: monkey
+    mode: "u=rw,g=r,o="
+
+
+
+
+
+

7.10. Install unattended-upgrades

+
+

+The institute prefers to install security updates as soon as possible. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install basic software.
+  become: yes
+  apt: pkg=unattended-upgrades
+
+
+
+
+
+

7.11. Install Expect

+
+

+The expect program is used by The Institute Commands to interact +with Nextcloud on the command line. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install expect.
+  become: yes
+  apt: pkg=expect
+
+
+
+
+
+

7.12. Configure User Accounts

+
+

+User accounts are created immediately so that backups can begin +restoring as soon as possible. The Account Management chapter +describes the members and usernames variables. +

+ +
+roles_t/core/tasks/main.yml
+- name: Create user accounts.
+  become: yes
+  user:
+    name: "{{ item }}"
+    password: "{{ members[item].password_core }}"
+    update_password: always
+    home: /home/{{ item }}
+  loop: "{{ usernames }}"
+  when: members[item].status == 'current'
+  tags: accounts
+
+- name: Disable former users.
+  become: yes
+  user:
+    name: "{{ item }}"
+    password: "!"
+  loop: "{{ usernames }}"
+  when: members[item].status != 'current'
+  tags: accounts
+
+- name: Revoke former user authorized_keys.
+  become: yes
+  file:
+    path: /home/{{ item }}/.ssh/authorized_keys
+    state: absent
+  loop: "{{ usernames }}"
+  when: members[item].status != 'current'
+  tags: accounts
+
+
+
+
+
+

7.13. Trust Institute Certificate Authority

+
+

+Core should recognize the institute's Certificate Authority as +trustworthy, so its certificate is added to Core's set of trusted +CAs. More information about how the small institute manages its +X.509 certificates is available in Keys. +

+ +
+roles_t/core/tasks/main.yml
+- name: Trust the institute CA.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/ca.crt
+    dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt
+    mode: u=r,g=r,o=r
+    owner: root
+    group: root
+  notify: Update CAs.
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Update CAs.
+  become: yes
+  command: update-ca-certificates
+
+
+
+
+
+

7.14. Install Server Certificate

+
+

+The servers on Core use the same certificate (and key) to authenticate +themselves to institute clients. They share the /etc/server.crt and +/etc/server.key files, the latter only readable by root. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install server certificate/key.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
+    dest: /etc/server.{{ item.typ }}
+    mode: "{{ item.mode }}"
+  loop:
+  - { path: "issued/core.{{ domain_priv }}", typ: crt,
+      mode: "u=r,g=r,o=r" }
+  - { path: "private/core.{{ domain_priv }}", typ: key,
+      mode: "u=r,g=,o=" }
+  notify:
+  - Restart Postfix.
+  - Restart Dovecot.
+  - Restart OpenVPN.
+
+
+
+
+
+

7.15. Install NTP

+
+

+Core uses NTP to provide a time synchronization service to the campus. +The default daemon's default configuration is fine. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install NTP.
+  become: yes
+  apt: pkg=ntp
+
+
+
+
+
+

7.16. Configure Postfix on Core

+
+

+Core uses Postfix to provide SMTP service to the campus. The default +Debian configuration (for an "Internet Site") is nearly sufficient. +Manual installation may prompt for configuration type and mail name. +The appropriate answers are listed here but will be checked +(corrected) by Ansible tasks below. +

+ +
    +
  • General type of mail configuration: Internet Site
  • +
  • System mail name: core.small.private
  • +
+ +

+As discussed in The Email Service above, Core delivers email addressed +to any internal domain name locally, and uses its smarthost Front to +relay the rest. Core is reachable only on institute networks, so +there is little benefit in enabling TLS, but it does need to handle +larger messages and respect the institute's expectation of shortened +queue times. +

+ +

+Core relays messages from any institute network. +

+ +
+postfix-core-networks
- p: mynetworks
+  v: >-
+     {{ private_net_cidr }}
+     {{ public_vpn_net_cidr }}
+     {{ campus_vpn_net_cidr }}
+     127.0.0.0/8
+     [::ffff:127.0.0.0]/104
+     [::1]/128
+
+
+ +

+Core uses Front to relay messages to the Internet. +

+ +
+postfix-core-relayhost
- { p: relayhost, v: "[{{ front_private_addr }}]" }
+
+
+ +

+Core uses a Postfix transport file, /etc/postfix/transport, to +specify local delivery for email addressed to any internal domain +name. Note the leading dot at the beginning of each line in the +file. +

+ +
+postfix-transport
.{{ domain_name }}      local:$myhostname
+.{{ domain_priv }}      local:$myhostname
+
+
+ +
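Postfix matches a recipient domain against the transport table by trying the full domain first, then successively shorter parent domains with a leading dot (see transport(5); this sketch assumes the default parent_domain_matches_subdomains setting, under which a leading-dot entry matches subdomains only):

```python
def transport_lookup(domain, table):
    """Return the transport for a recipient domain: try the exact
    domain, then dotted parent domains (.b.c, then .c, ...)."""
    if domain in table:
        return table[domain]
    parts = domain.split(".")
    for i in range(1, len(parts)):
        key = "." + ".".join(parts[i:])
        if key in table:
            return table[key]
    return None

table = {".small.private": "local:$myhostname"}
print(transport_lookup("core.small.private", table))  # → local:$myhostname
print(transport_lookup("example.com", table))         # → None
```

Thus the two leading-dot entries route mail for any host under the public or private institute domains to local delivery.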

+The complete list of Core's Postfix settings for + /etc/postfix/main.cf follow. +

+ +
+postfix-core
- p: smtpd_relay_restrictions
+  v: permit_mynetworks reject_unauth_destination
+- { p: smtpd_tls_security_level, v: none }
+- { p: smtp_tls_security_level, v: none }
+- { p: message_size_limit, v: 104857600 }
+- { p: delay_warning_time, v: 1h }
+- { p: maximal_queue_lifetime, v: 4h }
+- { p: bounce_queue_lifetime, v: 4h }
+- { p: home_mailbox, v: Maildir/ }
+- p: mynetworks
+  v: >-
+     {{ private_net_cidr }}
+     {{ public_vpn_net_cidr }}
+     {{ campus_vpn_net_cidr }}
+     127.0.0.0/8
+     [::ffff:127.0.0.0]/104
+     [::1]/128
+- { p: relayhost, v: "[{{ front_private_addr }}]" }
+- { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" }
+
+
+ +

+The following Ansible tasks install Postfix, modify +/etc/postfix/main.cf, create /etc/postfix/transport, and start and +enable the service. Whenever /etc/postfix/transport is changed, the +postmap transport command must also be run. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install Postfix.
+  become: yes
+  apt: pkg=postfix
+
+- name: Configure Postfix.
+  become: yes
+  lineinfile:
+    path: /etc/postfix/main.cf
+    regexp: "^ *{{ item.p }} *="
+    line: "{{ item.p }} = {{ item.v }}"
+  loop:
+  - p: smtpd_relay_restrictions
+    v: permit_mynetworks reject_unauth_destination
+  - { p: smtpd_tls_security_level, v: none }
+  - { p: smtp_tls_security_level, v: none }
+  - { p: message_size_limit, v: 104857600 }
+  - { p: delay_warning_time, v: 1h }
+  - { p: maximal_queue_lifetime, v: 4h }
+  - { p: bounce_queue_lifetime, v: 4h }
+  - { p: home_mailbox, v: Maildir/ }
+  - p: mynetworks
+    v: >-
+       {{ private_net_cidr }}
+       {{ public_vpn_net_cidr }}
+       {{ campus_vpn_net_cidr }}
+       127.0.0.0/8
+       [::ffff:127.0.0.0]/104
+       [::1]/128
+  - { p: relayhost, v: "[{{ front_private_addr }}]" }
+  - { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" }
+  - { p: transport_maps, v: "hash:/etc/postfix/transport" }
+  notify: Restart Postfix.
+
+- name: Configure Postfix transport.
+  become: yes
+  copy:
+    content: |
+      .{{ domain_name }}        local:$myhostname
+      .{{ domain_priv }}        local:$myhostname
+    dest: /etc/postfix/transport
+  notify: Postmap transport.
+
+- name: Enable/Start Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Restart Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    state: restarted
+
+- name: Postmap transport.
+  become: yes
+  command:
+    chdir: /etc/postfix/
+    cmd: postmap transport
+  notify: Restart Postfix.
+
+
+
+
+
+

7.17. Configure Private Email Aliases

+
+

+The institute's Core needs to deliver email addressed to institute +aliases including those advertised on the campus web site, in VPN +certificates, etc. System daemons like cron(8) may also send email +to e.g. monkey. The following aliases are installed in +/etc/aliases with a special marker so that additional blocks can be +installed by more specialized roles. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install institute email aliases.
+  become: yes
+  blockinfile:
+    block: |
+        webmaster:      root
+        admin:          root
+        www-data:       root
+        monkey:         root
+        root:           {{ ansible_user }}
+    path: /etc/aliases
+    marker: "# {mark} INSTITUTE MANAGED BLOCK"
+  notify: New aliases.
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: New aliases.
+  become: yes
+  command: newaliases
+
+
+
+
+
+

7.18. Configure Dovecot IMAPd

+
+

+Core uses Dovecot's IMAPd to store and serve member emails. As on +Front, Core's Dovecot configuration is largely the Debian default with +POP and IMAP (without TLS) support disabled. This is a bit "over the +top" given that Core is only accessed from private (encrypted) +networks, but helps to ensure privacy even when members accidentally +attempt connections from outside the private networks. For more +information about Core's role in the institute's email services, see +The Email Service. +

+ +

+The institute follows the recommendation in the package +README.Debian (in /usr/share/dovecot-core/) but replaces the +default "snake oil" certificate with another, signed by the institute. +(For more information about the institute's X.509 certificates, see +Keys.) +

+ +

+The following Ansible tasks install Dovecot's IMAP daemon and its +/etc/dovecot/local.conf configuration file, then start the service +and enable it to start at every reboot. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install Dovecot IMAPd.
+  become: yes
+  apt: pkg=dovecot-imapd
+
+- name: Configure Dovecot IMAPd.
+  become: yes
+  copy:
+    content: |
+      protocols = imap
+      ssl = required
+      ssl_cert = </etc/server.crt
+      ssl_key = </etc/server.key
+      mail_location = maildir:~/Maildir
+    dest: /etc/dovecot/local.conf
+  notify: Restart Dovecot.
+
+- name: Enable/Start Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Restart Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    state: restarted
+
+
+
+
+
+

7.19. Configure Fetchmail

+
+

+Core runs a fetchmail for each member of the institute. Individual +fetchmail jobs can run with the --idle option and thus can +download new messages instantly. The jobs run as Systemd services and +so are monitored and started at boot. +

+ +

+In the ~/.fetchmailrc template below, the item variable is a +username, and members[item] is the membership record associated with +the username. The template is only used when the record has a +password_fetchmail key providing the member's plain-text password. +

+ +
+fetchmail-config
# Permissions on this file may be no greater than 0600.
+
+set no bouncemail
+set no spambounce
+set no syslog
+#set logfile /home/{{ item }}/.fetchmail.log
+
+poll {{ front_private_addr }} protocol imap timeout 15
+    username {{ item }}
+    password "{{ members[item].password_fetchmail }}" fetchall
+    ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }}
+
+
+ +
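The comment at the top of the template is enforced by fetchmail itself, which declines to use an overly permissive run control file. Per the comment's "no greater than 0600" rule, the check amounts to a mask on the permission bits (a sketch; the helper name is my own):

```python
def fetchmailrc_mode_ok(mode):
    """True when the mode grants nothing beyond 0600: no user-execute
    bit and no group or other bits, per the .fetchmailrc comment."""
    return (mode & 0o177) == 0

print(fetchmailrc_mode_ok(0o600))  # → True
print(fetchmailrc_mode_ok(0o644))  # → False
```

The Ansible task below sets mode u=rw,g=,o= (0600), satisfying this check.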

+The Systemd service description. +

+ +
+fetchmail-service
[Unit]
+Description=Fetchmail --idle task for {{ item }}.
+AssertPathExists=/home/{{ item }}/.fetchmailrc
+Requires=sys-devices-virtual-net-ovpn.device
+After=sys-devices-virtual-net-ovpn.device
+
+[Service]
+User={{ item }}
+ExecStart=/usr/bin/fetchmail --idle
+Restart=always
+RestartSec=1m
+NoNewPrivileges=true
+
+[Install]
+WantedBy=default.target
+
+
+ +

+The following tasks install fetchmail, a ~/.fetchmailrc and Systemd +.service file for each current member, start the services, and +enable them to start on boot. To accommodate any member of the +institute who may wish to run their own fetchmail job on their +notebook, only members with a password_fetchmail key will be +provided the Core service. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install fetchmail.
+  become: yes
+  apt: pkg=fetchmail
+
+- name: Configure user fetchmails.
+  become: yes
+  copy:
+    content: |
+      # Permissions on this file may be no greater than 0600.
+
+      set no bouncemail
+      set no spambounce
+      set no syslog
+      #set logfile /home/{{ item }}/.fetchmail.log
+
+      poll {{ front_private_addr }} protocol imap timeout 15
+          username {{ item }}
+          password "{{ members[item].password_fetchmail }}" fetchall
+          ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }}
+    dest: /home/{{ item }}/.fetchmailrc
+    owner: "{{ item }}"
+    group: "{{ item }}"
+    mode: u=rw,g=,o=
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status == 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+
+- name: Create user fetchmail services.
+  become: yes
+  copy:
+    content: |
+      [Unit]
+      Description=Fetchmail --idle task for {{ item }}.
+      AssertPathExists=/home/{{ item }}/.fetchmailrc
+      Requires=sys-devices-virtual-net-ovpn.device
+      After=sys-devices-virtual-net-ovpn.device
+
+      [Service]
+      User={{ item }}
+      ExecStart=/usr/bin/fetchmail --idle
+      Restart=always
+      RestartSec=1m
+      NoNewPrivileges=true
+
+      [Install]
+      WantedBy=default.target
+    dest: /etc/systemd/system/fetchmail-{{ item }}.service
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status == 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+
+- name: Enable/Start user fetchmail services.
+  become: yes
+  systemd:
+    service: fetchmail-{{ item }}.service
+    enabled: yes
+    state: started
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status == 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+
+
+ +

+Finally, any former member's Fetchmail service on Core should be +stopped, disabled from restarting at boot, and perhaps even deleted. +

+ +
+roles_t/core/tasks/main.yml
+- name: Stop former user fetchmail services.
+  become: yes
+  systemd:
+    service: fetchmail-{{ item }}
+    state: stopped
+    enabled: no
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status != 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+
+
+ +

+If the .service file is deleted, then Ansible cannot use the +systemd module to stop it, nor check that it is still stopped. +Otherwise the following task might be appropriate. +

+ +
+
+- name: Delete former user fetchmail services.
+  become: yes
+  file:
+    path: /etc/systemd/system/fetchmail-{{ item }}.service
+    state: absent
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status != 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+
+
+
+
+
+

7.20. Configure Apache2

+
+

+This is the small institute's campus web server. It hosts several web +sites as described in The Web Services. +

+ + + + +++ ++ ++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
URLDoc.RootDescription
http://live//WWW/live/The live, public site.
http://test//WWW/test/The next public site.
http://www//WWW/campus/Campus home page.
http://core//var/www/whatnot, e.g. Nextcloud
+ +

+The live (and, eventually, the test) web site content is intended to +be copied to Front, so the live and test sites are configured as +identically to Front's as possible. The directories and files are +owned by monkey but are world readable, thus readable by www-data, +the account running Apache2. +

+ +

+The campus web site is much more permissive. Its directories are +owned by root but writable by the staff group. It runs CGI +scripts found in any of its directories, any executable with a .cgi +file name. It runs them as www-data so CGI scripts that need access +to private data must Set-UID to the appropriate account. +

+ +

+The UserDir directives for all of Core's web sites are the same, and +punt the indirection through a /home/www-users/ directory, simply +naming a sub-directory in the member's home directory on Core. The +<Directory> block is the same as the one used on Front. +

+ +
+apache-userdir-core
UserDir Public/HTML
+<Directory /home/*/Public/HTML/>
+        Require all granted
+        AllowOverride None
+</Directory>
+
+
+ +
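The UserDir Public/HTML directive maps member URLs straight into home directories on Core. The translation mod_userdir performs is simply (a sketch; the helper name is my own):

```python
def userdir_path(url_path, userdir="Public/HTML"):
    """Map /~member/rest to /home/member/Public/HTML/rest, as
    Apache's mod_userdir does with 'UserDir Public/HTML'."""
    assert url_path.startswith("/~"), "not a userdir URL"
    user, _, rest = url_path[2:].partition("/")
    return f"/home/{user}/{userdir}/{rest}"

print(userdir_path("/~monkey/index.html"))
# → /home/monkey/Public/HTML/index.html
```

The matching <Directory /home/*/Public/HTML/> block then grants access to exactly those subtrees.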

+The virtual host for the live web site is given below. It should look +like Front's top-level web configuration without the permanent +redirect, the encryption ciphers, and the certificates. +

+ +
+apache-live
<VirtualHost *:80>
+        ServerName live
+        ServerAlias live.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/live
+        <Directory /WWW/live/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/live-error.log
+        CustomLog ${APACHE_LOG_DIR}/live-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/live-vhost.conf
+</VirtualHost>
+
+
+ +

+The virtual host for the test web site is given below. It should look +familiar. +

+ +
+apache-test
<VirtualHost *:80>
+        ServerName test
+        ServerAlias test.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/test
+        <Directory /WWW/test/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/test-error.log
+        CustomLog ${APACHE_LOG_DIR}/test-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/test-vhost.conf
+</VirtualHost>
+
+
+ +

+The virtual host for the campus web site is given below. It too +should look familiar, but with a notably loose Directory directive. +It assumes /WWW/campus/ is secure, writable only by properly +trained staffers, monitored by a revision control system, etc. +

+ +
+apache-campus
<VirtualHost *:80>
+        ServerName www
+        ServerAlias www.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/campus
+        <Directory /WWW/campus/>
+                Options Indexes FollowSymLinks MultiViews ExecCGI
+                AddHandler cgi-script .cgi
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/campus-error.log
+        CustomLog ${APACHE_LOG_DIR}/campus-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/www-vhost.conf
+</VirtualHost>
+
+
+ +

+The tasks below install Apache2 and edit its default configuration. +The global ServerName directive must be deleted because it seems to +interfere with mapping URLs to the correct virtual host. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install Apache2.
+  become: yes
+  apt: pkg=apache2
+
+- name: Disable Apache2 server name.
+  become: yes
+  lineinfile:
+    path: /etc/apache2/apache2.conf
+    regexp: "([^#]+)ServerName (.*)"
+    backrefs: yes
+    line: "# \\1ServerName \\2"
+  notify: Restart Apache2.
+
+- name: Enable Apache2 modules.
+  become: yes
+  apache2_module:
+    name: "{{ item }}"
+  loop: [ userdir, cgi ]
+  notify: Restart Apache2.
+
+
+ +

+With Apache installed there is a /etc/apache2/sites-available/ +directory into which the above site configurations can be installed. +The a2ensite command enables them. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install live web site.
+  become: yes
+  copy:
+    content: |
+      <VirtualHost *:80>
+        ServerName live
+        ServerAlias live.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/live
+        <Directory /WWW/live/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/live-error.log
+        CustomLog ${APACHE_LOG_DIR}/live-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/live-vhost.conf
+      </VirtualHost>
+    dest: /etc/apache2/sites-available/live.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Install test web site.
+  become: yes
+  copy:
+    content: |
+      <VirtualHost *:80>
+        ServerName test
+        ServerAlias test.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/test
+        <Directory /WWW/test/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/test-error.log
+        CustomLog ${APACHE_LOG_DIR}/test-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/test-vhost.conf
+      </VirtualHost>
+    dest: /etc/apache2/sites-available/test.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Install campus web site.
+  become: yes
+  copy:
+    content: |
+      <VirtualHost *:80>
+        ServerName www
+        ServerAlias www.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/campus
+        <Directory /WWW/campus/>
+                Options Indexes FollowSymLinks MultiViews ExecCGI
+                AddHandler cgi-script .cgi
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+                Require all granted
+                AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/campus-error.log
+        CustomLog ${APACHE_LOG_DIR}/campus-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/www-vhost.conf
+      </VirtualHost>
+    dest: /etc/apache2/sites-available/www.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Enable web sites.
+  become: yes
+  command:
+    cmd: a2ensite -q {{ item }}
+    creates: /etc/apache2/sites-enabled/{{ item }}.conf
+  loop: [ live, test, www ]
+  notify: Restart Apache2.
+
+- name: Enable/Start Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Restart Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    state: restarted
+
+
+
+
+
+

7.21. Configure Website Updates

+
+

+Monkey on Core runs /usr/local/sbin/webupdate every 15 minutes via a +cron job. The example script mirrors /WWW/live/ on Core to +/home/www/ on Front. +

+ +
+private/webupdate
#!/bin/bash -e
+#
+# DO NOT EDIT.  This file was tangled from institute.org.
+
+cd /WWW/live/
+
+rsync -avz --delete --chmod=g-w         \
+        --filter='exclude *~'           \
+        --filter='exclude .git*'        \
+        ./ {{ domain_name }}:/home/www/
+
+
+ +

+The following tasks install the webupdate script from private/ and
+create Monkey's cron job.  An example webupdate script is provided
+above.
+

+ +
+roles_t/core/tasks/main.yml
+- name: "Install Monkey's webupdate script."
+  become: yes
+  copy:
+    src: ../private/webupdate
+    dest: /usr/local/sbin/webupdate
+    mode: u=rx,g=rx,o=
+    owner: monkey
+    group: staff
+
+- name: "Create Monkey's webupdate job."
+  become: yes
+  cron:
+    minute: "*/15"
+    job: "[ -d /WWW/live ] && /usr/local/sbin/webupdate"
+    name: webupdate
+    user: monkey
+
+
+
+
+
+

7.22. Configure OpenVPN Connection to Front

+
+

+Core connects to Front's public VPN to provide members abroad with a +route to the campus networks. As described in the configuration of +Front's OpenVPN service, Front expects Core to connect using a client +certificate with Common Name Core. +

+ +

+Core's OpenVPN client configuration uses the Debian default Systemd +service unit to keep Core connected to Front. The configuration +is installed in /etc/openvpn/front.conf so the Systemd service is +called openvpn@front. +

+ +
+openvpn-core
client
+dev-type tun
+dev ovpn
+remote {{ front_addr }}
+nobind
+user nobody
+group nogroup
+persist-key
+persist-tun
+cipher AES-256-GCM
+auth SHA256
+remote-cert-tls server
+verify-x509-name {{ domain_name }} name
+verb 3
+ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
+cert client.crt
+key client.key
+tls-auth ta.key 1
+
+
+ +

+The following tasks install and configure the OpenVPN client on
+Core.
+

+ +
+roles_t/core/tasks/main.yml
+- name: Install OpenVPN.
+  become: yes
+  apt: pkg=openvpn
+
+- name: Enable IP forwarding.
+  become: yes
+  sysctl:
+    name: net.ipv4.ip_forward
+    value: "1"
+    state: present
+
+- name: Install OpenVPN secret.
+  become: yes
+  copy:
+    src: ../Secret/front-ta.key
+    dest: /etc/openvpn/ta.key
+    mode: u=r,g=,o=
+  notify: Restart OpenVPN.
+
+- name: Install OpenVPN client certificate/key.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
+    dest: /etc/openvpn/client.{{ item.typ }}
+    mode: "{{ item.mode }}"
+  loop:
+  - { path: "issued/core", typ: crt, mode: "u=r,g=r,o=r" }
+  - { path: "private/core", typ: key, mode: "u=r,g=,o=" }
+  notify: Restart OpenVPN.
+
+- name: Configure OpenVPN.
+  become: yes
+  copy:
+    content: |
+      client
+      dev-type tun
+      dev ovpn
+      remote {{ front_addr }}
+      nobind
+      user nobody
+      group nogroup
+      persist-key
+      persist-tun
+      cipher AES-256-GCM
+      auth SHA256
+      remote-cert-tls server
+      verify-x509-name {{ domain_name }} name
+      verb 3
+      ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
+      cert client.crt
+      key client.key
+      tls-auth ta.key 1
+    dest: /etc/openvpn/front.conf
+    mode: u=r,g=r,o=
+  notify: Restart OpenVPN.
+
+- name: Enable/Start OpenVPN.
+  become: yes
+  systemd:
+    service: openvpn@front
+    state: started
+    enabled: yes
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Restart OpenVPN.
+  become: yes
+  systemd:
+    service: openvpn@front
+    state: restarted
+
+
+
+
+
+

7.23. Configure NAGIOS

+
+

+Core runs a nagios4 server to monitor "services" on institute hosts. +The following tasks install the necessary packages and configure the +server. The last task installs the monitoring configuration in +/etc/nagios4/conf.d/institute.cfg. This configuration file, +nagios.cfg, is tangled from code blocks described in subsequent +subsections. +

+ +

+The institute NAGIOS configuration includes a customized version of +the check_sensors plugin named inst_sensors. Both versions rely +on the sensors command (from the lm-sensors package). The custom +version (below) is installed in /usr/local/sbin/inst_sensors on both +Core and Campus (and thus Gate) machines. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install NAGIOS4.
+  become: yes
+  apt:
+    pkg: [ nagios4, monitoring-plugins-basic, nagios-nrpe-plugin,
+           lm-sensors ]
+
+- name: Install inst_sensors NAGIOS plugin.
+  become: yes
+  copy:
+    src: inst_sensors
+    dest: /usr/local/sbin/inst_sensors
+    mode: u=rwx,g=rx,o=rx
+
+- name: Configure NAGIOS4.
+  become: yes
+  lineinfile:
+    path: /etc/nagios4/nagios.cfg
+    regexp: "{{ item.regexp }}"
+    line: "{{ item.line }}"
+    backrefs: yes
+  loop:
+  - { regexp: "^( *cfg_file *= *localhost.cfg)", line: "# \\1" }
+  - { regexp: "^( *admin_email *= *)", line: "\\1{{ ansible_user }}@localhost" }
+  notify: Reload NAGIOS4.
+
+- name: Configure NAGIOS4 contacts.
+  become: yes
+  lineinfile:
+    path: /etc/nagios4/objects/contacts.cfg
+    regexp: "^( *email +)"
+    line: "\\1sysadm@localhost"
+    backrefs: yes
+  notify: Reload NAGIOS4.
+
+- name: Configure NAGIOS4 monitors.
+  become: yes
+  template:
+    src: nagios.cfg
+    dest: /etc/nagios4/conf.d/institute.cfg
+  notify: Reload NAGIOS4.
+
+- name: Enable/Start NAGIOS4.
+  become: yes
+  systemd:
+    service: nagios4
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Reload NAGIOS4.
+  become: yes
+  systemd:
+    service: nagios4
+    state: reloaded
+
+
+
+
+

7.23.1. Configure NAGIOS Monitors for Core

+
+

+The first block in nagios.cfg specifies monitors for services on +Core. The monitors are simple, local plugins, and the block is very +similar to the default objects/localhost.cfg file. The commands +used here may specify plugin arguments. +

+ +
+roles_t/core/templates/nagios.cfg
define host {
+    use                     linux-server
+    host_name               core
+    address                 127.0.0.1
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     Root Partition
+    check_command           check_local_disk!20%!10%!/
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     Current Users
+    check_command           check_local_users!20!50
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     Zombie Processes
+    check_command           check_local_procs!5!10!Z
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     Total Processes
+    check_command           check_local_procs!150!200!RSZDT
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     Current Load
+    check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     Swap Usage
+    check_command           check_local_swap!20%!10%
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     SSH
+    check_command           check_ssh
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     HTTP
+    check_command           check_http
+}
+
+
+
+
+
+

7.23.2. Custom NAGIOS Monitor inst_sensors

+
+

+The check_sensors plugin is included in the package +monitoring-plugins-basic, but it does not report any readings. The +small institute substitutes a slightly modified version, +inst_sensors, that reports core CPU temperatures. +

+ +
+roles_t/core/files/inst_sensors
#!/bin/sh
+
+PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
+export PATH
+PROGNAME=`basename $0`
+REVISION="2.3.1"
+
+. /usr/lib/nagios/plugins/utils.sh
+
+print_usage() {
+        echo "Usage: $PROGNAME [--ignore-fault]"
+}
+
+print_help() {
+        print_revision $PROGNAME $REVISION
+        echo ""
+        print_usage
+        echo ""
+        echo "This plugin checks hardware status using the lm_sensors package."
+        echo ""
+        support
+        exit $STATE_OK
+}
+
+brief_data() {
+    echo "$1" | sed -n -E -e '
+  /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H }
+  $ { x; s/\n//g; p }'
+}
+
+case "$1" in
+        --help)
+                print_help
+                exit $STATE_OK
+                ;;
+        -h)
+                print_help
+                exit $STATE_OK
+                ;;
+        --version)
+                print_revision $PROGNAME $REVISION
+                exit $STATE_OK
+                ;;
+        -V)
+                print_revision $PROGNAME $REVISION
+                exit $STATE_OK
+                ;;
+        *)
+                sensordata=`sensors 2>&1`
+                status=$?
+                if test ${status} -eq 127; then
+                        text="SENSORS UNKNOWN - command not found"
+                        text="$text (did you install lmsensors?)"
+                        exit=$STATE_UNKNOWN
+                elif test ${status} -ne 0; then
+                        text="WARNING - sensors returned state $status"
+                        exit=$STATE_WARNING
+                elif echo ${sensordata} | egrep ALARM > /dev/null; then
+                        text="SENSOR CRITICAL -`brief_data "${sensordata}"`"
+                        exit=$STATE_CRITICAL
+                elif echo ${sensordata} | egrep FAULT > /dev/null \
+                    && test "$1" != "-i" -a "$1" != "--ignore-fault"; then
+                        text="SENSOR UNKNOWN - Sensor reported fault"
+                        exit=$STATE_UNKNOWN
+                else
+                        text="SENSORS OK -`brief_data "${sensordata}"`"
+                        exit=$STATE_OK
+                fi
+
+                echo "$text"
+                if test "$1" = "-v" -o "$1" = "--verbose"; then
+                        echo ${sensordata}
+                fi
+                exit $exit
+                ;;
+esac
+
+
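+The heart of the customization is the brief_data function, whose sed
+script condenses multi-line sensors output into a single line of
+core temperatures.  The following sketch runs the same sed script on
+hypothetical sensors output (the sample readings are made up for
+illustration).

```shell
# Run brief_data's sed script on hypothetical `sensors` output.
# The sample readings below are made up for illustration.
sample='coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +45.0 C  (high = +80.0 C, crit = +100.0 C)
Core 0:        +42.0 C  (high = +80.0 C, crit = +100.0 C)
Core 1:        +44.0 C  (high = +80.0 C, crit = +100.0 C)'

# Each "Core N:" line is reduced to its reading and appended to the
# hold space; at end of input the accumulated readings are printed.
brief=$(echo "$sample" | sed -n -E -e '
  /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H }
  $ { x; s/\n//g; p }')
echo "SENSORS OK -$brief"    # SENSORS OK - +42.0 +44.0
```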
+ +

+The following block defines the command and monitors it (locally) on +Core. +

+ +
+roles_t/core/templates/nagios.cfg
+define command {
+    command_name            inst_sensors
+    command_line            /usr/local/sbin/inst_sensors
+}
+
+define service {
+    use                     local-service
+    host_name               core
+    service_description     Temperature Sensors
+    check_command           inst_sensors
+}
+
+
+
+
+
+

7.23.3. Configure NAGIOS Monitors for Remote Hosts

+
+

+The following sections contain code blocks specifying monitors for +services on other campus hosts. The NAGIOS server on Core will +contact the NAGIOS Remote Plugin Executor (NRPE) servers on the other +campus hosts and request the results of several commands. For +security reasons, the NRPE servers do not accept command arguments. +

+ +

+The institute defines several NRPE commands, using an inst_ prefix to
+distinguish their names.  The commands take no arguments but execute a
+plugin with pre-defined arguments appropriate for the institute.  The
+commands are defined in code blocks interleaved with the blocks that
+monitor them.  The command blocks are appended to nrpe.cfg and the
+monitoring blocks to nagios.cfg.  The nrpe.cfg file is installed
+on each campus host by the campus role's Configure NRPE tasks.
+

+
+
+
+

7.23.4. Configure NAGIOS Monitors for Gate

+
+

+Define the monitored host, gate. Monitor its response to network +pings. +

+ +
+roles_t/core/templates/nagios.cfg
+define host {
+    use                     linux-server
+    host_name               gate
+    address                 {{ gate_addr }}
+}
+
+define service {
+    use                     local-service
+    host_name               gate
+    service_description     PING
+    check_command           check_ping!100.0,20%!500.0,60%
+}
+
+
+ +

+For all campus NRPE servers: an inst_root command to check the free +space on the root partition. +

+ +
+roles_t/campus/files/nrpe.cfg
command[inst_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
+
+
+ +

+Monitor inst_root on Gate. +

+ +
+roles_t/core/templates/nagios.cfg
+define service {
+    use                     generic-service
+    host_name               gate
+    service_description     Root Partition
+    check_command           check_nrpe!inst_root
+}
+
+
+ +

+Monitor check_load on Gate. +

+ +
+roles_t/core/templates/nagios.cfg
+define service {
+    use                     generic-service
+    host_name               gate
+    service_description     Current Load
+    check_command           check_nrpe!check_load
+}
+
+
+ +

+Monitor check_zombie_procs and check_total_procs on Gate. +

+ +
+roles_t/core/templates/nagios.cfg
+define service {
+    use                     generic-service
+    host_name               gate
+    service_description     Zombie Processes
+    check_command           check_nrpe!check_zombie_procs
+}
+
+define service {
+    use                     generic-service
+    host_name               gate
+    service_description     Total Processes
+    check_command           check_nrpe!check_total_procs
+}
+
+
+ +

+For all campus NRPE servers: an inst_swap command to check the swap +usage. +

+ +
+roles_t/campus/files/nrpe.cfg
command[inst_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10%
+
+
+ +

+Monitor inst_swap on Gate. +

+ +
+roles_t/core/templates/nagios.cfg
+define service {
+    use                     generic-service
+    host_name               gate
+    service_description     Swap Usage
+    check_command           check_nrpe!inst_swap
+}
+
+
+ +

+Monitor Gate's SSH service. +

+ +
+roles_t/core/templates/nagios.cfg
+define service {
+    use                     generic-service
+    host_name               gate
+    service_description     SSH
+    check_command           check_ssh
+}
+
+
+ +

+For all campus NRPE servers: an inst_sensors command to report core +CPU temperatures. +

+ +
+roles_t/campus/files/nrpe.cfg
command[inst_sensors]=/usr/local/sbin/inst_sensors
+
+
+ +

+Monitor inst_sensors on Gate. +

+ +
+roles_t/core/templates/nagios.cfg
+define service {
+    use                     generic-service
+    host_name               gate
+    service_description     Temperature Sensors
+    check_command           check_nrpe!inst_sensors
+}
+
+
+
+
+
+
+

7.24. Configure Backups

+
+

+The following task installs the backup script from private/.  An
+example script is provided here.
+

+ +
+roles_t/core/tasks/main.yml
+- name: Install backup script.
+  become: yes
+  copy:
+    src: ../private/backup
+    dest: /usr/local/sbin/backup
+    mode: u=rx,g=r,o=
+
+
+
+
+
+

7.25. Configure Nextcloud

+
+

+Core runs Nextcloud to provide a private institute cloud, as described
+in The Cloud Service.  Installing, restoring (from backup), and
+upgrading Nextcloud are manual processes documented in The Nextcloud
+Admin Manual, Maintenance.  However, Ansible can help prepare Core
+before an install or restore, and perform basic security checks
+afterwards.
+

+
+
+

7.25.1. Prepare Core For Nextcloud

+
+

+The Ansible code contained herein prepares Core to run Nextcloud by +installing required software packages, configuring the web server, and +installing a cron job. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install packages required by Nextcloud.
+  become: yes
+  apt:
+    pkg: [ apache2, mariadb-server, php, php-apcu, php-bcmath,
+           php-curl, php-gd, php-gmp, php-json, php-mysql,
+           php-mbstring, php-intl, php-imagick, php-xml, php-zip,
+           libapache2-mod-php ]
+
+
+ +

+Next, a number of Apache2 modules are enabled. +

+ +
+roles_t/core/tasks/main.yml
+- name: Enable Apache2 modules for Nextcloud.
+  become: yes
+  apache2_module:
+    name: "{{ item }}"
+  loop: [ rewrite, headers, env, dir, mime ]
+
+
+ +

+The Apache2 configuration is then extended with the following +/etc/apache2/sites-available/nextcloud.conf file, which is installed +and enabled with a2ensite. The same configuration lines are given +in the "Installation on Linux" section of the Nextcloud Server +Administration Guide (sub-section Apache Web server configuration). +

+ +
+roles_t/core/files/nextcloud.conf
Alias /nextcloud "/var/www/nextcloud/"
+
+<Directory /var/www/nextcloud/>
+    Require all granted
+    AllowOverride All
+    Options FollowSymlinks MultiViews
+
+    <IfModule mod_dav.c>
+        Dav off
+    </IfModule>
+</Directory>
+
+
+ +
+roles_t/core/tasks/main.yml
+- name: Install Nextcloud web configuration.
+  become: yes
+  copy:
+    src: nextcloud.conf
+    dest: /etc/apache2/sites-available/nextcloud.conf
+  notify: Restart Apache2.
+
+- name: Enable Nextcloud web configuration.
+  become: yes
+  command:
+    cmd: a2ensite nextcloud
+    creates: /etc/apache2/sites-enabled/nextcloud.conf
+  notify: Restart Apache2.
+
+
+ +

+The institute supports "Service discovery" as recommended at the end +of the "Apache Web server configuration" subsection. The prescribed +rewrite rules are included in a Directory block for the default +virtual host's document root. +

+ +
+roles_t/core/files/nextcloud.conf
+<Directory /var/www/html/>
+    <IfModule mod_rewrite.c>
+        RewriteEngine on
+        # LogLevel alert rewrite:trace3
+        RewriteRule ^\.well-known/carddav \
+            /nextcloud/remote.php/dav [R=301,L]
+        RewriteRule ^\.well-known/caldav \
+            /nextcloud/remote.php/dav [R=301,L]
+        RewriteRule ^\.well-known/webfinger \
+            /nextcloud/index.php/.well-known/webfinger [R=301,L]
+        RewriteRule ^\.well-known/nodeinfo \
+            /nextcloud/index.php/.well-known/nodeinfo [R=301,L]
+      </IfModule>
+</Directory>
+
+
+ +

+The institute also includes additional Apache2 configuration +recommended by Nextcloud 20's Settings > Administration > Overview web +page. The following portion of nextcloud.conf sets a +Strict-Transport-Security header with a max-age of 6 months. +

+ +
+roles_t/core/files/nextcloud.conf
+<IfModule mod_headers.c>
+    Header always set \
+        Strict-Transport-Security "max-age=15552000; includeSubDomains"
+</IfModule>
+
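+The max-age value can be checked with a little arithmetic: six
+30-day months comes to 15552000 seconds.

```shell
# 6 months x 30 days x 24 hours x 60 minutes x 60 seconds
echo $((6 * 30 * 24 * 60 * 60))    # 15552000
```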
+
+ +

+Nextcloud's directories and files are typically readable only by the +web server's user www-data and the www-data group. The +administrator is added to this group to ease (speed) the debugging of +cloud FUBARs. +

+ +
+roles_t/core/tasks/main.yml
+- name: Add {{ ansible_user }} to web server group.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: www-data
+
+
+ +

+Nextcloud is configured with a cron job to run periodic background +jobs. +

+ +
+roles_t/core/tasks/main.yml
+- name: Create Nextcloud cron job.
+  become: yes
+  cron:
+    minute: 11,26,41,56
+    job: >-
+      [ -r /var/www/nextcloud/cron.php ]
+      && /usr/bin/php -f /var/www/nextcloud/cron.php
+    name: Nextcloud
+    user: www-data
+
+
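+The [ -r ... ] && guard in the job above keeps cron quiet until
+Nextcloud is actually installed.  The pattern can be sketched with a
+temporary file standing in for cron.php.

```shell
# The cron job's guard: run the command only when the file exists and
# is readable; otherwise the job is silently skipped.
f=$(mktemp)    # stand-in for /var/www/nextcloud/cron.php
before=$([ -r "$f" ] && echo "job runs" || echo "job skipped")
rm "$f"
after=$([ -r "$f" ] && echo "job runs" || echo "job skipped")
echo "$before / $after"    # job runs / job skipped
```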
+ +

+Nextcloud's MariaDB database (and user) are created by the following +tasks. The user's password is taken from the nextcloud_dbpass +variable, kept in private/vars.yml, and generated e.g. with +the apg -n 1 -x 12 -m 12 command. +

+ +
+private/vars.yml
nextcloud_dbpass:           ippAgmaygyob
+
+
+ +

+When the mysql_db Ansible module supports check_implicit_admin, +the following task can create Nextcloud's DB. +

+ +
+
+- name: Create Nextcloud DB.
+  become: yes
+  mysql_db:
+    check_implicit_admin: yes
+    name: nextcloud
+    collation: utf8mb4_general_ci
+    encoding: utf8mb4
+
+
+ +

+Unfortunately it does not currently support check_implicit_admin,
+and the institute prefers the more secure Unix socket authentication
+method.  Rather than create such an administrative user, the
+nextcloud database and nextclouduser user are created manually.
+

+ +

+The following task would work (mysql_user supports +check_implicit_admin) but the nextcloud database was not created +above. Thus both database and user are created manually, with SQL +given in the 7.25.5 subsection below, before occ +maintenance:install can run. +

+ +
+
+- name: Create Nextcloud DB user.
+  become: yes
+  mysql_user:
+    check_implicit_admin: yes
+    name: nextclouduser
+    password: "{{ nextcloud_dbpass }}"
+    update_password: always
+    priv: 'nextcloud.*:all'
+
+
+ +

+Finally, a symbolic link positions /Nextcloud/nextcloud/ at +/var/www/nextcloud/ as expected by the Apache2 configuration above. +Nextcloud itself should always believe that /var/www/nextcloud/ is +its document root. +

+ +
+roles_t/core/tasks/main.yml
+- name: Link /var/www/nextcloud.
+  become: yes
+  file:
+    path: /var/www/nextcloud
+    src: /Nextcloud/nextcloud
+    state: link
+    force: yes
+    follow: no
+
+
+
+
+
+

7.25.2. Configure PHP

+
+

+The following tasks set a number of PHP parameters for better +performance, as recommended by Nextcloud. +

+ +
+roles_t/core/tasks/main.yml
+- name: Set PHP memory_limit for Nextcloud.
+  become: yes
+  lineinfile:
+    path: /etc/php/7.4/apache2/php.ini
+    regexp: memory_limit *=
+    line: memory_limit = 512M
+
+- name: Include PHP parameters for Nextcloud.
+  become: yes
+  copy:
+    content: |
+      ; priority=20
+      apc.enable_cli=1
+      opcache.enable=1
+      opcache.enable_cli=1
+      opcache.interned_strings_buffer=8
+      opcache.max_accelerated_files=10000
+      opcache.memory_consumption=128
+      opcache.save_comments=1
+      opcache.revalidate_freq=1
+    dest: /etc/php/7.4/mods-available/nextcloud.ini
+  notify: Restart Apache2.
+
+- name: Enable Nextcloud PHP modules.
+  become: yes
+  command:
+    cmd: phpenmod {{ item }}
+    creates: /etc/php/7.4/apache2/conf.d/20-{{ item }}.ini
+  loop: [ nextcloud, apcu ]
+  notify: Restart Apache2.
+
+
+
+
+
+

7.25.3. Create /Nextcloud/

+
+

+The Ansible tasks up to this point have completed Core's LAMP stack +and made Core ready to run Nextcloud, but they have not installed +Nextcloud. Nextcloud must be manually installed or restored from a +backup copy. Until then, attempts to access the institute cloud will +just produce errors. +

+ +

+Installing or restoring Nextcloud starts by creating the +/Nextcloud/ directory. It may be a separate disk or just a new +directory on an existing partition. The commands involved will vary +greatly depending on circumstances, but the following examples might +be helpful. +

+ +

+The following command line creates /Nextcloud/ in the root +partition. This is appropriate for one-partition machines like the +test machines. +

+ +
+
sudo mkdir /Nextcloud
+sudo chmod 775 /Nextcloud
+
+
+ +

+The following command lines create /Nextcloud/ on an existing,
+large, separate (from the root) partition.  A popular mount point for
+such a second partition is /home/.
+

+ +
+
sudo mkdir /home/nextcloud
+sudo chmod 775 /home/nextcloud
+sudo ln -s /home/nextcloud /Nextcloud
+
+
+ +

+These commands create /Nextcloud/ on an entire (without +partitioning) second hard drive, /dev/sdb. +

+ +
+
sudo mkfs -t ext4 /dev/sdb
+sudo mkdir /Nextcloud
+echo "/dev/sdb  /Nextcloud  ext4  errors=remount-ro  0  2" \
+| sudo tee -a /etc/fstab >/dev/null
+sudo mount /Nextcloud
+
+
+
+
+
+

7.25.4. Restore Nextcloud

+
+

+Restoring Nextcloud in the newly created /Nextcloud/ presumably +starts with plugging in the portable backup drive and unlocking it so +that it is automounted at /media/sysadm/Backup per its drive label: +Backup. Assuming this, the following command restores /Nextcloud/ +from the backup (and can be repeated as many times as necessary to get +a successful, complete copy). +

+ +
+
rsync -a /media/sysadm/Backup/Nextcloud/ /Nextcloud/
+
+
+ +

+Mirroring a backup onto a new server may cause UID/GID mismatches. +All of the files in /Nextcloud/nextcloud/ must be owned by user +www-data and group www-data. If not, the following command will +make it so. +

+ +
+
sudo chown -R www-data.www-data /Nextcloud/nextcloud/
+
+
+ +

+The database is restored with the following commands, which assume the
+last dump was made February 20th 2022 and thus was saved in
+/Nextcloud/20220220.bak.  The database will first need to be
+created, as when installing Nextcloud.  The appropriate SQL commands
+are given in Install Nextcloud below.
+

+ +
+
cd /Nextcloud/
+sudo mysql --defaults-file=dbbackup.cnf nextcloud < 20220220.bak
+cd nextcloud/
+sudo -u www-data php occ maintenance:data-fingerprint
+
+
+ +

+Finally the administrator surfs to http://core/nextcloud/, +authenticates, and addresses any warnings on the Administration > +Overview web page. +

+
+
+
+

7.25.5. Install Nextcloud

+
+

+Installing Nextcloud in the newly created /Nextcloud/ starts with +downloading and verifying a recent release tarball. The following +example command lines unpacked Nextcloud 23 in nextcloud/ in +/Nextcloud/ and set the ownerships and permissions of the new +directories and files. +

+ +
+
cd /Nextcloud/
+tar xjf ~/Downloads/nextcloud-23.0.0.tar.bz2
+sudo chown -R www-data.www-data nextcloud
+sudo find nextcloud -type d -exec chmod 750 {} \;
+sudo find nextcloud -type f -exec chmod 640 {} \;
+
+
+ +

+According to the latest installation instructions in version 24's +administration guide, after unpacking and setting file permissions, +the following occ command takes care of everything. This command +currently expects Nextcloud's database and user to exist. The +following SQL commands create the database and user (entered at the +SQL prompt of the sudo mysql command). The shell command then runs +occ. +

+ +
+
create database nextcloud
+    character set utf8mb4
+    collate utf8mb4_general_ci;
+grant all on nextcloud.*
+    to 'nextclouduser'@'localhost'
+    identified by 'ippAgmaygyobwyt5';
+flush privileges;
+
+
+ +
+
cd /var/www/nextcloud/
+sudo -u www-data php occ maintenance:install \
+     --data-dir=/var/www/nextcloud/data \
+     --database=mysql --database-name=nextcloud \
+     --database-user=nextclouduser \
+     --database-pass=ippAgmaygyobwyt5 \
+     --admin-user=sysadm --admin-pass=PASSWORD
+
+
+ +

+The nextcloud/config/config.php is created by the above command, but +gets the trusted_domains and overwrite.cli.url settings wrong, +using localhost where core.small.private is wanted. The +only way the institute cloud should be accessed is by that name, so +adjusting the config.php file is straightforward. The settings +should be corrected by hand for immediate testing, but the +"Afterwards" tasks (below) will check (or update) these settings when +Core is next checked (or updated) e.g. with ./inst config -n core. +

+ +

+Before calling Nextcloud "configured", the administrator runs ./inst
+config core, surfs to http://core.small.private/nextcloud/,
+logs in as sysadm, and follows any reasonable
+instructions (reasonable for a small organization) on the
+Administration > Overview page.
+

+
+
+
+

7.25.6. Afterwards

+
+

+Whether Nextcloud was restored or installed, there are a few things +Ansible can do to bolster reliability and security (aka privacy). +These Nextcloud "Afterwards" tasks would fail if they executed before +Nextcloud was installed, so the first "afterwards" task probes for +/Nextcloud/nextcloud and registers the file status with the +nextcloud variable. The nextcloud.stat.exists condition on the +afterwards tasks causes them to skip rather than fail. +

+ +
+roles_t/core/tasks/main.yml
+- name: Test for /Nextcloud/nextcloud/.
+  stat:
+    path: /Nextcloud/nextcloud
+  register: nextcloud
+- debug:
+    msg: "/Nextcloud/ does not yet exist"
+  when: not nextcloud.stat.exists
+
+
+ +

+The institute installed Nextcloud with the occ maintenance:install +command, which produced a simple nextcloud/config/config.php with +incorrect trusted_domains and overwrite.cli.url settings. These +are fixed during installation, but the institute may also have +restored Nextcloud, including the config.php file. (This file is +edited by the web scripts and so is saved/restored in the backup +copy.) The restored settings may be different from those Ansible used +to create the database user. +

+ +

+The following task checks (or updates) the trusted_domains and +dbpassword settings, to ensure they are consistent with the Ansible +variables domain_priv and nextcloud_dbpass. The +overwrite.cli.url setting is fixed by the tasks that implement +Pretty URLs (below). +

+ +
+roles_t/core/tasks/main.yml
+- name: Configure Nextcloud trusted domains.
+  become: yes
+  replace:
+    path: /var/www/nextcloud/config/config.php
+    regexp: "^( *)'trusted_domains' *=>[^)]*[)],$"
+    replace: |-
+      \1'trusted_domains' => 
+      \1array (
+      \1  0 => 'core.{{ domain_priv }}',
+      \1),
+  when: nextcloud.stat.exists
+
+- name: Configure Nextcloud dbpasswd.
+  become: yes
+  lineinfile:
+    path: /var/www/nextcloud/config/config.php
+    regexp: "^ *'dbpassword' *=> *'.*', *$"
+    line: "  'dbpassword' => '{{ nextcloud_dbpass }}',"
+    insertbefore: "^[)];"
+    firstmatch: yes
+  when: nextcloud.stat.exists
+
+
+ +

+The institute uses the php-apcu package to provide Nextcloud with a +local memory cache. The following memcache.local Nextcloud setting +enables it. +

+ +
+roles_t/core/tasks/main.yml
+- name: Configure Nextcloud memcache.
+  become: yes
+  lineinfile:
+    path: /var/www/nextcloud/config/config.php
+    regexp: "^ *'memcache.local' *=> *'.*', *$"
+    line: "  'memcache.local' => '\\\\OC\\\\Memcache\\\\APCu',"
+    insertbefore: "^[)];"
+    firstmatch: yes
+  when: nextcloud.stat.exists
+
+
+ +

+The institute implements Pretty URLs as described in the Pretty URLs +subsection of the "Installation on Linux" section of the "Installation +and server configuration" chapter in the Nextcloud 22 Server +Administration Guide. Two settings are updated: overwrite.cli.url +and htaccess.RewriteBase. +

+ +
+roles_t/core/tasks/main.yml
+- name: Configure Nextcloud for Pretty URLs.
+  become: yes
+  lineinfile:
+    path: /var/www/nextcloud/config/config.php
+    regexp: "{{ item.regexp }}"
+    line: "{{ item.line }}"
+    insertbefore: "^[)];"
+    firstmatch: yes
+  vars:
+    url: http://core.{{ domain_priv }}/nextcloud
+  loop:
+  - regexp: "^ *'overwrite.cli.url' *=>"
+    line: "  'overwrite.cli.url' => '{{ url }}',"
+  - regexp: "^ *'htaccess.RewriteBase' *=>"
+    line: "  'htaccess.RewriteBase' => '/nextcloud',"
+  when: nextcloud.stat.exists
+
+
+ +

+The institute sets Nextcloud's default_phone_region mainly to avoid +a complaint on the Settings > Administration > Overview web page. +

+ +
+private/vars.yml
nextcloud_region:           US
+
+
+ +
+roles_t/core/tasks/main.yml
+- name: Configure Nextcloud phone region.
+  become: yes
+  lineinfile:
+    path: /var/www/nextcloud/config/config.php
+    regexp: "^ *'default_phone_region' *=> *'.*', *$"
+    line: "  'default_phone_region' => '{{ nextcloud_region }}',"
+    insertbefore: "^[)];"
+    firstmatch: yes
+  when: nextcloud.stat.exists
+
+
+ +

+The next two tasks create /Nextcloud/dbbackup.cnf if it does not +exist, and checks the password setting in it when it does. It +should never be world readable (and probably shouldn't be group +readable). This file is needed by the institute's backup command, +so ./inst config and in particular these next two tasks need to +run before the next backup. +

+ +
+roles_t/core/tasks/main.yml
+- name: Create /Nextcloud/dbbackup.cnf.
+  no_log: yes
+  become: yes
+  copy:
+    content: |
+      [mysqldump]
+      no-tablespaces
+      single-transaction
+      host=localhost
+      user=nextclouduser
+      password={{ nextcloud_dbpass }}
+    dest: /Nextcloud/dbbackup.cnf
+    mode: g=,o=
+    force: no
+  when: nextcloud.stat.exists
+
+- name: Update /Nextcloud/dbbackup.cnf password.
+  become: yes
+  lineinfile:
+    path: /Nextcloud/dbbackup.cnf
+    regexp: password=
+    line: password={{ nextcloud_dbpass }}
+  when: nextcloud.stat.exists
+
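+The backup command (kept in private/) can then dump the database
+non-interactively.  A hypothetical invocation, following the dated
+.bak naming used in the restore example:

```shell
# Hypothetical dump using the credentials in dbbackup.cnf; the dated
# name follows the restore example's 20220220.bak convention.
bak=/Nextcloud/$(date +%Y%m%d).bak
echo "would dump to $bak"
# sudo mysqldump --defaults-file=/Nextcloud/dbbackup.cnf \
#      nextcloud > "$bak"
```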
+
+
+
+
+
+
+

8. The Gate Role

+
+

+The gate role configures the services expected at the campus gate: a
+VPN into the campus network via a campus Wi-Fi access point, and
+access to the Internet via NAT.  The gate machine uses
+three network interfaces (see The Gate Machine) configured with
+persistent names used in its firewall rules.
+

+ +
+
lan
The campus Ethernet.
+
wifi
The campus Wi-Fi AP.
+
isp
The campus ISP.
+
+ +

+Requiring a VPN to access the campus network from the campus Wi-Fi +bolsters the native Wi-Fi encryption and frustrates non-RYF (Respects +Your Freedom) wireless equipment. +

+ +

+Gate is also a campus machine, so the more generic campus role is +applied first, by which Gate gets a campus machine's DNS and Postfix +configurations, etc. +

+
+
+

8.1. Include Particulars

+
+

+The following should be familiar boilerplate by now. +

+ +
+roles_t/gate/tasks/main.yml
---
+- name: Include public variables.
+  include_vars: ../public/vars.yml
+  tags: accounts
+- name: Include private variables.
+  include_vars: ../private/vars.yml
+  tags: accounts
+- name: Include members.
+  include_vars: "{{ lookup('first_found', membership_rolls) }}"
+  tags: accounts
+
+
+
+
+
+

8.2. Configure Netplan

+
+

+Gate's network interfaces are configured using Netplan and two files.
+/etc/netplan/60-gate.yaml describes the static interfaces to the
+campus Ethernet and Wi-Fi.  /etc/netplan/60-isp.yaml is expected to
+be revised more frequently, as the campus ISP changes.
+

+ +

+Netplan is configured to identify the interfaces by their MAC +addresses, which must be provided in private/vars.yml, as in the +example code here. +

+ +
+private/vars.yml
gate_lan_mac:               ff:ff:ff:ff:ff:ff
+gate_wifi_mac:              ff:ff:ff:ff:ff:ff
+gate_isp_mac:               ff:ff:ff:ff:ff:ff
+
+
+ +

+The following tasks install the two configuration files and apply the +new network plan. +

+ +
+roles_t/gate/tasks/main.yml
+- name: Install netplan (gate).
+  become: yes
+  apt: pkg=netplan.io
+
+- name: Configure netplan (gate).
+  become: yes
+  copy:
+    content: |
+      network:
+        ethernets:
+          lan:
+            match:
+              macaddress: {{ gate_lan_mac }}
+            addresses: [ {{ gate_addr_cidr }} ]
+            set-name: lan
+            dhcp4: false
+            nameservers:
+              addresses: [ {{ core_addr }} ]
+              search: [ {{ domain_priv }} ]
+            routes:
+              - to: {{ public_vpn_net_cidr }}
+                via: {{ core_addr }}
+          wifi:
+            match:
+              macaddress: {{ gate_wifi_mac }}
+            addresses: [ {{ gate_wifi_addr_cidr }} ]
+            set-name: wifi
+            dhcp4: false
+    dest: /etc/netplan/60-gate.yaml
+    mode: u=rw,g=r,o=
+  notify: Apply netplan.
+
+- name: Install netplan (ISP).
+  become: yes
+  copy:
+    content: |
+      network:
+        ethernets:
+          isp:
+            match:
+              macaddress: {{ gate_isp_mac }}
+            set-name: isp
+            dhcp4: true
+            dhcp4-overrides:
+              use-dns: false
+    dest: /etc/netplan/60-isp.yaml
+    mode: u=rw,g=r,o=
+    force: no
+  notify: Apply netplan.
+
+
+ +
+roles_t/gate/handlers/main.yml
---
+- name: Apply netplan.
+  become: yes
+  command: netplan apply
+
+
+ +

+Note that the 60-isp.yaml file is only created if it does not
+already exist, so that it can be easily modified to debug a new
+campus ISP without interference from Ansible.
+

+
+
+
+

8.3. UFW Rules

+
+

+Gate uses the Uncomplicated FireWall (UFW) to install its packet +filters at boot-time. The institute does not use a firewall except to +configure Network Address Translation (NAT) and forwarding. Members +expect to be able to exercise experimental services on random ports. +The default policy settings in /etc/default/ufw are ACCEPT and +ACCEPT for input and output, and DROP for forwarded packets. +Forwarding was enabled in the kernel previously (when configuring +OpenVPN) using Ansible's sysctl module. It does not need to be set +in /etc/ufw/sysctl.conf. +

+ +

+NAT is enabled per the ufw-framework(8) manual page, by introducing +nat table rules in a block at the end of /etc/ufw/before.rules. +They translate packets going to the ISP. These can come from the +private Ethernet or campus Wi-Fi. Hosts on the other institute +networks (the two VPNs) should not be routing their Internet traffic +through their VPN. +

+ +
+ufw-nat
-A POSTROUTING -s {{   private_net_cidr }} -o isp -j MASQUERADE
+-A POSTROUTING -s {{ gate_wifi_net_cidr }} -o isp -j MASQUERADE
+
+
+ +

+Forwarding rules are also needed.  The nat table is a post-routing
+rule set, so the default forward policy (DROP) would drop packets
+before NAT could translate them.  The following rules are added to
+allow packets to be forwarded from the campus Ethernet or Gate-WiFi
+subnet to an ISP on the isp interface, and back (if related to an
+outgoing packet).
+

+ +
+ufw-forward-nat
-A FORWARD -i lan  -o isp  -j ACCEPT
+-A FORWARD -i wifi -o isp  -j ACCEPT
+-A FORWARD -i isp  -o lan  {{ ACCEPT_RELATED }}
+-A FORWARD -i isp  -o wifi {{ ACCEPT_RELATED }}
+
+
+ +

+To keep the above code lines short, the template references an +ACCEPT_RELATED variable, provided by the task, whose value includes +the following iptables(8) rule specification parameters. +

+ +
+-m state --state ESTABLISHED,RELATED -j ACCEPT
+
+ + +

+If "the standard iptables-restore syntax" as it is described in the +ufw-framework manual page, allows continuation lines, please let us +know! +

+ +

+Forwarding rules are also needed to route packets from the campus VPN +(the ovpn tunnel device) to the institute's LAN and back. The +public VPN on Front will also be included since its packets arrive at +Gate's lan interface, coming from Core. Thus forwarding between +public and campus VPNs is also allowed. +

+ +
+ufw-forward-private
-A FORWARD -i lan  -o ovpn -j ACCEPT
+-A FORWARD -i ovpn -o lan  -j ACCEPT
+
+
+ +

+Note that there are no forwarding rules to allow packets to pass from +the wifi device to the lan device, just the ovpn device. +

+
+
+
+

8.4. Install UFW

+
+

+The following tasks install the Uncomplicated Firewall (UFW), set its
+policy in /etc/default/ufw, and install the above rules in
+/etc/ufw/before.rules.  When Gate is configured by ./abbey config
+gate as in the example bootstrap, enabling the firewall should not be
+a problem.  But when configuring a new gate with ./abbey config
+new-gate, enabling the firewall could break Ansible's current and
+future SSH sessions.  For this reason, Ansible does not enable the
+firewall.  The administrator must log in and execute the following
+command after Gate is configured or the new gate is "in position"
+(connected to the old Gate's wifi and isp networks).
+

+ +
+sudo ufw enable
+
+ + +
+roles_t/gate/tasks/main.yml
+- name: Install UFW.
+  become: yes
+  apt: pkg=ufw
+
+- name: Configure UFW policy.
+  become: yes
+  lineinfile:
+    path: /etc/default/ufw
+    line: "{{ item.line }}"
+    regexp: "{{ item.regexp }}"
+  loop:
+  - { line: "DEFAULT_INPUT_POLICY=\"ACCEPT\"",
+      regexp: "^DEFAULT_INPUT_POLICY=" }
+  - { line: "DEFAULT_OUTPUT_POLICY=\"ACCEPT\"",
+      regexp: "^DEFAULT_OUTPUT_POLICY=" }
+  - { line: "DEFAULT_FORWARD_POLICY=\"DROP\"",
+      regexp: "^DEFAULT_FORWARD_POLICY=" }
+
+- name: Configure UFW rules.
+  become: yes
+  vars:
+    ACCEPT_RELATED: -m state --state ESTABLISHED,RELATED -j ACCEPT
+  blockinfile:
+    path: /etc/ufw/before.rules
+    block: |
+      *nat
+      :POSTROUTING ACCEPT [0:0]
+      -A POSTROUTING -s {{   private_net_cidr }} -o isp -j MASQUERADE
+      -A POSTROUTING -s {{ gate_wifi_net_cidr }} -o isp -j MASQUERADE
+      COMMIT
+
+      *filter
+      -A FORWARD -i lan  -o isp  -j ACCEPT
+      -A FORWARD -i wifi -o isp  -j ACCEPT
+      -A FORWARD -i isp  -o lan  {{ ACCEPT_RELATED }}
+      -A FORWARD -i isp  -o wifi {{ ACCEPT_RELATED }}
+      -A FORWARD -i lan  -o ovpn -j ACCEPT
+      -A FORWARD -i ovpn -o lan  -j ACCEPT
+      COMMIT
+    insertafter: EOF
+
+
+
+
+
+

8.5. Configure DHCP For The Gate-WiFi Ethernet

+
+

+To accommodate commodity Wi-Fi access points without re-configuring
+them, the institute attempts to look like an up-link, an ISP, e.g. a
+cable modem.  It thus expects the wireless AP to route non-local
+traffic out its WAN Ethernet port, and to get an IP address for the
+WAN port using DHCP.  Gate therefore runs ISC's DHCP daemon,
+configured to listen on one network interface, recognize exactly one
+client host, and provide that one client with an IP address and
+customary network parameters (default route, time server, etc.).
+

+ +

+Two Ansible variables are needed to configure Gate's DHCP service, +specifically the sole subnet host: wifi_wan_name is any word +appropriate for identifying the Wi-Fi AP, and wifi_wan_mac is the +AP's MAC address. +

+ +
+private/vars.yml
wifi_wan_mac:               94:83:c4:19:7d:57
+wifi_wan_name:              campus-wifi-ap
+
+
+ +

+If Gate is configured with ./abbey config gate, or a new gate is
+configured with ./abbey config new-gate, and the machine is then
+connected to the actual networks without a reboot, the following
+command must be executed.
+

+ +
+sudo systemctl start isc-dhcp-server
+
+ + +

+If physically moved or rebooted for some other reason, the above +command would not be necessary. +

+ +

+Installation and configuration of the DHCP daemon follows. Note that +the daemon listens only on the Gate-WiFi network interface. +

+ +
+roles_t/gate/tasks/main.yml
+- name: Install DHCP server.
+  become: yes
+  apt: pkg=isc-dhcp-server
+
+- name: Configure DHCP interface.
+  become: yes
+  lineinfile:
+    path: /etc/default/isc-dhcp-server
+    line: INTERFACESv4="wifi"
+    regexp: ^INTERFACESv4=
+  notify: Restart DHCP server.
+
+- name: Configure DHCP for WiFiAP service.
+  become: yes
+  copy:
+    content: |
+      default-lease-time 3600;
+      max-lease-time 7200;
+      ddns-update-style none;
+      authoritative;
+      log-facility daemon;
+
+      subnet {{ gate_wifi_net }} netmask {{ gate_wifi_net_mask }} {
+        option subnet-mask {{ gate_wifi_net_mask }};
+        option broadcast-address {{ gate_wifi_broadcast }};
+        option routers {{ gate_wifi_addr }};
+      }
+
+      host {{ wifi_wan_name }} {
+        hardware ethernet {{ wifi_wan_mac }};
+        fixed-address {{ wifi_wan_addr }};
+      }
+    dest: /etc/dhcp/dhcpd.conf
+  notify: Restart DHCP server.
+
+- name: Enable DHCP server.
+  become: yes
+  systemd:
+    service: isc-dhcp-server
+    enabled: yes
+
+
+ +
+roles_t/gate/handlers/main.yml
+- name: Restart DHCP server.
+  become: yes
+  systemd:
+    service: isc-dhcp-server
+    state: restarted
+
+
+
+
+
+

8.6. Install Server Certificate

+
+

+The (OpenVPN) server on Gate uses an institute certificate (and key) +to authenticate itself to its clients. It uses the /etc/server.crt +and /etc/server.key files just because the other servers (on Core +and Front) do. +

+ +
+roles_t/gate/tasks/main.yml
+- name: Install server certificate/key.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
+    dest: /etc/server.{{ item.typ }}
+    mode: "{{ item.mode }}"
+  loop:
+  - { path: "issued/gate.{{ domain_priv }}", typ: crt,
+      mode: "u=r,g=r,o=r" }
+  - { path: "private/gate.{{ domain_priv }}", typ: key,
+      mode: "u=r,g=,o=" }
+  notify: Restart OpenVPN.
+
+
+
+
+
+

8.7. Configure OpenVPN

+
+

+Gate uses OpenVPN to provide the institute's campus VPN service. Its +clients are not configured to route all of their traffic through +the VPN, so Gate pushes routes to the other institute networks. Gate +itself is on the private Ethernet and thereby learns about the route +to Front. +

+ +
+openvpn-gate-routes
push "route {{ private_net_and_mask }}"
+push "route {{ public_vpn_net_and_mask }}"
+
+
+ +

+The complete OpenVPN configuration for Gate includes a server +option, the pushed routes mentioned above, and the common options +discussed in The VPN Services. +

+ +
+openvpn-gate
server {{ campus_vpn_net_and_mask }}
+client-config-dir /etc/openvpn/ccd
+push "route {{ private_net_and_mask }}"
+push "route {{ public_vpn_net_and_mask }}"
+dev-type tun
+dev ovpn
+topology subnet
+client-to-client
+keepalive 10 120
+push "dhcp-option DOMAIN {{ domain_priv }}"
+push "dhcp-option DNS {{ core_addr }}"
+user nobody
+group nogroup
+persist-key
+persist-tun
+cipher AES-256-GCM
+auth SHA256
+max-clients 20
+ifconfig-pool-persist ipp.txt
+status openvpn-status.log
+verb 3
+ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
+cert /etc/server.crt
+key /etc/server.key
+dh dh2048.pem
+tls-auth ta.key 0
+
+
+ +

+Finally, here are the tasks (and handler) required to install and +configure the OpenVPN server on Gate. +

+ +
+roles_t/gate/tasks/main.yml
+- name: Install OpenVPN.
+  become: yes
+  apt: pkg=openvpn
+
+- name: Enable IP forwarding.
+  become: yes
+  sysctl:
+    name: net.ipv4.ip_forward
+    value: "1"
+    state: present
+
+- name: Create OpenVPN client configuration directory.
+  become: yes
+  file:
+    path: /etc/openvpn/ccd
+    state: directory
+  notify: Restart OpenVPN.
+
+- name: Disable former VPN clients.
+  become: yes
+  copy:
+    content: "disable\n"
+    dest: /etc/openvpn/ccd/{{ item }}
+  loop: "{{ revoked }}"
+  notify: Restart OpenVPN.
+  tags: accounts
+
+- name: Install OpenVPN secrets.
+  become: yes
+  copy:
+    src: ../Secret/{{ item.src }}
+    dest: /etc/openvpn/{{ item.dest }}
+    mode: u=r,g=,o=
+  loop:
+  - { src: gate-dh2048.pem, dest: dh2048.pem }
+  - { src: gate-ta.key, dest: ta.key }
+  notify: Restart OpenVPN.
+
+- name: Configure OpenVPN.
+  become: yes
+  copy:
+    content: |
+      server {{ campus_vpn_net_and_mask }}
+      client-config-dir /etc/openvpn/ccd
+      push "route {{ private_net_and_mask }}"
+      push "route {{ public_vpn_net_and_mask }}"
+      dev-type tun
+      dev ovpn
+      topology subnet
+      client-to-client
+      keepalive 10 120
+      push "dhcp-option DOMAIN {{ domain_priv }}"
+      push "dhcp-option DNS {{ core_addr }}"
+      user nobody
+      group nogroup
+      persist-key
+      persist-tun
+      cipher AES-256-GCM
+      auth SHA256
+      max-clients 20
+      ifconfig-pool-persist ipp.txt
+      status openvpn-status.log
+      verb 3
+      ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
+      cert /etc/server.crt
+      key /etc/server.key
+      dh dh2048.pem
+      tls-auth ta.key 0
+    dest: /etc/openvpn/server.conf
+    mode: u=r,g=r,o=
+  notify: Restart OpenVPN.
+
+
+ +
+roles_t/gate/handlers/main.yml
+- name: Restart OpenVPN.
+  become: yes
+  systemd:
+    service: openvpn@server
+    state: restarted
+
+
+
+
+
+
+

9. The Campus Role

+
+

+The campus role configures generic campus server machines: network +NAS, DVRs, wireless sensors, etc. These are simple Debian machines +administered remotely via Ansible. They should use the campus name +server, sync with the campus time server, trust the institute +certificate authority, and deliver email addressed to root to the +system administrator's account on Core. +

+ +

+Wireless campus devices can get a key to the campus VPN from the +./inst client campus command, but their OpenVPN client must be +configured manually. +
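+
+For reference, a manually configured client might use a file like the
+following sketch.  The remote name, port and file names are
+assumptions; the cipher, auth and tls-auth settings must match the
+server configuration installed on Gate.

```
# Sketch of a manual OpenVPN client configuration for a wireless
# campus device.  The remote host/port and file names are assumptions.
client
dev tun
remote gate.small.private 1194 udp
nobind
remote-cert-tls server
cipher AES-256-GCM
auth SHA256
ca small.private.crt
cert device.crt
key device.key
tls-auth ta.key 1
persist-key
persist-tun
verb 3
```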

+
+
+

9.1. Include Particulars

+
+

+The following should be familiar boilerplate by now. +

+ +
+roles_t/campus/tasks/main.yml
---
+- name: Include public variables.
+  include_vars: ../public/vars.yml
+- name: Include private variables.
+  include_vars: ../private/vars.yml
+
+
+
+
+
+

9.2. Configure Hostname

+
+

+Campus machines should be using the expected host name.
+

+ +
+roles_t/campus/tasks/main.yml
+- name: Configure hostname.
+  become: yes
+  copy:
+    content: "{{ item.content }}"
+    dest: "{{ item.file }}"
+  loop:
+  - { file: /etc/hostname,
+      content: "{{ inventory_hostname }}" }
+  - { file: /etc/mailname,
+      content: "{{ inventory_hostname }}.{{ domain_priv }}" }
+  when: inventory_hostname != ansible_hostname
+  notify: Update hostname.
+
+
+
+ +
+roles_t/campus/handlers/main.yml
---
+- name: Update hostname.
+  become: yes
+  command: hostname -F /etc/hostname
+
+
+
+
+
+

9.3. Enable Systemd Resolved

+
+

+Campus machines start the systemd-networkd and systemd-resolved +service units on boot. See Enable Systemd Resolved. +

+ +
+roles_t/campus/tasks/main.yml
+- name: Install systemd-resolved.
+  become: yes
+  apt: pkg=systemd-resolved
+  when:
+  - ansible_distribution == 'Debian'
+  - 11 < ansible_distribution_major_version|int
+
+- name: Enable/Start systemd-networkd.
+  become: yes
+  systemd:
+    service: systemd-networkd
+    enabled: yes
+    state: started
+
+- name: Enable/Start systemd-resolved.
+  become: yes
+  systemd:
+    service: systemd-resolved
+    enabled: yes
+    state: started
+
+- name: Link /etc/resolv.conf.
+  become: yes
+  file:
+    path: /etc/resolv.conf
+    src: /run/systemd/resolve/resolv.conf
+    state: link
+    force: yes
+  when:
+  - ansible_distribution == 'Debian'
+  - 12 > ansible_distribution_major_version|int
+
+
+
+
+
+

9.4. Configure Systemd Resolved

+
+

+Campus machines use the campus name server on Core (or dns.google), +and include the institute's private domain in their search lists. +

+ +
+roles_t/campus/tasks/main.yml
+- name: Configure resolved.
+  become: yes
+  lineinfile:
+    path: /etc/systemd/resolved.conf
+    regexp: "{{ item.regexp }}"
+    line: "{{ item.line }}"
+  loop:
+  - { regexp: '^ *DNS *=', line: "DNS={{ core_addr }}" }
+  - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" }
+  - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" }
+  notify:
+  - Reload Systemd.
+  - Restart Systemd resolved.
+
+
+ +
+roles_t/campus/handlers/main.yml
+- name: Reload Systemd.
+  become: yes
+  command: systemctl daemon-reload
+
+- name: Restart Systemd resolved.
+  become: yes
+  systemd:
+    service: systemd-resolved
+    state: restarted
+
+
+
+
+
+

9.5. Configure Systemd Timesyncd

+
+

+The institute uses a common time reference throughout the campus. +This is essential to campus security, improving the accuracy of log +and file timestamps. +

+ +
+roles_t/campus/tasks/main.yml
+- name: Configure timesyncd.
+  become: yes
+  lineinfile:
+    path: /etc/systemd/timesyncd.conf
+    line: NTP=ntp.{{ domain_priv }}
+  notify: Restart systemd-timesyncd.
+
+
+ +
+roles_t/campus/handlers/main.yml
+- name: Restart systemd-timesyncd.
+  become: yes
+  systemd:
+    service: systemd-timesyncd
+    state: restarted
+
+
+
+
+
+

9.6. Add Administrator to System Groups

+
+

+The administrator often needs to read (directories of) log files owned +by groups root and adm. Adding the administrator's account to +these groups speeds up debugging. +

+ +
+roles_t/campus/tasks/main.yml
+- name: Add {{ ansible_user }} to system groups.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: root,adm
+
+
+
+
+
+

9.7. Trust Institute Certificate Authority

+
+

+Campus hosts should recognize the institute's Certificate Authority as +trustworthy, so its certificate is added to the host's set of trusted +CAs. (For more information about how the small institute manages its +keys, certificates and passwords, see Keys.) +

+ +
+roles_t/campus/tasks/main.yml
+- name: Trust the institute CA.
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/ca.crt
+    dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt
+    mode: u=r,g=r,o=r
+    owner: root
+    group: root
+  notify: Update CAs.
+
+
+ +
+roles_t/campus/handlers/main.yml
+- name: Update CAs.
+  become: yes
+  command: update-ca-certificates
+
+
+
+
+
+

9.8. Install Unattended Upgrades

+
+

+The institute prefers to install security updates as soon as possible. +

+ +
+roles_t/campus/tasks/main.yml
+- name: Install basic software.
+  become: yes
+  apt: pkg=unattended-upgrades
+
+
+
+
+
+

9.9. Configure Postfix on Campus

+
+

+The Postfix settings used by the campus include message size, queue +times, and the relayhost Core. The default Debian configuration +(for an "Internet Site") is otherwise sufficient. Manual installation +may prompt for configuration type and mail name. The appropriate +answers are listed here but will be checked (corrected) by Ansible +tasks below. +

+ +
    +
  • General type of mail configuration: Internet Site
  • +
  • System mail name: new.small.private
  • +
+ +
+roles_t/campus/tasks/main.yml
+- name: Install Postfix.
+  become: yes
+  apt: pkg=postfix
+
+- name: Configure Postfix.
+  become: yes
+  lineinfile:
+    path: /etc/postfix/main.cf
+    regexp: "^ *{{ item.p }} *="
+    line: "{{ item.p }} = {{ item.v }}"
+  loop:
+  - p: smtpd_relay_restrictions
+    v: permit_mynetworks reject_unauth_destination
+  - { p: message_size_limit, v: 104857600 }
+  - { p: delay_warning_time, v: 1h }
+  - { p: maximal_queue_lifetime, v: 4h }
+  - { p: bounce_queue_lifetime, v: 4h }
+  - { p: home_mailbox, v: Maildir/ }
+  - { p: myhostname,
+      v: "{{ inventory_hostname }}.{{ domain_priv }}" }
+  - { p: mydestination,
+      v: "{{ postfix_mydestination | default('') }}" }
+  - { p: relayhost, v: "[smtp.{{ domain_priv }}]" }
+  - { p: inet_interfaces, v: loopback-only }
+  notify: Restart Postfix.
+
+- name: Enable/Start Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/campus/handlers/main.yml
+- name: Restart Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    state: restarted
+
+
+
+
+
+

9.10. Hard-wire Important IP Addresses

+
+

+For the edification of programs consulting the /etc/hosts file, the +institute's domain name and public IP address are added. The Debian +custom of translating the host name into 127.0.1.1 is also followed. +

+ +
+roles_t/campus/tasks/main.yml
+- name: Hard-wire important IP addresses.
+  become: yes
+  lineinfile:
+    path: /etc/hosts
+    regexp: "{{ item.regexp }}"
+    line: "{{ item.line }}"
+    insertafter: EOF
+  vars:
+    name: "{{ inventory_hostname }}"
+  loop:
+  - regexp: "^{{ front_addr }}[         ].*"
+    line: "{{ front_addr }}     {{ domain_name }}"
+  - regexp: "^127.0.1.1[        ].*"
+    line: "127.0.1.1    {{ name }}.localdomain {{ name }}"
+
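+
+With placeholder values, the lines maintained by these tasks might
+read as follows (the address and names are hypothetical).

```
# Example /etc/hosts lines (placeholder address and names):
192.0.2.1	small.example.org
127.0.1.1	server.localdomain server
```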
+
+
+
+
+

9.11. Configure NRPE

+
+

+Each campus host runs an NRPE (NAGIOS Remote Plugin Executor)
+server so that the NAGIOS4 server on Core can collect statistics.
+The NAGIOS service is discussed in the Configure NRPE section of The
+Core Role.
+

+ +
+roles_t/campus/tasks/main.yml
+- name: Install NRPE.
+  become: yes
+  apt:
+    pkg: [ nagios-nrpe-server, lm-sensors ]
+
+- name: Install inst_sensors NAGIOS plugin.
+  become: yes
+  copy:
+    src: ../core/files/inst_sensors
+    dest: /usr/local/sbin/inst_sensors
+    mode: u=rwx,g=rx,o=rx
+
+- name: Configure NRPE server.
+  become: yes
+  copy:
+    content: |
+      allowed_hosts=127.0.0.1,::1,{{ core_addr }}
+    dest: /etc/nagios/nrpe_local.cfg
+  notify: Reload NRPE server.
+
+- name: Configure NRPE commands.
+  become: yes
+  copy:
+    src: nrpe.cfg
+    dest: /etc/nagios/nrpe.d/institute.cfg
+  notify: Reload NRPE server.
+
+- name: Enable/Start NRPE server.
+  become: yes
+  systemd:
+    service: nagios-nrpe-server
+    enabled: yes
+    state: started
+
+
+ +
+roles_t/campus/handlers/main.yml
+- name: Reload NRPE server.
+  become: yes
+  systemd:
+    service: nagios-nrpe-server
+    state: reloaded
+
+
+
+
+
+
+

10. The Ansible Configuration

+
+

+The small institute uses Ansible to maintain the configuration of its +servers. The administrator keeps an Ansible inventory in hosts, and +runs the playbook site.yml to apply the appropriate institutional +role(s) to each host. Examples of these files are included here, and +are used to test the roles. The example configuration applies the +institutional roles to VirtualBox machines prepared according to +chapter Testing. +

+ +

+The actual Ansible configuration is kept in a Git "superproject" +containing replacements for the example hosts inventory and +site.yml playbook, as well as the public/ and private/ +particulars. Thus changes to this document and its tangle are easily +merged with git pull --recurse-submodules or git submodule update, +while changes to the institute's particulars are committed to a +separate revision history. +

+
+
+

10.1. ansible.cfg

+
+

+The Ansible configuration file ansible.cfg contains just a handful +of settings, some included just to create a test jig as described in +Testing. +

+ +
    +
  • interpreter_python is set to suppress a warning from Ansible's +"automatic interpreter discovery" (described here). It declares +that Python 3 can be expected on all institute hosts.
  • +
  • vault_password_file is set to suppress prompts for the vault +password. The institute keeps its vault password in Secret/ (as +described in Keys) and thus sets this parameter to +Secret/vault-password.
  • +
  • inventory is set to avoid specifying it on the command line.
  • +
  • roles_path is set to the recently tangled roles files in +roles_t/ which are preferred in the test configuration.
  • +
+ +
+ansible.cfg
[defaults]
+interpreter_python=/usr/bin/python3
+vault_password_file=Secret/vault-password
+inventory=hosts
+roles_path=roles_t
+
+
+
+
+
+

10.2. hosts

+
+

+The Ansible inventory file hosts describes all of the institute's
+machines, starting with the main servers Front, Core and Gate.  It
+provides the IP addresses, administrator account names and passwords
+for each machine.  The IP addresses are all private, campus network
+addresses except Front's public IP.  The following example hosts
+file describes three test servers named front, core and gate.
+

+ +
+hosts
all:
+  vars:
+    ansible_user: sysadm
+    ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa
+  hosts:
+    front:
+      ansible_host: 192.168.57.3
+      ansible_become_password: "{{ become_front }}"
+    core:
+      ansible_host: 192.168.56.1
+      ansible_become_password: "{{ become_core }}"
+    gate:
+      ansible_host: 192.168.56.2
+      ansible_become_password: "{{ become_gate }}"
+  children:
+    campus:
+      hosts:
+        gate:
+
+
+ +

+The values of the ansible_become_password key are references to +variables defined in Secret/become.yml, which is loaded as +"extra" variables by a -e option on the ansible-playbook command +line. +

+ +
+Secret/become.yml
become_front: !vault |
+        $ANSIBLE_VAULT;1.1;AES256
+        3563626131333733666466393166323135383838666338666131336335326
+        3656437663032653333623461633866653462636664623938356563306264
+        3438660a35396630353065383430643039383239623730623861363961373
+        3376663366566326137386566623164313635303532393335363063333632
+        363163316436380a336562323739306231653561613837313435383230313
+        1653565653431356362
+become_core: !vault |
+        $ANSIBLE_VAULT;1.1;AES256
+        3464643665363937393937633432323039653530326465346238656530303
+        8633066663935316365376438353439333034666366363739616130643261
+        3232380a66356462303034636332356330373465623337393938616161386
+        4653864653934373766656265613636343334356361396537343135393663
+        313562613133380a373334393963623635653264663538656163613433383
+        5353439633234666134
+become_gate: !vault |
+        $ANSIBLE_VAULT;1.1;AES256
+        3138306434313739626461303736666236336666316535356561343566643
+        6613733353434333962393034613863353330623761623664333632303839
+        3838350a37396462343738303331356134373634306238633030303831623
+        0636537633139366333373933396637633034383132373064393939363231
+        636264323132370a393135666335303361326330623438613630333638393
+        1303632663738306634
+
+
+ +

+The passwords are individually encrypted just to make it difficult to +acquire a list of all institute privileged account passwords in one +glance. The multi-line values are generated by the ansible-vault +encrypt_string command, which uses the ansible.cfg file and thus +the Secret/vault-password file. +
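+
+For example, a new password might be encrypted, and the playbooks run
+with the vaulted passwords loaded, like so.  This is a sketch; the
+plaintext password is a placeholder.

```shell
# Encrypt a privileged account password for Secret/become.yml.
ansible-vault encrypt_string --name become_front 'example-password'

# Run the playbooks with the vaulted passwords as extra variables.
ansible-playbook -e @Secret/become.yml playbooks/site.yml
```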

+
+
+
+

10.3. playbooks/site.yml

+
+

+The example playbooks/site.yml playbook (below) applies the +appropriate institutional role(s) to the hosts and groups defined in +the example inventory: hosts. +

+ +
+playbooks/site.yml
---
+- name: Configure Front
+  hosts: front
+  roles: [ front ]
+
+- name: Configure Gate
+  hosts: gate
+  roles: [ gate ]
+
+- name: Configure Core
+  hosts: core
+  roles: [ core ]
+
+- name: Configure Campus
+  hosts: campus
+  roles: [ campus ]
+
+
+
+
+
+

10.4. Secret/vault-password

+
+

+As already mentioned, the small institute keeps its Ansible vault
+password, a "master secret", on the encrypted partition mounted at
+Secret/ in a file named vault-password.  The administrator
+generated a 16-character pronounceable password with gpw 1 16 and
+saved it like so: gpw 1 16 >Secret/vault-password.  The following
+example password matches the example encryptions above.
+

+ +
+Secret/vault-password
alitysortstagess
+
+
+
+
+
+

10.5. Creating A Working Ansible Configuration

+
+

+A working Ansible configuration can be "tangled" from this document to +produce the test configuration described in the Testing chapter. The +tangling is done by Emacs's org-babel-tangle function and has +already been performed with the resulting tangle included in the +distribution with this document. +

+ +

+An institution using the Ansible configuration herein can include this
+document and its tangle as a Git submodule, e.g. in Institute/, and
+thus safely merge updates while keeping public and private particulars
+separate, in sibling subdirectories public/ and private/.
+The following example commands create a new Git repo in ~/net/
+and add an Institute/ submodule.
+

+ +
+
cd
+mkdir net
+cd net
+git init
+git submodule add git://birchwood-abbey.net/~puck/Institute
+git add Institute
+
+
+ +

+An institute administrator would then need to add several more files. +

+ +
    +
  • A top-level Ansible configuration file, ansible.cfg, would be +created by copying Institute/ansible.cfg and changing the +roles_path to roles:Institute/roles.
  • +
  • A host inventory, hosts, would be created, perhaps by copying +Institute/hosts and changing its IP addresses.
  • +
  • A site playbook, site.yml, would be created in a new playbooks/ +subdirectory by copying Institute/playbooks/site.yml with +appropriate changes.
  • +
  • All of the files in Institute/public/ and Institute/private/ +would be copied, with appropriate changes, into new subdirectories +public/ and private/.
  • +
  • ~/net/Secret would be a symbolic link to the (auto-mounted?) +location of the administrator's encrypted USB drive, as described in +section Keys.
  • +
+ +

+The files in Institute/roles_t/ were "tangled" from this document +and must be copied to Institute/roles/ for reasons discussed in the +next section. This document does not "tangle" directly into +roles/ to avoid clobbering changes to a working (debugged!) +configuration. +

+ +

+The playbooks/ directory must include the institutional playbooks, +which find their settings and templates relative to this directory, +e.g. in ../private/vars.yml. Running institutional playbooks from +~/net/playbooks/ means they will use ~/net/private/ rather than +the example ~/net/Institute/private/. +

+ +
+
cp -r Institute/roles_t Institute/roles
+( cd playbooks; ln -s ../Institute/playbooks/* . )
+
+
+ +

+Given these preparations, the inst script should work in the +super-project's directory. +

+ +
+
./Institute/inst config -n
+
+
+
+
+
+

10.6. Maintaining A Working Ansible Configuration

+
+

+The Ansible roles currently tangle into the roles_t/ directory to +ensure that debugged Ansible code in roles/ is not clobbered by code +tangled from this document. Comparing roles_t/ with roles/ will +reveal any changes made to roles/ during debugging that need to be +reconciled with this document as well as any policy changes in this +document that require changes to the current roles/. +
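+
+Such a comparison might be as simple as the following command, a
+sketch assuming the configuration layout described above.

```shell
# Compare the freshly tangled roles with the working copies.
diff -ru roles_t roles
```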

+ +

+When debugging literate programs becomes A Thing, then this document +can tangle directly into roles/, and literate debuggers can find +their way back to the code block in this document. +

+
+
+
+
+

11. The Institute Commands

+
+

+The institute's administrator uses a convenience script to reliably +execute standard procedures. The script is run with the command name +./inst because it is intended to run "in" the same directory as the +Ansible configuration. The Ansible commands it executes are expected +to get their defaults from ./ansible.cfg. +

+
+
+

11.1. Sub-command Blocks

+
+

+The code blocks in this chapter tangle into the inst script. Each +block examines the script's command line arguments to determine +whether its sub-command was intended to run, and exits with an +appropriate code when it is done. +

+ +

+The first code block is the header of the ./inst script. +

+ +
+inst
#!/usr/bin/perl -w
+#
+# DO NOT EDIT.  This file was tangled from an institute.org file.
+
+use strict;
+use IO::File;
+
+
+
+
+
+

11.2. Sanity Check

+
+

+The next code block does not implement a sub-command; it implements +part of all ./inst sub-commands. It performs a "sanity check" on +the current directory, warning of missing files or directories, and +especially checking that all files in private/ have appropriate +permissions. It probes past the Secret/ mount point (probing for +Secret/become.yml) to ensure the volume is mounted. +

+ +
+inst
+sub note_missing_file_p ($);
+sub note_missing_directory_p ($);
+
+{
+  my $missing = 0;
+  if (note_missing_file_p "ansible.cfg") { $missing += 1; }
+  if (note_missing_file_p "hosts") { $missing += 1; }
+  if (note_missing_directory_p "Secret") { $missing += 1; }
+  if (note_missing_file_p "Secret/become.yml") { $missing += 1; }
+  if (note_missing_directory_p "playbooks") { $missing += 1; }
+  if (note_missing_file_p "playbooks/site.yml") { $missing += 1; }
+  if (note_missing_directory_p "roles") { $missing += 1; }
+  if (note_missing_directory_p "public") { $missing += 1; }
+  if (note_missing_directory_p "private") { $missing += 1; }
+
+  for my $filename (glob "private/*") {
+    my $perm = (stat $filename)[2];
+    if ($perm & 077) {
+      print "$filename: not private\n";
+    }
+  }
+  die "$missing missing files\n" if $missing != 0;
+}
+
+sub note_missing_file_p ($) {
+  my ($filename) = @_;
+  if (! -f $filename) {
+    print "$filename: missing\n";
+    return 1;
+  } else {
+    return 0;
+  }
+}
+
+sub note_missing_directory_p ($) {
+  my ($dirname) = @_;
+  if (! -d $dirname) {
+    print "$dirname: missing\n";
+    return 1;
+  } else {
+    return 0;
+  }
+}
+
+
+
+
+
+

11.3. Importing Ansible Variables

+
+

+To ensure that Ansible and ./inst agree on certain variable values
+(esp. private values like network addresses), a
+check-inst-vars.yml playbook is used to update the Perl syntax file
+private/vars.pl before ./inst loads it.  The Perl code in inst
+declares the necessary global variables and private/vars.pl sets
+them.

+ +
+inst
+sub mysystem (@) {
+  my $line = join (" ", @_);
+  print "$line\n";
+  my $status = system $line;
+  die "status: $status\nCould not run $line: $!\n" if $status != 0;
+}
+
+mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null";
+
+our ($domain_name, $domain_priv, $front_addr, $gate_wifi_addr);
+do "./private/vars.pl";
+
+
+ +

+The playbook that updates private/vars.pl: +

+ +
+playbooks/check-inst-vars.yml
- hosts: localhost
+  gather_facts: no
+  tasks:
+  - include_vars: ../public/vars.yml
+  - include_vars: ../private/vars.yml
+  - copy:
+      content: |
+        $domain_name = "{{ domain_name }}";
+        $domain_priv = "{{ domain_priv }}";
+        $front_addr = "{{ front_addr }}";
+        $gate_wifi_addr = "{{ gate_wifi_addr }}";
+      dest: ../private/vars.pl
+      mode: u=rw,g=,o=
+
+
+
+
+
+

11.4. The CA Command

+
+

+The next code block implements the CA sub-command, which creates a +new CA (certificate authority) in Secret/CA/ as well as SSH and PGP +keys for the administrator, Monkey, Front and root, also in +sub-directories of Secret/. The CA is created with the "common +name" provided by the full_name variable. An example is given +here. +

+ +
+public/vars.yml
full_name: Small Institute LLC
+
+
+ +

+The Secret/ directory is on an off-line, encrypted volume plugged in +just for the duration of ./inst commands, so Secret/ is actually a +symbolic link to a volume's automount location. +

+ +
+ln -s /media/sysadm/ADE7-F866/ Secret
+
+ + +

+The Secret/CA/ directory is prepared using Easy RSA's make-cadir +command. The Secret/CA/vars file thus created is edited to contain +the appropriate names (or just to set EASYRSA_DN to cn_only). +

+ +
+sudo apt install easy-rsa
+( cd Secret/; make-cadir CA )
+./inst CA
+
+ + +

+Running ./inst CA creates the new CA and keys. The command prompts +for the Common Name (or several levels of Organizational names) of the +certificate authority. The full_name is given: Small Institute +LLC. The CA is used to issue certificates for front, gate and +core, which are installed on the servers during the next ./inst +config. +

+ +
+inst
+if (defined $ARGV[0] && $ARGV[0] eq "CA") {
+  die "usage: $0 CA\n" if @ARGV != 1;
+  die "Secret/CA/easyrsa: not an executable\n"
+    if ! -x "Secret/CA/easyrsa";
+  die "Secret/CA/pki/: already exists\n" if -e "Secret/CA/pki";
+  mysystem "cd Secret/CA; ./easyrsa init-pki";
+  mysystem "cd Secret/CA; ./easyrsa build-ca nopass";
+  # Common Name: small.example.org
+
+  my $dom = $domain_name;
+  my $pvt = $domain_priv;
+  mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass";
+  mysystem "cd Secret/CA; ./easyrsa build-server-full gate.$pvt nopass";
+  mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass";
+  mysystem "cd Secret/CA; ./easyrsa build-client-full core nopass";
+  umask 077;
+  mysystem "openvpn --genkey --secret Secret/front-ta.key";
+  mysystem "openvpn --genkey --secret Secret/gate-ta.key";
+  mysystem "openssl dhparam -out Secret/front-dh2048.pem 2048";
+  mysystem "openssl dhparam -out Secret/gate-dh2048.pem 2048";
+
+  mysystem "mkdir --mode=700 Secret/root.gnupg";
+  mysystem ("gpg --homedir Secret/root.gnupg",
+            " --batch --quick-generate-key --passphrase ''",
+            " root\@core.$pvt");
+  mysystem ("gpg --homedir Secret/root.gnupg",
+            " --export --armor --output root-pub.pem",
+            " root\@core.$pvt");
+  chmod 0440, "root-pub.pem";
+  mysystem ("gpg --homedir Secret/root.gnupg",
+            " --export-secret-key --armor --output root-sec.pem",
+            " root\@core.$pvt");
+  chmod 0400, "root-sec.pem";
+
+  mysystem "mkdir Secret/ssh_admin";
+  chmod 0700, "Secret/ssh_admin";
+  mysystem ("ssh-keygen -q -t rsa"
+            ." -C A\\ Small\\ Institute\\ Administrator",
+            " -N '' -f Secret/ssh_admin/id_rsa");
+
+  mysystem "mkdir Secret/ssh_monkey";
+  chmod 0700, "Secret/ssh_monkey";
+  mysystem "echo 'HashKnownHosts  no' >Secret/ssh_monkey/config";
+  mysystem ("ssh-keygen -q -t rsa -C monkey\@core",
+            " -N '' -f Secret/ssh_monkey/id_rsa");
+
+  mysystem "mkdir Secret/ssh_front";
+  chmod 0700, "Secret/ssh_front";
+  mysystem "ssh-keygen -A -f Secret/ssh_front -C $dom";
+  exit;
+}
+
+
+
+
+
+

11.5. The Config Command

+
+

+The next code block implements the config sub-command, which +provisions network services by running the site.yml playbook +described in playbooks/site.yml. It recognizes an optional -n +flag indicating that the service configurations should just be +checked. Given an optional host name, it provisions (or checks) just +the named host. +

+ +

+Example command lines: +

+
+./inst config
+./inst config -n
+./inst config HOST
+./inst config -n HOST
+
+ + +
+inst
+if (defined $ARGV[0] && $ARGV[0] eq "config") {
+  die "Secret/CA/easyrsa: not executable\n"
+    if ! -x "Secret/CA/easyrsa";
+  shift;
+  my $cmd = "ansible-playbook -e \@Secret/become.yml";
+  if (defined $ARGV[0] && $ARGV[0] eq "-n") {
+    shift;
+    $cmd .= " --check --diff"
+  }
+  if (@ARGV == 0) {
+    ;
+  } elsif (defined $ARGV[0]) {
+    my $hosts = lc $ARGV[0];
+    die "$hosts: contains illegal characters\n"
+      if $hosts !~ /^!?[a-z][-a-z0-9,!]+$/;
+    $cmd .= " -l $hosts";
+  } else {
+    die "usage: $0 config [-n] [HOSTS]\n";
+  }
+  $cmd .= " playbooks/site.yml";
+  mysystem $cmd;
+  exit;
+}
+
+
+
+
+
+

11.6. Account Management

+
+

+For general information about members and their Unix accounts, see
+Accounts.  The account management sub-commands maintain a mapping
+associating member "usernames" (Unix account names) with their
+records.  The mapping is stored, along with other data, in
+private/members.yml as the value associated with the key members.

+ +

+A new member's record in the members mapping will have the status +key value current. That key gets value former when the member +leaves.4 Access by former members is revoked by invalidating the +Unix account passwords, removing any authorized SSH keys from Front +and Core, and disabling their VPN certificates. +

+ +
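The transition can be sketched abstractly.  This is an illustration in Python, not part of the tangled inst script; it mirrors what the ./inst old sub-command (below) does to the membership roll.

```python
# Illustration only: the record life cycle described above.  Retiring
# a member flips the status to "former" and merges the member's
# client names into the sorted revocation list.

def retire(roll, user):
    """Mark a member "former" and revoke all of the member's clients."""
    member = roll["members"][user]
    member["status"] = "former"
    roll["revoked"] = sorted(member["clients"] + roll["revoked"])

roll = {"members": {"dick": {"status": "current",
                             "clients": ["dick-note", "dick-razr"]}},
        "revoked": ["dick-phone"]}
retire(roll, "dick")
assert roll["members"]["dick"]["status"] == "former"
assert roll["revoked"] == ["dick-note", "dick-phone", "dick-razr"]
```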

+The example file (below) contains a membership roll with one +membership record, for an account named dick, which was issued +client certificates for devices named dick-note, dick-phone and +dick-razr. dick-phone appears to be lost because its certificate +was revoked. Dick's membership record includes a vault-encrypted +password (for Fetchmail) and the two password hashes installed on +Front and Core. (The example hashes are truncated versions.) +

+ +
+private/members.yml
---
+members:
+  dick:
+    status: current
+    clients:
+    - dick-note
+    - dick-phone
+    - dick-razr
+    password_front:
+      $6$17h49U76$c7TsH6eMVmoKElNANJU1F1LrRrqzYVDreNu.QarpCoSt9u0gTHgiQ
+    password_core:
+      $6$E9se3BoSilq$T.W8IUb/uSlhrVEWUQsAVBweiWB4xb3ebQ0tguVxJaeUkqzVmZ
+    password_fetchmail: !vault |
+      $ANSIBLE_VAULT;1.1;AES256
+      38323138396431323564366136343431346562633965323864633938613363336
+      4333334333966363136613264636365383031376466393432623039653230390a
+      39366232633563646361616632346238333863376335633639383162356661326
+      4363936393530633631616630653032343465383032623734653461323331310a
+      6535633263656434393030333032343533626235653332626330666166613833
+usernames:
+- dick
+revoked:
+- dick-phone
+
+
+ +

+The test campus starts with the empty membership roll found in +private/members-empty.yml and saved in private/members.yml +(which is not tangled from this document, thus not over-written +during testing). If members.yml is not found, members-empty.yml +is used instead. +

+ +
+private/members-empty.yml
---
+members:
+usernames: []
+revoked: []
+
+
+ +

+Both locations go on the membership_rolls variable used by the +include_vars tasks. +

+ +
+private/vars.yml
membership_rolls:
+- "../private/members.yml"
+- "../private/members-empty.yml"
+
+
+ +
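How the tasks consume this list is defined with the roles elsewhere in this document; one plausible sketch (an assumption, not the tangled code) uses Ansible's first_found lookup, which returns the first listed path that exists:

```yaml
# Sketch only: load members.yml when present, else members-empty.yml.
- include_vars: "{{ lookup('first_found', membership_rolls) }}"
```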

+Using the Perl library YAML::XS, the subroutine for reading the
+membership roll is simple, returning the top-level hash read from
+the file.  The dump subroutine is another story (below).

+ +
+inst
+use YAML::XS qw(LoadFile DumpFile);
+
+sub read_members_yaml () {
+  my $path;
+  $path = "private/members.yml";
+  if (-e $path) { return LoadFile ($path); }
+  $path = "private/members-empty.yml";
+  if (-e $path) { return LoadFile ($path); }
+  die "private/members.yml: not found\n";
+}
+
+sub write_members_yaml ($) {
+  my ($yaml) = @_;
+  my $old_umask = umask 077;
+  my $path = "private/members.yml";
+  print "$path: "; STDOUT->flush;
+  eval { #DumpFile ("$path.tmp", $yaml);
+         dump_members_yaml ("$path.tmp", $yaml);
+         rename ("$path.tmp", $path)
+           or die "Could not rename $path.tmp: $!\n"; };
+  my $err = $@;
+  umask $old_umask;
+  if ($err) {
+    print "ERROR\n";
+  } else {
+    print "updated\n";
+  }
+  die $err if $err;
+}
+
+sub dump_members_yaml ($$) {
+  my ($pathname, $yaml) = @_;
+  my $O = new IO::File;
+  open ($O, ">$pathname") or die "Could not open $pathname: $!\n";
+  print $O "---\n";
+  if (keys %{$yaml->{"members"}}) {
+    print $O "members:\n";
+    for my $user (sort keys %{$yaml->{"members"}}) {
+      print_member ($O, $yaml->{"members"}->{$user});
+    }
+    print $O "usernames:\n";
+    for my $user (sort keys %{$yaml->{"members"}}) {
+      print $O "- $user\n";
+    }
+  } else {
+    print $O "members:\n";
+    print $O "usernames: []\n";
+  }
+  if (@{$yaml->{"revoked"}}) {
+    print $O "revoked:\n";
+    for my $name (@{$yaml->{"revoked"}}) {
+      print $O "- $name\n";
+    }
+  } else {
+    print $O "revoked: []\n";
+  }
+  close $O or die "Could not close $pathname: $!\n";
+}
+
+
+ +

+The first implementation, using YAML::Tiny, balked at the !vault
+data type.  The current version, using YAML::XS (Simonov's libyaml),
+does not support local data types either, but it does not abort; it
+just produces a multi-line string.  Luckily the structure of
+members.yml is relatively simple and fixed, so a purpose-built
+printer can add back the !vault data types at the appropriate
+points.  YAML::XS is thus used only as a (somewhat borked) parser.
+Also luckily, the YAML produced by the purpose-built printer makes
+the resulting membership roll easier to read, with the username and
+status at the top of each record.

+ +
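The re-tagging can be shown in miniature.  This Python sketch (illustration only, with placeholder ciphertext) demonstrates the transformation: a value that parsed as a plain multi-line string is printed back with the !vault tag and block-scalar indentation.

```python
# Illustration only: re-attach the "!vault" tag lost by a generic
# YAML load, re-indenting the ciphertext as a block scalar.

def print_vault_value(key, ciphertext, indent="    "):
    lines = [indent + key + ": !vault |"]
    for line in ciphertext.splitlines():
        lines.append(indent + "  " + line)
    return "\n".join(lines) + "\n"

out = print_vault_value("password_fetchmail",
                        "$ANSIBLE_VAULT;1.1;AES256\n3832...")
assert out == ("    password_fetchmail: !vault |\n"
               "      $ANSIBLE_VAULT;1.1;AES256\n"
               "      3832...\n")
```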
+inst
+sub print_member ($$) {
+  my ($out, $member) = @_;
+  print $out "  ", $member->{"username"}, ":\n";
+  print $out "    username: ", $member->{"username"}, "\n";
+  print $out "    status: ", $member->{"status"}, "\n";
+  if (@{$member->{"clients"} || []}) {
+    print $out "    clients:\n";
+    for my $name (@{$member->{"clients"} || []}) {
+      print $out "    - ", $name, "\n";
+    }
+  } else {
+    print $out "    clients: []\n";
+  }
+  print $out "    password_front: ", $member->{"password_front"}, "\n";
+  print $out "    password_core: ", $member->{"password_core"}, "\n";
+  if (defined $member->{"password_fetchmail"}) {
+    print $out "    password_fetchmail: !vault |\n";
+    for my $line (split /\n/, $member->{"password_fetchmail"}) {
+      print $out "      $line\n";
+    }
+  }
+  my @standard_keys = ( "username", "status", "clients",
+                        "password_front", "password_core",
+                        "password_fetchmail" );
+  my @other_keys = (sort
+                    grep { my $k = $_;
+                           ! grep { $_ eq $k } @standard_keys }
+                    keys %$member);
+  for my $key (@other_keys) {
+    print $out "    $key: ", $member->{$key}, "\n";
+  }
+}
+
+
+
+
+
+

11.7. The New Command

+
+

+The next code block implements the new sub-command. It adds a new +member to the institute's membership roll. It runs an Ansible +playbook to create the member's Nextcloud user, updates +private/members.yml, and runs the site.yml playbook. The site +playbook (re)creates the member's accounts on Core and Front, +(re)installs the member's personal homepage on Front, and the member's +Fetchmail service on Core. All services are configured with an +initial, generated password. +

+ +
+inst
+sub valid_username (@);
+sub shell_escape ($);
+sub strip_vault ($);
+
+if (defined $ARGV[0] && $ARGV[0] eq "new") {
+  my $user = valid_username (@ARGV);
+  my $yaml = read_members_yaml ();
+  my $members = $yaml->{"members"};
+  die "$user: already exists\n" if defined $members->{$user};
+
+  my $pass = `apg -n 1 -x 12 -m 12`; chomp $pass;
+  print "Initial password: $pass\n";
+  my $epass = shell_escape $pass;
+  my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front;
+  my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core;
+  my $vault = strip_vault `ansible-vault encrypt_string "$epass"`;
+  mysystem ("ansible-playbook -e \@Secret/become.yml",
+            " playbooks/nextcloud-new.yml",
+            " -e user=$user", " -e pass=\"$epass\"");
+  $members->{$user} = { "username" => $user,
+                        "status" => "current",
+                        "password_front" => $front,
+                        "password_core" => $core,
+                        "password_fetchmail" => $vault };
+  write_members_yaml
+    { "members" => $members,
+      "revoked" => $yaml->{"revoked"} };
+  mysystem ("ansible-playbook -e \@Secret/become.yml",
+             " -t accounts -l core,front playbooks/site.yml");
+  exit;
+}
+
+sub valid_username (@) {
+  my $sub = $_[0];
+  die "usage: $0 $sub USER\n"
+    if @_ != 2;
+  my $username = lc $_[1];
+  die "$username: does not begin with an alphabetic character\n"
+    if $username !~ /^[a-z]/;
+  die "$username: contains non-alphanumeric character(s)\n"
+    if $username !~ /^[a-z0-9]+$/;
+  return $username;
+}
+
+sub shell_escape ($) {
+  my ($string) = @_;
+  my $result = "$string";
+  $result =~ s/([\$`"\\ ])/\\$1/g;
+  return ($result);
+}
+
+sub strip_vault ($) {
+  my ($string) = @_;
+  die "Unexpected result from ansible-vault: $string\n"
+    if $string !~ /^ *!vault [|]/;
+  my @lines = split /^ */m, $string;
+  return (join "", @lines[1..$#lines]);
+}
+
+
+ +
+playbooks/nextcloud-new.yml
- hosts: core
+  no_log: yes
+  tasks:
+  - name: Run occ user:add.
+    shell: |
+      spawn sudo -u www-data /usr/bin/php occ user:add {{ user }}
+      expect {
+        "Enter password:" {}
+        timeout { exit 1 }
+      }
+      send "{{ pass|quote }}\n";
+      expect {
+        "Confirm password:" {}
+        timeout { exit 2 }
+      }
+      send "{{ pass|quote }}\n";
+      expect {
+        "The user \"{{ user }}\" was created successfully" {}
+        timeout { exit 3 }
+      }
+    args:
+      chdir: /var/www/nextcloud/
+      executable: /usr/bin/expect
+
+
+
+
+
+

11.8. The Pass Command

+
+

+The institute's passwd command on Core securely emails root with a +member's desired password (hashed). The command may update the +servers immediately or let the administrator do that using the ./inst +pass command. In either case, the administrator needs to update the +membership roll, and so receives an encrypted email, which gets piped +into ./inst pass. This command decrypts the message, parses the +(YAML) content, updates private/members.yml, and runs the full +Ansible site.yml playbook to update the servers. If all goes well a +message is sent to member@core. +

+
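The shape of that encrypted message can be sketched as follows.  This is a Python illustration with hypothetical helper names, not the actual Perl code; passwd base64-encodes the new password (with MIME::Base64) so arbitrary characters survive the YAML round trip, and ./inst pass decodes it.

```python
# Illustration only: a two-line YAML mapping carrying the username
# and the base64-encoded new password.

import base64

def make_message(username, password):
    epass = base64.b64encode(password.encode()).decode()
    return "username: %s\npassword: %s\n" % (username, epass)

def parse_message(body):
    fields = dict(line.split(": ", 1) for line in body.splitlines())
    username = fields["username"]
    password = base64.b64decode(fields["password"]).decode()
    return username, password

body = make_message("dick", "correct horse battery staple")
assert parse_message(body) == ("dick", "correct horse battery staple")
```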
+
+

11.8.1. Less Aggressive passwd.

+
+

+The next code block implements the less aggressive passwd command.
+It is less aggressive because it just emails root.  It does not
+update the servers, so it does not need an SSH key and password to
+root (any privileged account) on Front, nor a set-UID root script
+(nor equivalent) on Core.  It runs (via Sudo) as the administrator's
+account, which is in the shadow group and so can read /etc/shadow
+to verify the member's current password.  The member will need to
+wait for confirmation from the administrator, but all keys to root
+at the institute stay in Secret/.

+ +
+roles_t/core/templates/passwd
#!/bin/perl -wT
+
+use strict;
+use IO::File;
+
+$ENV{PATH} = "/usr/sbin:/usr/bin:/bin";
+
+my ($username) = getpwuid $<;
+if ($username ne "{{ ansible_user }}") {
+  { exec ("sudo", "-u", "{{ ansible_user }}",
+          "/usr/local/bin/passwd", $username) };
+  print STDERR "Could not exec sudo: $!\n";
+  exit 1;
+}
+
+$username = $ARGV[0];
+my $passwd;
+{
+  my $SHADOW = new IO::File;
+  open $SHADOW, "</etc/shadow" or die "Cannot read /etc/shadow: $!\n";
+  my ($line) = grep /^$username:/, <$SHADOW>;
+  close $SHADOW;
+  die "No /etc/shadow record found: $username\n" if ! defined $line;
+  (undef, $passwd) = split ":", $line;
+}
+
+system "stty -echo";
+END { system "stty echo"; }
+
+print "Current password: ";
+my $pass = <STDIN>; chomp $pass;
+print "\n";
+my $hash = crypt($pass, $passwd);
+die "Sorry...\n" if $hash ne $passwd;
+
+print "New password: ";
+$pass = <STDIN>; chomp($pass);
+die "Passwords must be at least 10 characters long.\n"
+  if length $pass < 10;
+print "\nRetype password: ";
+my $pass2 = <STDIN>; chomp($pass2);
+print "\n";
+die "New passwords do not match!\n"
+  if $pass2 ne $pass;
+
+use MIME::Base64;
+my $epass = encode_base64 $pass;
+
+use File::Temp qw(tempfile);
+my ($TMP, $tmp) = tempfile;
+close $TMP;
+
+my $O = new IO::File;
+open $O, ("| gpg --encrypt --armor"
+          ." --trust-model always --recipient root\@core"
+          ." > $tmp") or die "Error running gpg > $tmp: $!\n";
+print $O <<EOD;
+username: $username
+password: $epass
+EOD
+close $O or die "Error closing pipe to gpg: $!\n";
+
+use File::Copy;
+open ($O, "| sendmail root");
+print $O <<EOD;
+From: root
+To: root
+Subject: New password.
+
+EOD
+$O->flush;
+copy $tmp, $O;
+#print $O `cat $tmp`;
+close $O or die "Error closing pipe to sendmail: $!\n";
+
+print "
+Your request was sent to Root.  PLEASE WAIT for email confirmation
+that the change was completed.\n";
+exit;
+
+
+
+
+
+

11.8.2. Less Aggressive Pass Command

+
+

+The following code block implements the ./inst pass command, used by +the administrator to update private/members.yml before running +playbooks/site.yml and emailing the concerned member. +

+ +
+inst
+use MIME::Base64;
+
+if (defined $ARGV[0] && $ARGV[0] eq "pass") {
+  my $I = new IO::File;
+  open $I, "gpg --homedir Secret/root.gnupg --quiet --decrypt |"
+    or die "Error running gpg: $!\n";
+  my $msg_yaml = LoadFile ($I);
+  close $I or die "Error closing pipe from gpg: $!\n";
+
+  my $user = $msg_yaml->{"username"};
+  die "Could not find a username in the decrypted input.\n"
+    if ! defined $user;
+  my $pass64 = $msg_yaml->{"password"};
+  die "Could not find a password in the decrypted input.\n"
+    if ! defined $pass64;
+
+  my $mem_yaml = read_members_yaml ();
+  my $members = $mem_yaml->{"members"};
+  my $member = $members->{$user};
+  die "No such member: $user\n" if ! defined $member;
+
+  my $pass = decode_base64 $pass64;
+  my $epass = shell_escape $pass;
+  my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front;
+  my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core;
+  my $vault = strip_vault `ansible-vault encrypt_string "$epass"`;
+  $member->{"password_front"} = $front;
+  $member->{"password_core"} = $core;
+  $member->{"password_fetchmail"} = $vault;
+
+  mysystem ("ansible-playbook -e \@Secret/become.yml",
+            "playbooks/nextcloud-pass.yml",
+            "-e user=$user", "-e \"pass=$epass\"");
+  write_members_yaml $mem_yaml;
+  mysystem ("ansible-playbook -e \@Secret/become.yml",
+            "-t accounts playbooks/site.yml");
+  my $O = new IO::File;
+  open ($O, "| sendmail $user\@$domain_priv")
+    or die "Could not pipe to sendmail: $!\n";
+  print $O "From: <root>
+To: <$user>
+Subject: Password change.
+
+Your new password has been distributed to the servers.
+
+As always: please email root with any questions or concerns.\n";
+  close $O or die "pipe to sendmail failed: $!\n";
+  exit;
+}
+
+
+ +

+And here is the playbook that interacts with Nextcloud's occ +users:resetpassword command using expect(1). +

+ +
+playbooks/nextcloud-pass.yml
- hosts: core
+  no_log: yes
+  tasks:
+  - name: Run occ user:resetpassword.
+    shell: |
+      spawn sudo -u www-data \
+            /usr/bin/php occ user:resetpassword {{ user }}
+      expect {
+        "Enter a new password:" {}
+        timeout { exit 1 }
+      }
+      send "{{ pass|quote }}\n"
+      expect {
+        "Confirm the new password:" {}
+        timeout { exit 2 }
+      }
+      send "{{ pass|quote }}\n"
+      expect {
+        "Successfully reset password for {{ user }}" {}
+        "Please choose a different password." { exit 3 }
+        timeout { exit 4 }
+      }
+    args:
+      chdir: /var/www/nextcloud/
+      executable: /usr/bin/expect
+
+
+
+
+
+

11.8.3. Installing the Less Aggressive passwd

+
+

+The following Ansible tasks install the less aggressive passwd +script in /usr/local/bin/passwd on Core, and a sudo policy file +declaring that any user can run the script as the admin user. The +admin user is added to the shadow group so that the script can read +/etc/shadow and verify a member's current password. The public PGP +key for root@core is also imported into the admin user's GnuPG +configuration so that the email to root can be encrypted. +

+ +
+roles_t/core/tasks/main.yml
+- name: Install institute passwd command.
+  become: yes
+  template:
+   src: passwd
+   dest: /usr/local/bin/passwd
+   mode: u=rwx,g=rx,o=rx
+
+- name: Authorize institute passwd command as {{ ansible_user }}.
+  become: yes
+  copy:
+    content: |
+      ALL ALL=({{ ansible_user }}) NOPASSWD: /usr/local/bin/passwd
+    dest: /etc/sudoers.d/01passwd
+    mode: u=r,g=r,o=
+    owner: root
+    group: root
+
+- name: Authorize {{ ansible_user }} to read /etc/shadow.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: shadow
+
+- name: Authorize {{ ansible_user }} to run /usr/bin/php as www-data.
+  become: yes
+  copy:
+    content: |
+      {{ ansible_user }} ALL=(www-data) NOPASSWD: /usr/bin/php
+    dest: /etc/sudoers.d/01www-data-php
+    mode: u=r,g=r,o=
+    owner: root
+    group: root
+
+- name: Install root PGP key file.
+  become: no
+  copy:
+    src: ../Secret/root-pub.pem
+    dest: ~/.gnupg-root-pub.pem
+    mode: u=r,g=r,o=r
+  notify: Import root PGP key.
+
+
+ +
+roles_t/core/handlers/main.yml
+- name: Import root PGP key.
+  become: no
+  command: gpg --import ~/.gnupg-root-pub.pem
+
+
+
+
+
+
+

11.9. The Old Command

+
+

+The old command disables a member's accounts and clients. +

+ +
+inst
+if (defined $ARGV[0] && $ARGV[0] eq "old") {
+  my $user = valid_username (@ARGV);
+  my $yaml = read_members_yaml ();
+  my $members = $yaml->{"members"};
+  my $member = $members->{$user};
+  die "$user: does not exist\n" if ! defined $member;
+
+  mysystem ("ansible-playbook -e \@Secret/become.yml",
+            "playbooks/nextcloud-old.yml -e user=$user");
+  $member->{"status"} = "former";
+  write_members_yaml { "members" => $members,
+                       "revoked" => [ sort @{$member->{"clients"}},
+                                           @{$yaml->{"revoked"}} ] };
+  mysystem ("ansible-playbook -e \@Secret/become.yml",
+            "-t accounts playbooks/site.yml");
+  exit;
+}
+
+
+ +
+playbooks/nextcloud-old.yml
- hosts: core
+  tasks:
+  - name: Run occ user:disable.
+    shell: |
+      spawn sudo -u www-data /usr/bin/php occ user:disable {{ user }}
+      expect {
+        "The specified user is disabled" {}
+        timeout { exit 1 }
+      }
+    args:
+      chdir: /var/www/nextcloud/
+      executable: /usr/bin/expect
+
+
+
+
+
+

11.10. The Client Command

+
+

+The client command creates an OpenVPN configuration (.ovpn) file +authorizing wireless devices to connect to the institute's VPNs. The +command uses the EasyRSA CA in Secret/. The generated configuration +is slightly different depending on the type of host, given as the +first argument to the command. +

+ +
    +
  • ./inst client android NEW USER
    +An android host runs OpenVPN for Android or work-alike. Two files +are generated. campus.ovpn configures a campus VPN connection, +and public.ovpn configures a connection to the institute's public +VPN.
  • + +
  • ./inst client debian NEW USER
    +A debian host runs a Debian desktop with Network Manager. Again +two files are generated, for the campus and public VPNs.
  • + +
  • ./inst client campus NEW
+A campus host is a Debian host (with or without desktop) that is +used by the institute generally, is not the property of a member, +never roams off campus, and so is remotely administered with +Ansible.  One file is generated, campus.ovpn.
  • +
+ +

+The administrator uses encrypted email to send .ovpn files to new +members. New members install the network-manager-openvpn-gnome and +openvpn-systemd-resolved packages, and import the .ovpn files into +Network Manager on their desktops. The .ovpn files for an +Android device are transferred by USB stick and should automatically +install when "opened". On campus hosts, the system administrator +copies the campus.ovpn file to /etc/openvpn/campus.conf. +

+ +

+The OpenVPN configurations generated for Debian hosts specify an up +script, update-systemd-resolved, installed in /etc/openvpn/ by the +openvpn-systemd-resolved package. The following configuration lines +instruct the OpenVPN clients to run this script whenever the +connection is restarted. +

+ +
+openvpn-up
script-security 2
+up /etc/openvpn/update-systemd-resolved
+up-restart
+
+
+ +
+inst
sub write_template ($$$$$$$$$);
+sub read_file ($);
+sub add_client ($$$);
+
+if (defined $ARGV[0] && $ARGV[0] eq "client") {
+  die "Secret/CA/easyrsa: not executable\n" if ! -x "Secret/CA/easyrsa";
+  my $type = $ARGV[1]||"";
+  my $name = $ARGV[2]||"";
+  my $user = $ARGV[3]||"";
+  if ($type eq "campus") {
+    die "usage: $0 client campus NAME\n" if @ARGV != 3;
+    die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
+  } elsif ($type eq "android" || $type eq "debian") {
+    die "usage: $0 client $type NAME USER\n" if @ARGV != 4;
+    die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
+  } else {
+    die "usage: $0 client [android|debian|campus] NAME [USER]\n";
+  }
+  my $yaml;
+  my $member;
+  if ($type ne "campus") {
+    $yaml = read_members_yaml;
+    my $members = $yaml->{"members"};
+    if (@ARGV == 4) {
+      $member = $members->{$user};
+      die "$user: does not exist\n" if ! defined $member;
+    }
+    if (defined $member) {
+      my ($owner) = grep { grep { $_ eq $name } @{$_->{"clients"}} }
+                    values %{$members};
+      die "$name: owned by $owner->{username}\n"
+        if defined $owner && $owner->{username} ne $member->{username};
+    }
+  }
+
+  die "Secret/CA: no certificate authority found\n"
+    if ! -d "Secret/CA/pki/issued";
+
+  if (! -f "Secret/CA/pki/issued/$name.crt") {
+    mysystem "cd Secret/CA; ./easyrsa build-client-full $name nopass";
+  } else {
+    print "Using existing key/cert...\n";
+  }
+
+  if ($type ne "campus") {
+    my $clients = $member->{"clients"};
+    if (! grep { $_ eq $name } @$clients) {
+      $member->{"clients"} = [ $name, @$clients ];
+      write_members_yaml $yaml;
+    }
+  }
+
+  umask 077;
+  my $DEV = $type eq "android" ? "tun" : "ovpn";
+  my $CA = read_file "Secret/CA/pki/ca.crt";
+  my $CRT = read_file "Secret/CA/pki/issued/$name.crt";
+  my $KEY = read_file "Secret/CA/pki/private/$name.key";
+  my $UP = $type eq "android" ? "" : "
+script-security 2
+up /etc/openvpn/update-systemd-resolved
+up-restart";
+
+  if ($type ne "campus") {
+    my $TA = read_file "Secret/front-ta.key";
+    write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $front_addr,
+                    $domain_name, "public.ovpn");
+    print "Wrote public VPN configuration to public.ovpn.\n";
+  }
+  my $TA = read_file "Secret/gate-ta.key";
+  write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $gate_wifi_addr,
+                  "gate.$domain_priv", "campus.ovpn");
+  print "Wrote campus VPN configuration to campus.ovpn.\n";
+
+  exit;
+}
+
+sub write_template ($$$$$$$$$) {
+  my ($DEV,$UP,$CA,$CRT,$KEY,$TA,$ADDR,$NAME,$FILE) = @_;
+  my $O = new IO::File;
+  open ($O, ">$FILE.tmp") or die "Could not open $FILE.tmp: $!\n";
+  print $O "client
+dev-type tun
+dev $DEV
+remote $ADDR
+nobind
+user nobody
+group nogroup
+persist-key
+persist-tun
+remote-cert-tls server
+verify-x509-name $NAME name
+cipher AES-256-GCM
+auth SHA256$UP
+verb 3
+key-direction 1
+<ca>\n$CA</ca>
+<cert>\n$CRT</cert>
+<key>\n$KEY</key>
+<tls-auth>\n$TA</tls-auth>\n";
+  close $O or die "Could not close $FILE.tmp: $!\n";
+  rename ("$FILE.tmp", $FILE)
+    or die "Could not rename $FILE.tmp: $!\n";
+}
+
+sub read_file ($) {
+  my ($path) = @_;
+  my $I = new IO::File;
+  open ($I, "<$path") or die "$path: could not read: $!\n";
+  local $/;
+  my $c = <$I>;
+  close $I or die "$path: could not close: $!\n";
+  return $c;
+}
+
+
+
+
+
+

11.11. Institute Command Help

+
+

+This should be the last block tangled into the inst script. It +catches any command lines that were not handled by a sub-command +above. +

+ +
+inst
+die "usage: $0 [CA|config|new|pass|old|client] ...\n";
+
+
+
+
+
+
+

12. Testing

+
+

+The example files in this document, ansible.cfg and hosts as +well as those in public/ and private/, along with the +matching EasyRSA certificate authority and GnuPG key-ring in +Secret/ (included in the distribution), can be used to configure +three VirtualBox VMs simulating Core, Gate and Front in a test network +simulating a campus Ethernet, campus ISP, and commercial cloud. With +the test network up and running, a simulated member's notebook can be +created, and alternately attached to the simulated campus Wi-Fi or the +simulated Internet (as though abroad). The administrator's notebook +in this simulation is the VirtualBox host. +

+ +

+The next two sections list the steps taken to create the simulated +Core, Gate and Front, and connect them to a simulated campus Ethernet, +campus ISP, and commercial cloud. The process is similar to that +described in The (Actual) Hardware, but is covered in detail here +where the VirtualBox hypervisor can be assumed and exact command lines +can be given (and copied during re-testing). The remaining sections +describe the manual testing process, simulating an administrator +adding and removing member accounts and devices, a member's desktop +sending and receiving email, etc. +

+ +

+For more information on the VirtualBox Hypervisor, the User Manual can +be found off-line in file:///usr/share/doc/virtualbox/UserManual.pdf. An +HTML version of the latest revision can be found on the official web +site at https://www.virtualbox.org/manual/UserManual.html. +

+
+
+

12.1. The Test Networks

+
+

+The networks used in the test: +

+ +
+
premises
A NAT Network, simulating the cloud provider's and +campus ISP's networks. This is the only network with DHCP and DNS +services provided by the hypervisor. It is not the default NAT +network because gate and front need to communicate.
+ +
vboxnet0
A Host-only network, simulating the institute's +private Ethernet switch. It has no services, no DHCP, just the host +machine at 192.168.56.10 pretending to be the administrator's +notebook.
+ +
vboxnet1

+Another Host-only network, simulating the tiny +Ethernet between Gate and the campus Wi-Fi access point. It has no +services, no DHCP, just the host at 192.168.57.2. It might one +day have a simulated access point at that address. Currently it is +just an interface for gate's DHCP server to listen on. +

+ +

+In this simulation the IP address for front is not a public +address but a private address on the NAT network premises. Thus +front is not accessible to the administrator's notebook (the +host). To work around this restriction, front gets a second +network interface connected to the vboxnet1 network and used only +for ssh access from the host.5 +

+
+ +

+As in The Hardware, all machines start with their primary Ethernet +adapters attached to the NAT Network premises so that they can +download additional packages. Later, core and gate are moved to +the simulated private Ethernet vboxnet0. +

+ +

+The networks described above are created and "started" with the +following VBoxManage commands. +

+ +
+
VBoxManage natnetwork add --netname premises \
+                          --network 192.168.15.0/24 \
+                          --enable --dhcp on --ipv6 off
+VBoxManage natnetwork start --netname premises
+VBoxManage hostonlyif create # vboxnet0
+VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.10 \
+                                        --dhcp off --ipv6 off
+VBoxManage hostonlyif create # vboxnet1
+VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.2 \
+                                        --dhcp off --ipv6 off
+
+
+ +

+Note that actual ISPs and clouds will provide Gate and Front with +public network addresses but in this simulation "they" provide +addresses in the private 192.168.15.0/24 network. +

+
+
+
+

12.2. The Test Machines

+
+

+The virtual machines are created by VBoxManage command lines in the +following sub-sections. They each start with a recent Debian release +(e.g. debian-11.3.0-amd64-netinst.iso) on the NAT network +premises. As in The Hardware (the preparation process being +simulated), a few additional software packages are installed and +remote access is authorized before the machines are moved to their +final networks, prepared for Ansible. +

+
+
+

12.2.1. A Test Machine

+
+

+The following shell function contains most of the VBoxManage +commands needed to create the test machines. The name of the machine +is taken from the NAME shell variable and the quantity of RAM and +disk space from the RAM and DISK variables. The function creates +a DVD drive on each machine and loads it with a simulated CD of a +recent Debian release. The path to the CD disk image (.iso file) is +taken from the ISO shell variable. +

+ +
+
function create_vm {
+  VBoxManage createvm --name $NAME --ostype Debian_64 --register
+  VBoxManage modifyvm $NAME --memory $RAM
+  VBoxManage createhd --size $DISK \
+                      --filename ~/VirtualBox\ VMs/$NAME/$NAME.vdi
+  VBoxManage storagectl $NAME --name "SATA Controller" \
+                        --add sata --controller IntelAHCI
+  VBoxManage storageattach $NAME --storagectl "SATA Controller" \
+                           --port 0 --device 0 --type hdd \
+                           --medium ~/VirtualBox\ VMs/$NAME/$NAME.vdi
+
+  VBoxManage storagectl $NAME --name "IDE Controller" --add ide
+  VBoxManage storageattach $NAME --storagectl "IDE Controller" \
+      --port 0 --device 0 --type dvddrive --medium $ISO
+  VBoxManage modifyvm $NAME --boot1 dvd --boot2 disk
+  VBoxManage unattended install $NAME --iso=$ISO \
+      --locale en_US --country US \
+      --hostname $NAME.small.private \
+      --user=sysadm --password=fubar \
+      --full-user-name=System\ Administrator
+}
+
+
+ +

+After this shell function creates a VM, its network interface is +typically attached to the NAT network premises, simulating the +Internet connected network where actual hardware will be prepared. +

+ +

+Here are the commands needed to create the test machine front with +512MiB of RAM, 4GiB of disk, and the Debian 11.3.0 release in its +CDROM drive, to put front on the Internet-connected NAT network +premises, and to boot front into the Debian installer. +

+ +
+
NAME=front
+RAM=512
+DISK=4096
+ISO=~/Downloads/debian-11.3.0-amd64-netinst.iso
+create_vm
+VBoxManage modifyvm $NAME --nic1 natnetwork --natnetwork1 premises
+VBoxManage startvm $NAME --type headless
+
+
+ +

+The machine's console should soon show the installer's first prompt: +to choose a system language. (The prompts might be answered by +"preseeding" the Debian installer, but that process has yet to be +debugged.) The appropriate responses to the installer's prompts are +given in the list below. +

+ +
    +
  • Select a language +
      +
    • Language: English - English
    • +
  • +
  • Select your location +
      +
    • Country, territory or area: United States
    • +
  • +
  • Configure the keyboard +
      +
    • Keymap to use: American English
    • +
  • +
  • Configure the network +
      +
    • Hostname: front (gate, core, etc.)
    • +
    • Domain name: small.example.org (small.private)
    • +
  • +
  • Set up users and passwords. +
      +
    • Root password: <blank>
    • +
    • Full name for the new user: System Administrator
    • +
    • Username for your account: sysadm
    • +
    • Choose a password for the new user: fubar
    • +
  • +
  • Configure the clock +
      +
    • Select your time zone: Eastern
    • +
  • +
  • Partition disks +
      +
    • Partitioning method: Guided - use entire disk
    • +
    • Select disk to partition: SCSI3 (0,0,0) (sda) - …
    • +
    • Partitioning scheme: All files in one partition
    • +
    • Finish partitioning and write changes to disk: Continue
    • +
    • Write the changes to disks? Yes
    • +
  • +
  • Install the base system
  • +
  • Configure the package manager +
      +
    • Scan extra installation media? No
    • +
    • Debian archive mirror country: United States
    • +
    • Debian archive mirror: deb.debian.org
    • +
    • HTTP proxy information (blank for none): <blank>
    • +
  • +
  • Configure popularity-contest +
      +
    • Participate in the package usage survey? No
    • +
  • +
  • Software selection +
      +
    • SSH server
    • +
    • standard system utilities
    • +
  • +
  • Install the GRUB boot loader +
      +
    • Install the GRUB boot loader to your primary drive? Yes
    • +
    • Device for boot loader installation: /dev/sda (ata-VBOX…
    • +
  • +
+ +

+After the reboot (first boot into the installed OS) the machine's +console should produce a login: prompt. The administrator logs in +here, with username sysadm and password fubar, before continuing +with the specific machine's preparation (below). +

+
+
+
+

12.2.2. The Test Front Machine

+
+

+The front machine is created with 512MiB of RAM, 4GiB of disk, and +Debian 11.3.0 (recently downloaded) in its CDROM drive. The exact +command lines were given in the previous section. +

+ +

+After Debian is installed (as detailed in A Test Machine) and the +machine rebooted, the administrator logs in and installs several +additional software packages. +

+ +
+
sudo apt install netplan.io expect unattended-upgrades postfix \
+                 dovecot-imapd apache2 openvpn
+
+
+ +

+Note that the Postfix installation may prompt for a couple settings. +The defaults, listed below, are fine, but the system mail name should +be the same as the institute's domain name. +

+ +
    +
  • General type of mail configuration: Internet Site
  • +
  • System mail name: small.example.org
  • +
+ +
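The prompts can also be avoided by preseeding debconf before installing Postfix. A hedged sketch of the selections (fed to sudo debconf-set-selections, assuming the standard Postfix debconf keys):

```text
postfix postfix/main_mailer_type select Internet Site
postfix postfix/mailname string small.example.org
```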

+To make front accessible to the simulated administrator's notebook, +it gets a second network interface attached to the host-only network +vboxnet1 and is given the local address 192.168.57.3. +

+ +
+
VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet1
+
+
+ +

+The second network interface is configured with an IP address via the +Netplan configuration file /etc/netplan/01-testing.yaml, which is +created with the following lines. +

+ +
+
network:
+  ethernets:
+    enp0s8:
+      dhcp4: false
+      addresses: [ 192.168.57.3/24 ]
+
+
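One way to create that file from front's console is a here-document. The sketch below writes to the working directory for illustration; on front the target is /etc/netplan/01-testing.yaml, written with sudo tee.

```shell
# Write the Netplan fragment shown above; YAML is indentation-sensitive,
# so the here-document preserves the two-space nesting exactly.
tee 01-testing.yaml >/dev/null <<'EOF'
network:
  ethernets:
    enp0s8:
      dhcp4: false
      addresses: [ 192.168.57.3/24 ]
EOF
```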
+ +

+The amended Netplan is applied immediately with the following command, +or the machine is rebooted. +

+ +
+
sudo netplan apply
+
+
+ +

+Finally, the administrator authorizes remote access by following the +instructions in the final section: Ansible Test Authorization. +

+
+
+
+

12.2.3. The Test Gate Machine

+
+

+The gate machine is created with the same amount of RAM and disk as +front. Assuming the RAM, DISK, and ISO shell variables have +not changed, gate can be created with two commands, then connected +to the NAT network premises and booted with two more. +

+ +
+
NAME=gate
+create_vm
+VBoxManage modifyvm gate --nic1 natnetwork --natnetwork1 premises
+VBoxManage startvm gate --type headless
+
+
+ +

+After Debian is installed (as detailed in A Test Machine) and the +machine rebooted, the administrator logs in and installs several +additional software packages. +

+ +
+
sudo apt install netplan.io ufw unattended-upgrades postfix \
+                 isc-dhcp-server openvpn
+
+
+ +

+Again, the Postfix installation prompts for a couple settings. The +defaults, listed below, are fine. +

+ +
    +
  • General type of mail configuration: Internet Site
  • +
  • System mail name: gate.small.private
  • +
+ +

+gate can now move to the campus. It is shut down before the +following VBoxManage commands are executed. The commands disconnect +the primary Ethernet interface from premises and +connect it to vboxnet0. The isp and wifi interfaces are also +connected to the simulated ISP and campus wireless access point. +

+ +
+
VBoxManage modifyvm gate --nic1 hostonly
+VBoxManage modifyvm gate --hostonlyadapter1 vboxnet0
+VBoxManage modifyvm gate --nic2 natnetwork --natnetwork2 premises
+VBoxManage modifyvm gate --nic3 hostonly
+VBoxManage modifyvm gate --hostonlyadapter3 vboxnet1
+
+
+ +

+Before rebooting, the MAC addresses of the three network interfaces +should be compared to the example variable settings in hosts. The +values of the gate_lan_mac, gate_wifi_mac, and gate_isp_mac +variables must agree with the MAC addresses assigned to the virtual +machine's network interfaces. The following table assumes device +names that may vary depending on the hypervisor, version, etc. +

+ + + + +++ ++ ++ ++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
devicenetworksimulatingMAC address variable
enp0s3vboxnet0campus Ethernetgate_lan_mac
enp0s8premisescampus ISPgate_isp_mac
enp0s9vboxnet1campus wirelessgate_wifi_mac
+ +
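VBoxManage reports and accepts MAC addresses in a bare hexadecimal form (e.g. 080027DC54B5, as used with --macaddress1 for the dick VM below), while inventory variables conventionally use the colon-separated lowercase form. A small, hypothetical formatter bridges the two:

```shell
# Convert VBoxManage's bare uppercase MAC into the conventional
# colon-separated lowercase form, e.g. for gate_lan_mac in hosts.
mac=080027DC54B5
echo "$mac" | tr 'A-F' 'a-f' | sed 's/../&:/g; s/:$//'
```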

+After gate boots up with its new network connections, the primary +Ethernet interface is temporarily configured with an IP address. +(Ansible will install a Netplan soon.) +

+ +
+
sudo ip address add 192.168.56.2/24 dev enp0s3
+
+
+ +

+Finally, the administrator authorizes remote access by following the +instructions in the final section: Ansible Test Authorization. +

+
+
+
+

12.2.4. The Test Core Machine

+
+

+The core machine is created with 2GiB of RAM and 6GiB of disk. +Assuming the ISO shell variable has not changed, core can be +created with the following commands. +

+ +
+
NAME=core
+RAM=2048
+DISK=6144
+create_vm
+VBoxManage modifyvm core --nic1 natnetwork --natnetwork1 premises
+VBoxManage startvm core --type headless
+
+
+ +

+After Debian is installed (as detailed in A Test Machine) and the +machine rebooted, the administrator logs in and installs several +additional software packages. +

+ +
+
sudo apt install netplan.io unattended-upgrades postfix \
+                 isc-dhcp-server bind9 fetchmail gnupg \
+                 expect dovecot-imapd apache2 openvpn
+
+
+ +

+Again, the Postfix installation prompts for a couple settings. The +defaults, listed below, are fine. +

+ +
    +
  • General type of mail configuration: Internet Site
  • +
  • System mail name: core.small.private
  • +
+ +

+core can now move to the campus. It is shut down before the +following VBoxManage command is executed. The command connects the +machine's NIC to vboxnet0, which simulates the campus's private +Ethernet. +

+ +
+
VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0
+
+
+ +

+After core boots up with its new network connection, its primary NIC +is temporarily configured with an IP address and default route (to +gate). (Ansible will install a Netplan soon.) +

+ +
+
sudo ip address add 192.168.56.1/24 dev enp0s3
+sudo ip route add default via 192.168.56.2 dev enp0s3
+
+
+ +

+Finally, the administrator authorizes remote access by following the +instructions in the next section: Ansible Test Authorization. +

+
+
+
+

12.2.5. Ansible Test Authorization

+
+

+Before Ansible can configure the three test machines, they must allow +remote access to their sysadm accounts. The administrator must use +IP addresses to copy the public key to each test machine. +

+ +
+
SRC=Secret/ssh_admin/id_rsa.pub
+scp $SRC sysadm@192.168.56.1:admin_key # Core
+scp $SRC sysadm@192.168.56.2:admin_key # Gate
+scp $SRC sysadm@192.168.57.3:admin_key # Front
+
+
+ +

+Then the key must be installed on each machine with the following +command line (entered at each console, or in an SSH session with +each machine). +

+ +
+
( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys )
+
+
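The umask 077 is not just habit: with its default StrictModes setting, sshd rejects an authorized_keys file whose file or containing directory is writable by anyone but the owner. The resulting modes can be checked in a scratch directory (the key content here is a placeholder):

```shell
# Simulate the key installation and verify the modes umask 077
# produces: 700 for the directory, 600 for the key file.
( umask 077
  mkdir -p demo_home/.ssh
  echo 'ssh-rsa AAAA... sysadm@notebook' > demo_home/.ssh/authorized_keys )
stat -c '%a %n' demo_home/.ssh demo_home/.ssh/authorized_keys
```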
+
+
+
+
+

12.3. The Test Ansible Configuration

+
+

+At this point the three test machines core, gate, and front are +running fresh Debian systems with select additional packages, on their +final networks, with a privileged account named sysadm that +authorizes password-less access from the administrator's notebook, +ready to be configured by Ansible. +

+
+
+
+

12.4. Configure Test Machines

+
+

+To configure the test machines, the ./inst config command is +executed and core restarted. Note that this first run should +exercise all of the handlers, and that subsequent runs probably do +not. +

+
+
+
+

12.5. Test Basics

+
+

+At this point the test institute is just core, gate and front, +no other campus servers, no members nor their VPN client devices. On +each machine, Systemd should assess the system's state as running +with 0 failed units. +

+ +
+
systemctl status
+
+
+ +

+gate and thus core should be able to reach the Internet and +front. If core can reach the Internet and front, then gate is +forwarding (and NATing). On core (and gate): +

+ +
+
ping -c 1 8.8.4.4      # dns.google
+ping -c 1 192.168.15.5 # front_addr
+
+
+ +

+gate and thus core should be able to resolve internal and public +domain names. (Front does not use the institute's internal domain +names yet.) On core (and gate): +

+ +
+
host dns.google
+host core.small.private
+host www
+
+
+ +

+The last resort email address, root, should deliver to the +administrator's account. On core, gate and front: +

+ +
+
/sbin/sendmail root
+Testing email to root.
+.
+
+
+ +

+Two messages, from core and gate, should appear in +/home/sysadm/Maildir/new/ on core in just a couple seconds. The +message from front should be delivered to the same directory but on +front. While members' emails are automatically fetched (with +fetchmail(1)) to core, the system administrator is expected to +fetch system emails directly to their desktop (and to give them +instant attention). +

+
+
+
+

12.6. The Test Nextcloud

+
+

+Further tests involve Nextcloud account management. Nextcloud is +installed on core as described in Configure Nextcloud. Once +/Nextcloud/ is created, ./inst config core will validate +or update its configuration files. +

+ +

+The administrator will need a desktop system in the test campus +networks (using the campus name server). The test Nextcloud +configuration requires that it be accessed with the domain name +core.small.private. The following sections describe how a client +desktop is simulated and connected to the test VPNs (and test campus +name server). Its browser can then connect to core.small.private to +exercise the test Nextcloud. +

+ +

+The process starts with enrolling the first member of the institute +using the ./inst new command and issuing client VPN keys with the +./inst client command. +

+
+
+
+

12.7. Test New Command

+
+

+A member must be enrolled so that a member's client machine can be +authorized and then test the VPNs, Nextcloud, and the web sites. +The first member enrolled in the simulated institute is New Hampshire +innkeeper Dick Loudon. Mr. Loudon's accounts on institute servers are +named dick, as is his notebook. +

+ +
+
./inst new dick
+
+
+ +

+Take note of Dick's initial password. +

+
+
+
+

12.8. The Test Member Notebook

+
+

+A test member's notebook is created next, much like the servers, +except with more memory and disk space, 2GiB and 8GiB, and a +desktop. This machine is not configured by Ansible. Rather, its +desktop VPN client and web browser test the OpenVPN configurations on +gate and front, and the Nextcloud installation on core. +

+ +
+
NAME=dick
+RAM=2048
+DISK=8192
+create_vm
+VBoxManage modifyvm $NAME --nic1 hostonly --hostonlyadapter1 vboxnet1
+VBoxManage modifyvm $NAME --macaddress1 080027dc54b5
+VBoxManage startvm $NAME --type headless
+
+
+ +

+Dick's notebook, dick, is initially connected to the host-only +network vboxnet1 as though it were the campus wireless access point. +It simulates a member's notebook on campus, connected to (NATed +behind) the access point. +

+ +

+Debian is installed much as detailed in A Test Machine except that +the SSH server option is not needed and the GNOME desktop option +is. When the machine reboots, the administrator logs into the +desktop and installs a couple additional software packages (which +require several more). +

+ +
+
sudo apt install network-manager-openvpn-gnome \
+		 openvpn-systemd-resolved \
+		 nextcloud-desktop evolution
+
+
+
+
+
+

12.9. Test Client Command

+
+

+The ./inst client command is used to issue keys for the institute's +VPNs. The following command generates two .ovpn (OpenVPN +configuration) files, public.ovpn and campus.ovpn, authorizing +access by the holder, identified as dick, owned by member dick, to +the test VPNs. +

+ +
+
./inst client debian dick dick
+
+
+
+
+
+

12.10. Test Campus VPN

+
+

+The campus.ovpn OpenVPN configuration file (generated in Test Client +Command) is transferred to dick, which is at the Wi-Fi access +point's wifi_wan_addr. +

+ +
+
scp *.ovpn sysadm@192.168.57.2:
+
+
+ +

+The file is installed using the Network tab of the desktop Settings +app. The administrator uses the "+" button, chooses "Import from +file…" and the campus.ovpn file. Importantly, the administrator +checks the "Use this connection only for resources on its network" +checkbox in the IPv4 tab of the Add VPN dialog. The admin does the +same with the public.ovpn file, for use on the simulated Internet. +

+ +

+The administrator turns on the campus VPN on dick (which connects +instantly) and does a few basic tests in a terminal. +

+ +
+
systemctl status
+ping -c 1 8.8.4.4      # dns.google
+ping -c 1 192.168.56.1 # core
+host dns.google
+host core.small.private
+host www
+
+
+
+
+
+

12.11. Test Web Pages

+
+

+Next, the administrator copies Backup/WWW/ (included in the +distribution) to /WWW/ on core and sets the file permissions +appropriately. +

+ +
+
sudo chown -R sysadm:staff /WWW/campus
+sudo chown -R monkey:staff /WWW/live /WWW/test
+sudo chmod 02775 /WWW/*
+sudo chmod 664 /WWW/*/index.html
+
+
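The leading 2 in 02775 is the setgid bit: files later created in these directories inherit the directory's group (staff here), keeping the web directories group-maintainable. A scratch-directory check of the modes being set:

```shell
# 02775: setgid, group-writable directory; 664: group-writable pages.
mkdir -p demo_www/live && chmod 02775 demo_www/live
touch demo_www/live/index.html && chmod 664 demo_www/live/index.html
stat -c '%a %n' demo_www/live demo_www/live/index.html
```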
+ +

+The administrator then uses Firefox on dick to fetch the following +URLs. They should all succeed and the content should be a simple +sentence identifying the source file. +

+ +
    +
  • http://www/
  • +
  • http://www.small.private/
  • +
  • http://live/
  • +
  • http://live.small.private/
  • +
  • http://test/
  • +
  • http://test.small.private/
  • +
  • http://small.example.org/
  • +
+ +

+The last URL should re-direct to https://small.example.org/, which +uses a certificate (self-)signed by an unknown authority. Firefox +will warn but allow the luser to continue. +

+
+
+
+

12.12. Test Web Update

+
+

+Modify /WWW/live/index.html on core and wait 15 minutes for it to +appear as https://small.example.org/ (and in /home/www/index.html +on front). +

+ +

+Hack /home/www/index.html on front and observe the result at +https://small.example.org/. Wait 15 minutes for the correction. +

+
+
+
+

12.13. Test Nextcloud

+
+

+Nextcloud is typically installed and configured after the first +Ansible run, when core has Internet access via gate. Until the +installation directory /Nextcloud/nextcloud/ appears, the Ansible +code skips parts of the Nextcloud configuration. The same +installation (or restoration) process used on Core is used on core +to create /Nextcloud/. The process starts with Create +/Nextcloud/, involves Restore Nextcloud or Install Nextcloud, +and runs ./inst config core again (section 7.25.6). When the ./inst +config core command is happy with the Nextcloud configuration on +core, the administrator uses Dick's notebook to test it, performing +the following tests on dick's desktop. +

+ +
    +
  • Use a web browser to get http://core/nextcloud/. It should be a +warning about accessing Nextcloud by an untrusted name.
  • + +
  • Get http://core.small.private/nextcloud/. It should be a +login web page.
  • + +
  • Login as sysadm with password fubar.
  • + +
  • Examine the security & setup warnings in the Settings > +Administration > Overview web page. A few minor warnings are +expected (besides the admonishment about using http rather than +https).
  • + +
  • Download and enable Calendar and Contacts in the Apps > Featured web +page.
  • + +
  • Logout and login as dick with Dick's initial password (noted +above).
  • + +
  • Use the Nextcloud app to sync ~/nextCloud/ with the cloud. In the +Nextcloud app's Connection Wizard (the initial dialog), choose to +"Log in to your Nextcloud" with the URL +http://core.small.private/nextcloud. The web browser should pop +up with a new tab: "Connect to your account". Press "Log in" and +"Grant access". The Nextcloud Connection Wizard then prompts for +sync parameters. The defaults are fine. Presumably the Local +Folder is /home/sysadm/Nextcloud/.
  • + +
  • Drop a file in ~/Nextcloud/, use the app to force a sync, and find +the file in the Files web page.
  • + +
  • +Create a Mail account in Evolution. This step does not involve +Nextcloud, but placates Evolution's Welcome Wizard, and follows in +the steps of the newly institutionalized luser. CardDAV and CalDAV +accounts can be created in Evolution later. +

    + +

    +The account's full name is Dick Loudon and its email address is +dick@small.example.org. The Receiving Email Server Type is IMAP, +its name is mail.small.private and it uses the IMAPS port +(993). The Username on the server is dick. The encryption method +is TLS on a dedicated port. Authentication is by password. The +Receiving Option defaults are fine. The Sending Email Server Type +is SMTP with the name smtp.small.private using the default +SMTP port (25). It requires neither authentication nor encryption. +

    + +

    +At some point Evolution will find that the server certificate is +self-signed and unknown. It must be accepted (permanently). +

  • + +
  • Create a CardDAV account in Evolution. Choose Edit, Accounts, Add, +Address Book, Type CardDAV, name Small Institute, and user dick. +The URL starts with http://core.small.private/nextcloud/ and +ends with remote.php/dav/addressbooks/users/dick/contacts/ (yeah, +88 characters!). Create a contact in the new address book and see +it in the Contacts web page. At some point Evolution will need +Dick's password to access the address book.
  • + +
  • Create a CalDAV account in Evolution just like the CardDAV account +except add a Calendar account of Type CalDAV with a URL that ends +remote.php/dav/calendars/dick/personal/ (only 79 characters). +Create an event in the new calendar and see it in the Calendar web +page. At some point Evolution will need Dick's password to access +the calendar.
  • +
+
+
+
+

12.14. Test Email

+
+

+With Evolution running on the member notebook dick, one second email +delivery can be demonstrated. The administrator runs the following +commands on front +

+ +
+
/sbin/sendmail dick
+Subject: Hello, Dick.
+
+How are you?
+.
+
+
+ +

+and sees a notification on dick's desktop in a second or less. +

+ +

+Outgoing email is also tested. A message to +sysadm@small.example.org should be delivered to +/home/sysadm/Maildir/new/ on front just as fast. +

+
+
+
+

12.15. Test Public VPN

+
+

+At this point, dick can move abroad, from the campus Wi-Fi +(host-only network vboxnet1) to the broader Internet (the NAT +network premises). The following command makes the change. The +machine does not need to be shut down. +

+ +
+
VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
+
+
+ +

+The administrator might wait to see evidence of the change in +networks. Evolution may start "Testing reachability of mail account +dick@small.example.org." Eventually, the campus VPN should +disconnect. After it does, the administrator turns on the public +VPN, which connects in a second or two. Again, some basics are +tested in a terminal. +

+ +
+
ping -c 1 8.8.4.4      # dns.google
+ping -c 1 192.168.56.1 # core
+host dns.google
+host core.small.private
+host www
+
+
+ +

+And these web pages are fetched with a browser. +

+ + + +

+The Nextcloud web pages too should still be refresh-able, editable, +and Evolution should still be able to edit messages, contacts and +calendar events. +

+
+
+
+

12.16. Test Pass Command

+
+

+To test the ./inst pass command, the administrator logs in to core +as dick and runs passwd. A random password is entered, more +obscure than fubar (else Nextcloud will reject it!). The +administrator then finds the password change request message in the +most recent file in /home/sysadm/Maildir/new/ and pipes it to the +./inst pass command. The administrator might do that by copying the +message to a more conveniently named temporary file on core, +e.g. ~/msg, copying that to the current directory on the notebook, +and feeding it to ./inst pass on its standard input. +

+ +

+On core, logged in as sysadm: +

+ +
+
( cd ~/Maildir/new/
+  cp `ls -1t | head -1` ~/msg )
+grep Subject: ~/msg
+
+
+ +

+To ensure that the most recent message is indeed the password change +request, the last command should find the line Subject: New +password.. Then on the administrator's notebook: +

+ +
+
scp sysadm@192.168.56.1:msg ./
+./inst pass < msg
+
+
+ +

+The last command should complete without error. +

+ +

+Finally, the administrator verifies that dick can login on core, +front and Nextcloud with the new password. +

+
+
+
+

12.17. Test Old Command

+
+

+One more institute command is left to exercise. The administrator +retires dick and his main device dick. +

+ +
+
./inst old dick
+
+
+ +

+The administrator tests Dick's access to core, front and +Nextcloud, and attempts to re-connect the public VPN. All of these +should fail. +

+
+
+
+
+

13. Future Work

+
+

+The small institute's network, as currently defined in this document, +is lacking in a number of respects. +

+
+
+

13.1. Deficiencies

+
+

+The current network monitoring is rudimentary. It could use some +love, like intrusion detection via Snort or similar. Services on +Front are not monitored except that the webupdate script should be +emailing sysadm whenever it cannot update Front. +

+ +

+Pro-active monitoring might include notifying root of any vandalism +corrected by Monkey's quarter-hourly web update. This is a +non-trivial task that must ignore intentional changes and save suspect +changes. +

+ +

+Monkey's cron jobs on Core should presumably become systemd.timer +and .service units. +
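Such a unit pair might look like the following sketch (unit and script names hypothetical; the quarter-hourly schedule matches the current cron job):

```text
# /etc/systemd/system/webupdate.service (hypothetical)
[Unit]
Description=Push /WWW/live to Front

[Service]
Type=oneshot
User=monkey
ExecStart=/usr/local/sbin/webupdate

# /etc/systemd/system/webupdate.timer (hypothetical)
[Unit]
Description=Quarter-hourly web update

[Timer]
OnCalendar=*:00/15
Persistent=true

[Install]
WantedBy=timers.target
```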

+ +

+The institute's private domain names (e.g. www.small.private) are +not resolvable on Front. Reverse domains (86.177.10.in-addr.arpa) +mapping institute network addresses back to names in the private +domain small.private work only on the campus Ethernet. These nits +might be picked when OpenVPN supports the DHCP option +rdnss-selection (RFC6731), or with hard-coded resolvectl commands. +

+ +

+The ./inst old dick command does not break VPN connections to Dick's +clients. New connections cannot be created, but old connections can +continue to work for some time. +

+ +

+The ./inst client android dick-phone dick command generates .ovpn +files that require the member to remember to check the "Use this +connection only for resources on its network" box in the IPv4 tab of +the Add VPN dialog. The ./inst client command should include a +setting in the Debian .ovpn files that NetworkManager will recognize +as the desired setting. +

+ +

+The VPN service is overly complex. The OpenVPN 2.4.7 clients allow +multiple server addresses, but the openvpn(8) manual page suggests +per connection parameters are a restricted set that does not include +the essential verify-x509-name. Use the same name on separate +certificates for Gate and Front? Use the same certificate and key on +Gate and Front? +

+ +

+Nextcloud should really be found at https://CLOUD.small.private/ +rather than https://core.small.private/nextcloud/, to ease +future expansion (moving services to additional machines). +

+ +

+HTTPS could be used for Nextcloud transactions even though they are +carried on encrypted VPNs. This would eliminate a big warning on the +Nextcloud Administration Overview page. +

+
+
+
+

13.2. More Tests

+
+

+The testing process described in the previous chapter is far from +complete. Additional tests are needed. +

+
+
+

13.2.1. Backup

+
+

+The backup command has not been tested. It needs an encrypted +partition with which to sync? And then some way to compare that to +Backup/? +

+
+
+
+

13.2.2. Restore

+
+

+The restore process has not been tested. It might just copy Backup/ +to core:/, but then it probably needs to fix up file ownerships, +perhaps permissions too. It could also use an example +Backup/Nextcloud/20220622.bak. +

+
+
+
+

13.2.3. Campus Disconnect

+
+

+Email access (IMAPS) on front is… difficult to test unless +core's fetchmails are disconnected, i.e. the whole campus is +disconnected, so that new email stays on front long enough to be +seen. +

+ +
    +
  • Disconnect gate's NIC #2.
  • +
  • Send email to dick@small.example.org.
  • +
  • Find it in /home/dick/Maildir/new/.
  • +
  • Re-configure Evolution on dick. Edit the dick@small.example.org
+mail account (or create a new one?) so that the Receiving Email
+Server name is 192.168.15.5, not mail.small.private. The
+latter domain name will not work while the campus is disconnected.
+In actual use (with Front, not front), the institute domain name
+could be used.
  • +
+
+
+
+
+
+

14. Appendix: The Bootstrap

+
+

+Creating the private network from whole cloth (machines with recent +standard distributions installed) is not straightforward. +

+ +

+Standard distributions do not include all of the necessary server +software, esp. isc-dhcp-server and bind9 for critical localnet +services. These are typically downloaded from the Internet. +
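+Once a host can reach the Internet, the missing packages are easily
+installed, e.g.:
+
+sudo apt install isc-dhcp-server bind9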

+ +

+To access the Internet, Core needs a default route to Gate, Gate
+needs to forward with NAT to an ISP, and Core needs to query the ISP
+for names, etc.: quite a bit of temporary, manual localnet
+configuration just to get to the additional packages.

+
+
+

14.1. The Current Strategy

+
+

+The strategy pursued in The Hardware is two-phase: prepare the
+servers on the Internet where additional packages are accessible,
+then connect them to the campus facilities (the private Ethernet
+switch, Wi-Fi AP, ISP), manually configure IP addresses (while the
+DHCP client silently fails), and avoid names until BIND9 is
+configured.

+
+
+
+

14.2. Starting With Gate

+
+

+The strategy of Starting With Gate concentrates on configuring Gate's
+connection to the campus ISP in the hope of allowing all hosts to
+download additional packages. This seems to require manual
+configuration of Core or a standard rendezvous.

+ +
    +
  • Connect Gate to ISP, e.g. apartment WAN via Wi-Fi or Ethernet.
  • +
  • +Connect Gate to private Ethernet switch. +

    +
    +sudo ip address add GATE dev ISPDEV
    +
  • +
  • Configure Gate to NAT from private Ethernet.
  • +
  • +Configure Gate to serve DHCP on Ethernet, temporarily! +

    +
      +
    • Push default route through Gate, DNS from 8.8.8.8.
    • +
    +

    +Or statically configure Core with address, route, and name server. +

    +
    +sudo ip address add CORE dev PRIVETH
    +sudo ip route add default via GATE
+sudo sh -c 'echo "nameserver 8.8.8.8" >/etc/resolv.conf'
    +
  • +
  • Configure admin's notebook similarly?
  • +
  • Test remote access from administrator's notebook.
  • +
  • +Finally, configure Gate and Core. +

    +
    +ansible-playbook -l gate site.yml
    +ansible-playbook -l core site.yml
    +
  • +
+
+
+
+

14.3. Pre-provision With Ansible

+
+

+A refinement of the current strategy might avoid the need to maintain +(and test!) lists of "additional" packages. With Core and Gate and +the admin's notebook all together on a café Wi-Fi, Ansible might be +configured (e.g. tasks tagged) to just install the necessary +packages. The administrator would put Core's and Gate's localnet IP +addresses in Ansible's inventory file, then run just the Ansible tasks +tagged base-install, leaving the new services in a decent (secure, +innocuous, disabled) default state. +

+ +
+ansible-playbook -l core -t base-install site.yml
+ansible-playbook -l gate -t base-install site.yml
+
+
+
+
+
+

Footnotes:

+
+ +
1

+The recommended private top-level domains are listed in +"Appendix G. Private DNS Namespaces" of RFC6762 (Multicast DNS). link +

+ +
2

+Why not create a role named all and put these tasks that are +the same on all machines in that role? If there were more than a +stable handful, and no tangling mechanism to do the duplication, a +catch-all role would be a higher priority. +

+ +
3

+The cipher set specified by Let's Encrypt is large enough to +turn orange many parts of an SSL Report from Qualys SSL Labs. +

+ +
4

+Presumably, eventually, a former member's home directories are +archived to external storage, their other files are given new +ownerships, and their Unix accounts are deleted. This has never been +done, and is left as a manual exercise. +

+ +
5

+Front is accessible via Gate but routing from the host address +on vboxnet0 through Gate requires extensive interference with the +routes on Front and Gate, making the simulation less… similar. +

+ + +
+
+
+

Author: Matt Birkholz

+

Created: 2023-12-17 Sun 16:05

+

Validate

+
+ + diff --git a/README.org b/README.org new file mode 100644 index 0000000..028b9b0 --- /dev/null +++ b/README.org @@ -0,0 +1,7680 @@ +#+TITLE: A Small Institute + +The Ansible scripts herein configure a small institute's hosts +according to their roles in the institute's network of public and +private servers. The network topology allows the institute to present +an expendable public face (easily wiped clean) while maintaining a +secure and private campus that can function with or without the +Internet. + +* Overview + +This small institute has a public server on the Internet, Front, that +handles the institute's email, web site, and cloud. Front is small, +cheap, and expendable, contains only public information, and functions +mostly as a VPN server relaying to a campus network. + +The campus network is one or more machines physically connected via +Ethernet (or a similarly secure medium) for private, un-encrypted +communication in a core locality. One of the machines on this +Ethernet is Core, the small institute's main server. Core provides a +number of essential localnet services (DHCP, DNS, NTP), and a private, +campus web site. It is also the home of the institute cloud and is +where all of the institute's data actually reside. When the campus +ISP (Internet Service Provider) is connected, a separate host, Gate, +routes campus traffic to the ISP (via NAT). Through Gate, Core +connects to Front making the institute email, cloud, etc. available to +members off campus. + +# Note that part of this diagram appears in The Gate Machine, which +# should be kept up-to-date with changes made to this diagram. 
+ +#+BEGIN_EXAMPLE + = + _|||_ + =-The-Institute-= + = = = = + = = = = + =====-Front-===== + | + ----------------- + ( ) + ( The Internet(s) )----(Hotel Wi-Fi) + ( ) | + ----------------- +----Member's notebook off campus + | + =============== | ================================================== + | Premises + (Campus ISP) + | +----Member's notebook on campus + | | + | +----(Campus Wi-Fi) + | | + ============== Gate ================================================ + | Private + +----Ethernet switch + | + +----Core + +----Servers (NAS, DVR, etc.) +#+END_EXAMPLE + +Members of the institute use commodity notebooks and open source +desktops. When off campus, members access institute resources via the +VPN on Front (via hotel Wi-Fi). When /on/ campus, members can use the +much faster and always available (despite Internet connectivity +issues) VPN on Gate (via campus Wi-Fi). A member's Android phones and +devices can use the same Wi-Fis, VPNs (via the OpenVPN app) and +services. On a desktop or by phone, at home or abroad, members can +access their email and the institute's private web and cloud. + +The institute email service reliably delivers messages in seconds, so +it is the main mode of communication amongst the membership, which +uses OpenPGP encryption to secure message content. + + +* Caveats + +This small institute prizes its privacy, so there is little or no +accommodation for spyware (aka commercial software). The members of +the institute are dedicated to refining good tools, making the best +use of software that does not need nor want our hearts, our money, nor +even our attention. + +Unlike a commercial cloud service with redundant hardware and multiple +ISPs, Gate is a real choke point. When Gate cannot reach the +Internet, members abroad will not be able to reach Core, their email +folders, nor the institute cloud. They /can/ chat privately with +other members abroad or consult the public web site on Front. 
Members +/on/ campus will have their email and cloud, but no Internet and thus +no new email and no chat with members abroad. Keeping our data on +campus means we can keep operating without the Internet /if we are on +campus/. + +Keeping your data secure on campus, not on the Internet, means when +your campus goes up in smoke, so does your data, unless you made +an off-site (or at least fire-safe!) backup copy. + +Security and privacy are the focus of the network architecture and +configuration, /not/ anonymity. There is no support for Tor. The +VPNs do /not/ funnel /all/ Internet traffic through anonymizing +services. They do not try to defeat geo-fencing. + +This is not a showcase of the latest technologies. It is not expected +to change except slowly. + +The services are intended for the SOHO (small office, home office, 4-H +chapter, medical clinic, gun-running biker gang, etc.) with a small, +fairly static membership. Front can be small and cheap (10USD per +month) because of this assumption. + + +* The Services + +The small institute's network is designed to provide a number of +services. An understanding of how institute hosts co-operate is +essential to understanding the configuration of specific hosts. This +chapter covers institute services from a network wide perspective, and +gets right down in its subsections to the Ansible code that enforces +its policies. On first reading, those subsections should be skipped; +they reference particulars first introduced in the following chapter. + +** The Name Service + +The institute has a public domain, e.g. ~small.example.org~, and a +private domain, e.g. ~small.private~. The public has access only to +the former and, as currently configured, to only one address (~A~ +record): Front's public IP address. Members connected to the campus, +via wire or VPN, use the campus name server which can resolve +institute private domain names like ~core.small.private~. 
If +~small.private~ is also used as a search domain, members can use short +names like ~core~. + +** The Email Service + +Front provides the public SMTP (Simple Mail Transfer Protocol) service +that accepts email from the Internet, delivering messages addressed to +the institute's domain name, e.g. to ~postmaster@small.example.org~. +Its Postfix server accepts email for member accounts and any public +aliases (e.g. ~postmaster~). Messages are delivered to member +=~/Maildir/= directories via Dovecot. + +If the campus is connected to the Internet, the new messages are +quickly picked up by Core and stored in member =~/Maildir/= +directories there. Securely stored on Core, members can decrypt and +sort their email using common, IMAP-based tools. (Most mail apps can +use IMAP, the Internet Message Access Protocol.) + +Core transfers messages from Front using Fetchmail's ~--idle~ option, +which instructs Fetchmail to maintain a connection to Front so that it +can (with good campus connectivity) get notifications to pick up new +email. Members of the institute typically employ email apps that work +similarly, alerting them to new email on Core. Thus members enjoy +email messages that arrive as fast as text messages (but with the +option of real, end-to-end encryption). + +If the campus loses connectivity to the Internet, new email +accumulates in =~/Maildir/= directories on Front. If a member is +abroad, with Internet access, their /new/ emails can be accessed via +Front's IMAPS (IMAP Secured [with SSL/TLS]) service, available at the +institute domain name. When the campus regains Internet connectivity, +Core will collect the new email. + +Core is the campus mail hub, securely storing members' incoming +emails, and relaying their outgoing emails. It is the "smarthost" for +the campus. Campus machines send all outgoing email to Core, and +Core's Postfix server accepts messages from any of the institute's +networks. 
+ +Core delivers messages addressed to internal host names locally. For +example ~webmaster@test.small.private~ is delivered to ~webmaster~ on +Core. Core relays other messages to its smarthost, Front, which is +declared by the institute's SPF (Sender Policy Framework) DNS record +to be the only legitimate sender of institute emails. Thus the +Internet sees the institute's outgoing email coming from a server at +an address matching the domain's SPF record. The institute does /not/ +sign outgoing emails per DKIM (Domain Keys Identified Mail), yet. + +#+CAPTION: Example Small Institute SPF Record +#+BEGIN_SRC conf +TXT v=spf1 ip4:159.65.75.60 -all +#+END_SRC + +There are a number of configuration settings that, for +interoperability, should be in agreement on the Postfix servers and +the campus clients. Policy also requires certain settings on both +Postfix or both Dovecot servers. To ensure that the same settings are +applied on both, the shared settings are defined here and included via +noweb reference in the server configurations. For example the Postfix +setting for the maximum message size is given in a code block labeled +~postfix-message-size~ below and then included in both Postfix +configurations wherever ~<>~ appears. + +*** The Postfix Configurations + +The institute aims to accommodate encrypted email containing short +videos, messages that can quickly exceed the default limit of 9.77MiB, +so the institute uses a limit 10 times greater than the default, +100MiB. Front should always have several gigabytes free to spool a +modest number (several 10s) of maximally sized messages. Furthermore +a maxi-message's time in the spool is nominally a few seconds, after +which it moves on to Core (the big disks). This Postfix setting +should be the same throughout the institute, so that all hosts can +handle maxi-messages. 
+ +#+NAME: postfix-message-size +#+CAPTION: ~postfix-message-size~ +#+BEGIN_SRC conf +- { p: message_size_limit, v: 104857600 } +#+END_SRC + +Queue warning and bounce times were shortened at the institute. Email +should be delivered in seconds. If it cannot be delivered in an hour, +the recipient has been cut off, and a warning is appropriate. If it +cannot be delivered in 4 hours, the information in the message is +probably stale and further attempts to deliver it have limited and +diminishing value. The sender should decide whether to continue by +re-sending the bounce (or just grabbing the go-bag!). + +#+NAME: postfix-queue-times +#+CAPTION: ~postfix-queue-times~ +#+BEGIN_SRC conf +- { p: delay_warning_time, v: 1h } +- { p: maximal_queue_lifetime, v: 4h } +- { p: bounce_queue_lifetime, v: 4h } +#+END_SRC + +The Debian default Postfix configuration enables SASL authenticated +relaying and opportunistic TLS with a self-signed, "snake oil" +certificate. The institute substitutes its own certificates and +disables relaying (other than for the local networks). + +#+NAME: postfix-relaying +#+CAPTION: ~postfix-relaying~ +#+BEGIN_SRC conf +- p: smtpd_relay_restrictions + v: permit_mynetworks reject_unauth_destination +#+END_SRC + +Dovecot is configured to store emails in each member's =~/Maildir/=. +The same instruction is given to Postfix for the belt-and-suspenders +effect. + +#+NAME: postfix-maildir +#+CAPTION: ~postfix-maildir~ +#+BEGIN_SRC conf +- { p: home_mailbox, v: Maildir/ } +#+END_SRC + +The complete Postfix configurations for Front and Core use these +common settings as well as several host-specific settings as discussed +in the respective roles below. + +*** The Dovecot Configurations + +The Dovecot settings on both Front and Core disable POP and require +TLS. + +The official documentation for Dovecot once was a Wiki but now is +[[https://doc.dovecot.org]], yet the Wiki is still distributed in +=/usr/share/doc/dovecot-core/wiki/=. 
+ +#+NAME: dovecot-tls +#+CAPTION: ~dovecot-tls~ +#+BEGIN_SRC conf +protocols = imap +ssl = required +#+END_SRC + +Both servers should accept only IMAPS connections. The following +configuration keeps them from even listening at the IMAP port +(e.g. for ~STARTTLS~ commands). + +#+CAPTION: ~dovecot-ports~ +#+NAME: dovecot-ports +#+BEGIN_SRC conf +service imap-login { + inet_listener imap { + port = 0 + } +} +#+END_SRC + +Both Dovecot servers store member email in members' local =~/Maildir/= +directories. + +#+NAME: dovecot-maildir +#+CAPTION: ~dovecot-maildir~ +#+BEGIN_SRC conf +mail_location = maildir:~/Maildir +#+END_SRC + +The complete Dovecot configurations for Front and Core use these +common settings with host specific settings for ~ssl_cert~ and +~ssl_key~. + +** The Web Services + +Front provides the public HTTP service that serves institute web pages +at e.g. ~https://small.example.org/~. The small institute initially +runs with a self-signed, "snake oil" server certificate, causing +browsers to warn of possible fraud, but this certificate is easily +replaced by one signed by a recognized authority, as discussed in [[*The Front Role][The +Front Role]]. + +The Apache2 server finds its web pages in the =/home/www/= directory +tree. Pages can /also/ come from member home directories. For +example the HTML for ~https://small.example.org/~member~ would come +from the =/home/member/Public/HTML/index.html= file. + +The server does not run CGI scripts. This keeps Front's CPU +requirements cheap. CGI scripts /can/ be used on Core. Indeed +Nextcloud on Core uses PHP and the whole LAMP (Linux, Apache, MySQL, +PHP) stack. + +Core provides a campus HTTP service with several virtual hosts. +These web sites can only be accessed via the campus Ethernet or an +institute VPN. In either situation Core's many private domain names +become available, e.g. =www.small.private=. In many cases these +domain names can be shortened e.g. to =www=. 
Thus the campus home +page is accessible in a dozen keystrokes: ~http://www/~ (plus Enter). + +Core's web sites: + + - ~http://www/~ :: is the small institute's campus web site. It + serves files from the staff-writable =/WWW/campus/= directory + tree. + - ~http://live/~ :: is a local copy of the institute's public web + site. It serves the files in the =/WWW/live/= directory tree, + which is mirrored to Front. + - ~http://test/~ :: is a test copy of the institute's public web + site. It tests new web designs in the =/WWW/test/= directory + tree. Changes here are merged into the live tree, =/WWW/live/=, + once they are complete and tested. + - ~http://core/~ :: is the Debian default site. The institute does + not munge this site, to avoid conflicts with Debian-packaged web + services (e.g. Nextcloud, Zoneminder, MythTV's MythWeb). + +Core runs a cron job under a system account named ~monkey~ that +mirrors =/WWW/live/= to Front's =/home/www/= every 15 minutes. +Vandalism on Front should not be possible, but if it happens Monkey +will automatically wipe it within 15 minutes. + +** The Cloud Service + +Core runs Nextcloud to provide a private institute cloud at +~http://core.small.private/nextcloud/~. It is managed manually per +[[https://docs.nextcloud.com/server/latest/admin_manual/][The Nextcloud Server Administration Guide]]. The code /and/ data, +including especially database dumps, are stored in =/Nextcloud/= which +is included in Core's backup procedure as described in [[*Backups][Backups]]. The +default Apache2 configuration expects to find the web scripts in +=/var/www/nextcloud/=, so the institute symbolically links this to +=/Nextcloud/nextcloud/=. + +Note that authenticating to a non-HTTPS URL like +~http://core.small.private/~ is often called out as insecure, but the +domain name is private and the service is on a directly connected +private network. 
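+The symbolic link mentioned above might be created with a command
+like the following (a sketch of the manual step described above):
+
+#+BEGIN_SRC sh
+# Apache2 expects the web scripts in /var/www/nextcloud/; the code
+# and data are kept in /Nextcloud/ so that backups cover both.
+sudo ln -s /Nextcloud/nextcloud /var/www/nextcloud
+#+END_SRC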
+ +** The VPN Services + +The institute's public and campus VPNs have many common configuration +options that are discussed here. These are included, with example +certificates and network addresses, in the complete server +configurations of [[*The Front Role][The Front Role]] and [[*The Gate Role][The Gate Role]], as well as the +matching client configurations in [[*The Core Role][The Core Role]] and the =.ovpn= files +generated by [[*The Client Command][The Client Command]]. The configurations are based on the +documentation for OpenVPN v2.4: the ~openvpn(8)~ manual page and [[https://openvpn.net/community-resources/reference-manual-for-openvpn-2-4/][this +web page]]. + +*** The VPN Configuration Options + +The institute VPNs use UDP on a subnet topology (rather than +point-to-point) with "split tunneling". The UDP support accommodates +real-time, connection-less protocols. The split tunneling is for +efficiency with frontier bandwidth. The subnet topology, with the +~client-to-client~ option, allows members to "talk" to each other on +the VPN subnets using any (experimental) protocol. + +#+NAME: openvpn-dev-mode +#+CAPTION: ~openvpn-dev-mode~ +#+BEGIN_SRC conf +dev-type tun +dev ovpn +topology subnet +client-to-client +#+END_SRC + +A ~keepalive~ option is included on the servers so that clients detect +an unreachable server and reset the TLS session. The option's default +is doubled to 2 minutes out of respect for frontier service +interruptions. + +#+NAME: openvpn-keepalive +#+CAPTION: ~openvpn-keepalive~ +#+BEGIN_SRC conf +keepalive 10 120 +#+END_SRC + +As mentioned in [[*The Name Service][The Name Service]], the institute uses a campus name +server. OpenVPN is instructed to push its address and the campus +search domain. 
+ +#+NAME: openvpn-dns +#+CAPTION: ~openvpn-dns~ +#+BEGIN_SRC conf +push "dhcp-option DOMAIN {{ domain_priv }}" +push "dhcp-option DNS {{ core_addr }}" +#+END_SRC + +The institute does not put the OpenVPN server in a ~chroot~ jail, but +it does drop privileges to run as user ~nobody:nobody~. The +~persist-~ options are needed because ~nobody~ cannot open the tunnel +device nor the key files. + +#+NAME: openvpn-drop-priv +#+CAPTION: ~openvpn-drop-priv~ +#+BEGIN_SRC conf +user nobody +group nogroup +persist-key +persist-tun +#+END_SRC + +The institute does a little additional hardening, sacrificing some +compatibility with out-of-date clients. Such clients are generally +frowned upon at the institute. Here ~cipher~ is set to ~AES-256-GCM~, +the default for OpenVPN v2.4, and ~auth~ is upped to ~SHA256~ from +~SHA1~. + +#+NAME: openvpn-crypt +#+CAPTION: ~openvpn-crypt~ +#+BEGIN_SRC conf +cipher AES-256-GCM +auth SHA256 +#+END_SRC + +Finally, a ~max-client~ limit was chosen to frustrate flooding while +accommodating a few members with a handful of devices each. + +#+NAME: openvpn-max +#+CAPTION: ~openvpn-max~ +#+BEGIN_SRC conf +max-clients 20 +#+END_SRC + +The institute's servers are lightly loaded so a few debugging options +are appropriate. To help recognize host addresses in the logs, and +support direct client-to-client communication, host IP addresses are +made "persistent" in the =ipp.txt= file. The server's status is +periodically written to the =openvpn-status.log= and verbosity is +raised from the default level 1 to level 3 (just short of a deluge). + +#+NAME: openvpn-debug +#+CAPTION: ~openvpn-debug~ +#+BEGIN_SRC conf +ifconfig-pool-persist ipp.txt +status openvpn-status.log +verb 3 +#+END_SRC + +** Accounts + +A small institute has just a handful of members. For simplicity (and +thus security) static configuration files are preferred over complex +account management systems, LDAP, Active Directory, and the like. 
The +Ansible scripts configure the same set of user accounts on Core and +Front. [[*The Institute Commands][The Institute Commands]] (e.g. ~./inst new dick~) capture the +processes of enrolling, modifying and retiring members of the +institute. They update the administrator's membership roll, and run +Ansible to create (and disable) accounts on Core, Front, Nextcloud, +etc. + +The small institute does not use disk quotas nor access control lists. +It relies on Unix group membership and permissions. It is Debian +based and thus uses "user groups" by default. Sharing is typically +accomplished via the campus cloud and the resulting desktop files can +all be private (readable and writable only by the owner) by default. + +*** The Administration Accounts + +The institute avoids the use of the ~root~ account (~uid 0~) because +it is exempt from the normal Unix permissions checking. The ~sudo~ +command is used to consciously (conscientiously!) run specific scripts +and programs as ~root~. When installation of a Debian OS leaves the +host with no user accounts, just the ~root~ account, the next step is +to create a system administrator's account named ~sysadm~ and to give +it permission to use the ~sudo~ command (e.g. as described in [[*The Front Machine][The +Front Machine]]). When installation prompts for the name of an +initial, privileged user account the same name is given (e.g. as +described in [[*The Core Machine][The Core Machine]]). Installation may /not/ prompt and +still create an initial user account with a distribution specific name +(e.g. ~pi~). Any name can be used as long as it is provided as the +value of ~ansible_user~ in =hosts=. Its password is specified by a +vault-encrypted variable in the =Secret/become.yml= file. (The +=hosts= and =Secret/become.yml= files are described in [[*The Ansible Configuration][The Ansible +Configuration]].) 
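+For illustration only, a minimal =hosts= entry naming such an
+account might look like the following (the group and host names here
+are hypothetical):
+
+#+BEGIN_SRC conf
+[campus]
+core  ansible_user=sysadm
+#+END_SRC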
+ +*** The Monkey Accounts + +The institute's Core uses a special account named ~monkey~ to run +background jobs with limited privileges. One of those jobs is to keep +the public web site mirror up-to-date, so a corresponding ~monkey~ +account is created on Front as well. + +** Keys + +The institute keeps its "master secrets" in an encrypted +volume on an off-line hard drive, e.g. a LUKS (Linux Unified Key +Setup) format partition on a USB pen/stick. The =Secret/= +sub-directory is actually a symbolic link to this partition's +automatic mount point, e.g. =/media/sysadm/ADE7-F866/=. Unless this +volume is mounted (unlocked) at =Secret/=, none of the ~./inst~ +commands will work. + +Chief among the institute's master secrets is the SSH key to the +privileged accounts on /all/ of the institute servers. It is stored +in =Secret/ssh_admin/id_rsa=. The institute uses several more SSH +keys listed here: + + - =Secret/ssh_admin/= :: The SSH key pair for A Small Institute + Administrator. + - =Secret/ssh_monkey/= :: The key pair used by Monkey to update the + website on Front (and other unprivileged tasks). + - =Secret/ssh_front/= :: The host key pair used by Front to + authenticate itself. + +The institute uses a number of X.509 certificates to authenticate VPN +clients and servers. They are created by the EasyRSA Certificate +Authority stored in =Secret/CA/=. + + - =Secret/CA/pki/ca.crt= :: The institute CA (certificate + authority). + + - =Secret/CA/pki/issued/small.example.org.crt= :: The public Apache, + Postfix, and OpenVPN servers on Front. + + - =Secret/CA/pki/issued/gate.small.private.crt= :: The campus + OpenVPN server on Gate. + + - =Secret/CA/pki/issued/core.small.private.crt= :: The campus + Apache (thus Nextcloud), and Dovecot-IMAPd servers. + + - =Secret/CA/pki/issued/core.crt= :: Core's client certificate by + which it authenticates to Front. 
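+An issued certificate can be spot-checked against the institute CA
+with stock OpenSSL, e.g. (a sketch):
+
+#+BEGIN_SRC sh
+openssl verify -CAfile Secret/CA/pki/ca.crt \
+        Secret/CA/pki/issued/core.crt
+#+END_SRC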
+ +The ~./inst client~ command creates client certificates and keys, and +can generate OpenVPN configuration (=.ovpn=) files for Android and +Debian. The command updates the institute membership roll, requiring +the member's username, keeping a list of the member's clients (in case +all authorizations need to be revoked quickly). The list of client +certificates that have been revoked is stored along with the +membership roll (in =private/members.yml= as the value of ~revoked~). + +Finally, the institute uses an OpenPGP key to secure sensitive emails +(containing passwords or private keys) to Core. + + - =Secret/root.gnupg/= :: The "home directory" used to create the + public/secret key pair. + - =Secret/root-pub.pem= :: The ASCII armored OpenPGP public key for + e.g. ~root@core.small.private~. + - =Secret/root-sec.pem= :: The ASCII armored OpenPGP secret key. + +When [[*The CA Command][The CA Command]] sees an empty =Secret/CA/= directory, as +though just created by running the EasyRSA ~make-cadir~ command in +=Secret/= (a new, encrypted volume), the ~./inst CA~ command creates +all of the certificates and keys mentioned above. It may prompt for +the institute's full name. + +The institute administrator updates a couple encrypted copies of this +drive after enrolling new members, changing a password, issuing VPN +credentials, etc. + +: rsync -a Secret/ Secret2/ +: rsync -a Secret/ Secret3/ + +This is out of consideration for the fragility of USB drives, and the +importance of a certain SSH private key, without which the +administrator will have to login with a password, hopefully stored in +the administrator's password keep, to install a new SSH key. + +** Backups + +The small institute backs up its data, but not so much so that nothing +can be deleted. It actually mirrors user directories (=/home/=), the +web sites (=/WWW/=), Nextcloud (=/Nextcloud/=), and any capitalized +root directory entry, to a large off-line disk. 
Where incremental +backups are desired, a CMS like ~git~ is used. + +Off-site backups are not a priority due to cost and trust issues, and +the low return on the investment given the minuscule risk of a +catastrophe big enough to obliterate all local copies. And the +institute's public contributions are typically replicated in public +code repositories like GitHub and GNU Savannah. + +The following example =/usr/local/sbin/backup= script pauses +Nextcloud, dumps its database, rsyncs =/home/=, =/WWW/= and +=/Nextcloud/= to a =/backup/= volume (mounting and unmounting +=/backup/= if necessary), then continues Nextcloud. The script +assumes the backup volume is labeled ~Backup~ and formatted per LUKS +version 2. + +Given the ~-n~ flag, the script does a "pre-sync" which does not pause +Nextcloud nor dump its DB. A pre-sync gets the big file (video) +copies done while Nextcloud continues to run. A follow-up ~sudo +backup~ (/without/ ~-n~) produces the complete copy (with all the +files mentioned in the Nextcloud database dump). + +#+NAME: backup +#+CAPTION: =private/backup= +#+BEGIN_SRC sh :tangle private/backup :mkdirp yes :tangle-mode u=rw +#!/bin/bash -e +# +# DO NOT EDIT. Maintained (will be replaced) by Ansible. +# +# sudo backup [-n] + +if [ `id -u` != "0" ] +then + echo "This script must be run as root." + exit 1 +fi + +if [ "$1" = "-n" ] +then + presync=yes + shift +fi + +if [ "$#" != "0" ] +then + echo "usage: $0 [-n]" + exit 2 +fi + +function cleanup () { + sleep 2 + finish +} + +trap cleanup SIGHUP SIGINT SIGQUIT SIGPIPE SIGTERM + +function start () { + + if ! mountpoint -q /backup/ + then + echo "Mounting /backup/." + cryptsetup luksOpen /dev/disk/by-partlabel/Backup backup + mount /dev/mapper/backup /backup + mounted=indeed + else + echo "Found /backup/ already mounted." + mounted= + fi + + if [ ! -d /backup/home ] + then + echo "The backup device should be mounted at /backup/" + echo "yet there is no /backup/home/ directory." + exit 2 + fi + + if [ ! 
$presync ] + then + echo "Putting nextcloud into maintenance mode." + ( cd /Nextcloud/nextcloud/ + sudo -u www-data php occ maintenance:mode --on &>/dev/null ) + + echo "Dumping nextcloud database." + ( cd /Nextcloud/ + umask 07 + BAK=`date +"%Y%m%d"`-dbbackup.bak.gz + CNF=/Nextcloud/dbbackup.cnf + mysqldump --defaults-file=$CNF nextcloud | gzip > $BAK + chmod 440 $BAK ) + fi + +} + +function finish () { + + if [ ! $presync ] + then + echo "Putting nextcloud back into service." + ( cd /Nextcloud/nextcloud/ + sudo -u www-data php occ maintenance:mode --off &>/dev/null ) + fi + + if [ $mounted ] + then + echo "Unmounting /backup/." + umount /backup + cryptsetup luksClose backup + mounted= + fi + echo "Done." + echo "The backup device can be safely disconnected." + +} + +start + +for D in /home /[A-Z]*; do + echo "Updating /backup$D/." + ionice --class Idle --ignore \ + rsync -av --delete --exclude=.NoBackups $D/ /backup$D/ +done + +finish +#+END_SRC + + +* The Particulars + +This chapter introduces Ansible variables intended to simplify +changes, like customization for another institute's particulars. The +variables are separated into /public/ information (e.g. an institute's +name) or /private/ information (e.g. a network interface address), and +stored in separate files: =public/vars.yml= and =private/vars.yml=. + +The example settings in this document configure VirtualBox VMs as +described in the [[*Testing][Testing]] chapter. For more information about how a +small institute turns the example Ansible code into a working Ansible +configuration, see chapter [[*The Ansible Configuration][The Ansible Configuration]]. + +** Generic Particulars + +The small institute's domain name is used quite frequently in the +Ansible code. The example used here is ~small.example.org~. The +following line sets ~domain_name~ to that value. (Ansible will then +replace ~{{ domain_name }}~ in the code with ~small.example.org~.) 

#+CAPTION: =public/vars.yml=
#+BEGIN_SRC conf :tangle public/vars.yml :mkdirp yes
---
domain_name: small.example.org
domain_priv: small.private
#+END_SRC

The private version of the institute's domain name should end with one
of the top-level domains expected for this purpose: =.intranet=,
=.internal=, =.private=, =.corp=, =.home= or =.lan=.[fn:5]

** Subnets

The small institute uses a private Ethernet, two VPNs, and an
untrusted Ethernet (for the campus Wi-Fi access point). Each must
have a unique private network address. Hosts using the VPNs are also
using foreign private networks, e.g. a notebook on a hotel Wi-Fi. To
better the chances that all of these networks get unique addresses,
the small institute uses addresses in the IANA's (Internet Assigned
Numbers Authority's) private network address ranges /except/ the
~192.168~ address range already in widespread use. This still leaves
69,632 8-bit networks (each addressing up to 254 hosts) from which to
choose. The following table lists their CIDRs (subnet numbers in
Classless Inter-Domain Routing notation) in abbreviated form (eliding
69,624 rows).
# 10.0.0.0 -- 10.255.255.255 => (* 256 256) subnets
# 172.16.0.0 -- 172.31.255.255 => (* 16 256) subnets
# (+ (* 256 256) (* 16 256)) => 69632 subnets

#+CAPTION: IANA Private 8-bit Subnetwork CIDRs
| Subnet CIDR     | Host Addresses                 |
|-----------------+--------------------------------|
| 10.0.0.0/24     | 10.0.0.1 -- 10.0.0.254         |
| 10.0.1.0/24     | 10.0.1.1 -- 10.0.1.254         |
| 10.0.2.0/24     | 10.0.2.1 -- 10.0.2.254         |
| ...             | ...                            |
| 10.255.255.0/24 | 10.255.255.1 -- 10.255.255.254 |
| 172.16.0.0/24   | 172.16.0.1 -- 172.16.0.254     |
| 172.16.1.0/24   | 172.16.1.1 -- 172.16.1.254     |
| 172.16.2.0/24   | 172.16.2.1 -- 172.16.2.254     |
| ...             | ...                            |
| 172.31.255.0/24 | 172.31.255.1 -- 172.31.255.254 |

The following Emacs Lisp randomly chooses one of these 8-bit subnets.
The small institute used it to pick its four private subnets.
An +example result follows the code. + +#+BEGIN_SRC emacs-lisp + (let ((bytes + (let ((i (random (+ 256 16)))) + (if (< i 256) + (list 10 i (1+ (random 254))) + (list 172 (+ 16 (- i 256)) (1+ (random 254))))))) + (format "%d.%d.%d.0/24" (car bytes) (cadr bytes) (caddr bytes))) +#+END_SRC + +#+RESULTS: +: 10.62.17.0/24 + +The four private networks are named and given example CIDRs in the +code block below. The small institute treats these addresses as +sensitive information so the code block below "tangles" into +=private/vars.yml= rather than =public/vars.yml=. Two of the +addresses are in ~192.168~ subnets because they are part of a test +configuration using mostly-default VirtualBoxes (described [[*Testing][here]]). + +#+CAPTION: =private/vars.yml= +#+BEGIN_SRC conf :tangle private/vars.yml :tangle-mode u=rw +--- +private_net_cidr: 192.168.56.0/24 +public_vpn_net_cidr: 10.177.86.0/24 +campus_vpn_net_cidr: 10.84.138.0/24 +gate_wifi_net_cidr: 192.168.57.0/24 +#+END_SRC + +The network addresses are needed in several additional formats, e.g. +network address and subnet mask (~10.84.138.0 255.255.255.0~). The +following boilerplate uses Ansible's ~ipaddr~ filter to set several +corresponding variables, each with an appropriate suffix, +e.g. ~_net_and_mask~ rather than ~_net_cidr~. 
+ +#+CAPTION: =private/vars.yml= +#+BEGIN_SRC conf :tangle private/vars.yml +private_net: "{{ private_net_cidr | ipaddr('network') }}" +private_net_mask: "{{ private_net_cidr | ipaddr('netmask') }}" +private_net_and_mask: "{{ private_net }} {{ private_net_mask }}" +public_vpn_net: "{{ public_vpn_net_cidr | ipaddr('network') }}" +public_vpn_net_mask: "{{ public_vpn_net_cidr | ipaddr('netmask') }}" +public_vpn_net_and_mask: + "{{ public_vpn_net }} {{ public_vpn_net_mask }}" +campus_vpn_net: "{{ campus_vpn_net_cidr | ipaddr('network') }}" +campus_vpn_net_mask: "{{ campus_vpn_net_cidr | ipaddr('netmask') }}" +campus_vpn_net_and_mask: + "{{ campus_vpn_net }} {{ campus_vpn_net_mask }}" +gate_wifi_net: "{{ gate_wifi_net_cidr | ipaddr('network') }}" +gate_wifi_net_mask: "{{ gate_wifi_net_cidr | ipaddr('netmask') }}" +gate_wifi_net_and_mask: + "{{ gate_wifi_net }} {{ gate_wifi_net_mask }}" +gate_wifi_broadcast: "{{ gate_wifi_net_cidr | ipaddr('broadcast') }}" +#+END_SRC + +The institute prefers to configure its services with IP addresses +rather than domain names, and one of the most important for secure and +reliable operation is Front's public IP address known to the world by +the institute's Internet domain name. + +#+CAPTION: =public/vars.yml= +#+BEGIN_SRC conf :tangle public/vars.yml +front_addr: 192.168.15.5 +#+END_SRC + +The example address is a private network address because the example +configuration is intended to run in a test jig made up of VirtualBox +virtual machines and networks, and the VirtualBox user manual uses +~192.168.15.0~ in its example configuration of a "NAT Network" +(simulating Front's ISP's network). + +Finally, five host addresses are needed frequently in the Ansible +code. The first two are Core's and Gate's addresses on the private +Ethernet. The next two are Gate's and the campus Wi-Fi's addresses on +the Gate-WiFi subnet, the tiny Ethernet (~gate_wifi_net~) between Gate +and the (untrusted) campus Wi-Fi access point. 
The last is Front's +address on the public VPN, perversely called ~front_private_addr~. +The following code block picks the obvious IP addresses for Core +(host 1) and Gate (host 2). + +#+CAPTION: =private/vars.yml= +#+BEGIN_SRC conf :tangle private/vars.yml +core_addr_cidr: "{{ private_net_cidr | ipaddr('1') }}" +gate_addr_cidr: "{{ private_net_cidr | ipaddr('2') }}" +gate_wifi_addr_cidr: "{{ gate_wifi_net_cidr | ipaddr('1') }}" +wifi_wan_addr_cidr: "{{ gate_wifi_net_cidr | ipaddr('2') }}" +front_private_addr_cidr: "{{ public_vpn_net_cidr | ipaddr('1') }}" + +core_addr: "{{ core_addr_cidr | ipaddr('address') }}" +gate_addr: "{{ gate_addr_cidr | ipaddr('address') }}" +gate_wifi_addr: "{{ gate_wifi_addr_cidr | ipaddr('address') }}" +wifi_wan_addr: "{{ wifi_wan_addr_cidr | ipaddr('address') }}" +front_private_addr: + "{{ front_private_addr_cidr | ipaddr('address') }}" +#+END_SRC + + +* The Hardware + +The small institute's network was built by its system administrator +using Ansible on a trusted notebook. The Ansible configuration and +scripts were generated by "tangling" the Ansible code included here. +([[*The Ansible Configuration][The Ansible Configuration]] describes how to do this.) The following +sections describe how Front, Gate and Core were prepared for Ansible. + +** The Front Machine + +Front is the small institute's public facing server, a virtual machine +on the Internets. It needs only as much disk as required by the +institute's public web site. Often the cheapest offering (4GB RAM, 1 +core, 20GB disk) is sufficient. The provider should make it easy and +fast to (re)initialize the machine to a factory fresh Debian Server, +and install additional Debian software packages. Indeed it should be +possible to quickly re-provision a new Front machine from a frontier +Internet café using just the administrator's notebook. + +*** A Digital Ocean Droplet + +The following example prepared a new front on a Digital Ocean droplet. 
+The institute administrator opened an account at Digital Ocean, +registered an ssh key, and used a Digital Ocean control panel to +create a new machine (again, one of the cheapest, smallest available) +with Ubuntu Server 20.04LTS installed. Once created, the machine and +its IP address (~159.65.75.60~) appeared on the panel. Using that +address, the administrator logged into the new machine with ~ssh~. + +On the administrator's notebook (in a terminal): + +: notebook$ ssh root@159.65.75.60 +: root@ubuntu# + +The freshly created Digital Ocean droplet came with just one account, +~root~, but the small institute avoids remote access to the "super +user" account (per the policy in [[*The Administration Accounts][The Administration Accounts]]), so the +administrator created a ~sysadm~ account with the ability to request +escalated privileges via the ~sudo~ command. + +: root@ubuntu# adduser sysadm +: ... +: New password: givitysticangout +: Retype new password: givitysticangout +: ... +: Full Name []: System Administrator +: ... +: Is the information correct? [Y/n] +: root@ubuntu# adduser sysadm sudo +: root@ubuntu# logout +: notebook$ + +The password was generated by ~gpw~, saved in the administrator's +password keep, and later added to =Secret/become.yml= as shown below. +(Producing a working Ansible configuration with =Secret/become.yml= +file is described in [[*The Ansible Configuration][The Ansible Configuration]].) + +: notebook$ gpw 1 16 +: givitysticangout +: notebook$ echo -n "become_front: " >>Secret/become.yml +: notebook$ ansible-vault encrypt_string givitysticangout \ +: notebook_ >>Secret/become.yml + +After creating the ~sysadm~ account on the droplet, the administrator +concatenated a personal public ssh key and the key found in +=Secret/ssh_admin/= (created by [[*The CA Command][The CA Command]]) into an =admin_keys= +file, copied it to the droplet, and installed it as the +=authorized_keys= for ~sysadm~. 

: notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
: notebook_ > admin_keys
: notebook$ rsync admin_keys sysadm@159.65.75.60:
: The authenticity of host '159.65.75.60' can't be established.
: ....
: Are you sure you want to continue connecting (...)? yes
: ...
: sysadm@159.65.75.60's password: givitysticangout
: notebook$ ssh sysadm@159.65.75.60
: sysadm@159.65.75.60's password: givitysticangout
: sysadm@ubuntu$ ( umask 077; mkdir .ssh; \
: sysadm@ubuntu_ cp admin_keys .ssh/authorized_keys; \
: sysadm@ubuntu_ rm admin_keys )
: sysadm@ubuntu$ logout
: notebook$ rm admin_keys
: notebook$

The administrator then tested the password-less ssh login as well as
the privilege escalation command.

: notebook$ ssh sysadm@159.65.75.60
: sysadm@ubuntu$ sudo head -1 /etc/shadow
: [sudo] password for sysadm:
: root:*:18355:0:99999:7:::

/After/ passing the above test, the administrator disabled root logins
on the droplet. The last command below tested that root logins were
indeed denied.

: sysadm@ubuntu$ sudo rm -r /root/.ssh
: sysadm@ubuntu$ logout
: notebook$ ssh root@159.65.75.60
: root@159.65.75.60: Permission denied (publickey).
: notebook$

At this point the droplet was ready for configuration by Ansible.
Later, after it had been provisioned with all of Front's services
/and/ tested, the institute's DNS records were updated, making
~159.65.75.60~ the domain name's new address.

** The Core Machine

Core is the small institute's private file, email, cloud and whatnot
server. It should have some serious horsepower (RAM, cores, GHz) and
storage (hundreds of gigabytes). An old desktop system might be
sufficient, and if it later proves not to be, moving Core to new
hardware is "easy" and good practice. It is also straightforward to
move the heaviest workloads (storage, cloud, internal web sites) to
additional machines.

Core need not have a desktop, and will probably be more reliable if it
is not also playing games.
It will run continuously 24/7 and will
benefit from a UPS (uninterruptible power supply). Its file system
and services are critical.

The following example prepared a new core on a PC with Debian 11
freshly installed. During installation, the machine was named ~core~,
no desktop or server software was installed, no root password was set,
and a privileged account named ~sysadm~ was created (per the policy in
[[*The Administration Accounts][The Administration Accounts]]).

: New password: oingstramextedil
: Retype new password: oingstramextedil
: ...
: Full Name []: System Administrator
: ...
: Is the information correct? [Y/n]

The password was generated by ~gpw~, saved in the administrator's
password keep, and later added to =Secret/become.yml= as shown below.
(Producing a working Ansible configuration with =Secret/become.yml=
file is described in [[*The Ansible Configuration][The Ansible Configuration]].)

: notebook$ gpw 1 16
: oingstramextedil
: notebook$ echo -n "become_core: " >>Secret/become.yml
: notebook$ ansible-vault encrypt_string oingstramextedil \
: notebook_ >>Secret/become.yml

With Debian freshly installed, Core needed several additional software
packages. The administrator temporarily plugged Core into a cable
modem and installed them as shown below.

: $ sudo apt install openssh-server rsync isc-dhcp-server netplan.io \
: _ bind9 fetchmail openvpn apache2

The Nextcloud configuration requires Apache2, MariaDB and a number of
PHP modules. Installing them while Core was on a cable modem sped up
final configuration "in position" (on a frontier).

: $ sudo apt install mariadb-server php php-{bcmath,curl,gd,gmp,json}\
: _ php-{mysql,mbstring,intl,imagick,xml,zip} \
: _ libapache2-mod-php

Next, the administrator concatenated a personal public ssh key and the
key found in =Secret/ssh_admin/= (created by [[*The CA Command][The CA Command]]) into an
=admin_keys= file, copied it to Core, and installed it as the
=authorized_keys= for ~sysadm~.

: notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \
: notebook_ > admin_keys
: notebook$ rsync admin_keys sysadm@core.lan:
: The authenticity of host 'core.lan' can't be established.
: ....
: Are you sure you want to continue connecting (...)? yes
: ...
: sysadm@core.lan's password: oingstramextedil
: notebook$ ssh sysadm@core.lan
: sysadm@core.lan's password: oingstramextedil
: sysadm@core$ ( umask 077; mkdir .ssh; \
: sysadm@core_ cp admin_keys .ssh/authorized_keys )
: sysadm@core$ rm admin_keys
: sysadm@core$ logout
: notebook$ rm admin_keys
: notebook$

Note that the name ~core.lan~ should be known to the cable modem's DNS
service. An IP address might be used instead, discovered with an ~ip
a~ command on Core.

Now Core no longer needed the Internets so it was disconnected from
the cable modem and connected to the campus Ethernet switch. Its
primary Ethernet interface was temporarily (manually) configured with
a new, private IP address and a default route.

In the example command lines below, the address ~10.227.248.1~ was
generated by the random subnet address picking procedure described in
[[*Subnets][Subnets]], and is named ~core_addr~ in the Ansible code. The second
address, ~10.227.248.2~, is the corresponding address for Gate's
Ethernet interface, and is named ~gate_addr~ in the Ansible
code.

: sysadm@core$ sudo ip address add 10.227.248.1/24 dev enp82s0
: sysadm@core$ sudo ip route add default via 10.227.248.2 dev enp82s0

At this point Core was ready for provisioning with Ansible.
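
The two temporary addresses above are simply hosts 1 and 2 of the
randomly chosen subnet, the same values that the ~ipaddr('1')~ and
~ipaddr('2')~ filters produce. The derivation is easy to check
without Ansible at hand; in the following shell sketch, the
~host_in_24~ helper is hypothetical (not part of the institute's
scripts) and handles only /24 subnets.

#+BEGIN_SRC sh
# Hypothetical helper (not part of the institute's scripts): print
# host N of a /24 subnet given in CIDR notation, mirroring what
# Ansible's ipaddr('N') filter yields in the simple /24 case.
host_in_24 () {
    net=${1%/*}             # e.g. 10.227.248.0/24 -> 10.227.248.0
    echo "${net%.*}.$2"     # e.g. 10.227.248 plus .N
}

host_in_24 10.227.248.0/24 1    # -> 10.227.248.1 (core_addr)
host_in_24 10.227.248.0/24 2    # -> 10.227.248.2 (gate_addr)
#+END_SRC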
+ +** The Gate Machine + +Gate is the small institute's route to the Internet, and the campus +Wi-Fi's route to the private Ethernet. It has three network +interfaces. + + 1. ~lan~ is its main Ethernet interface, connected to the campus's + private Ethernet switch. + 2. ~wifi~ is its second Ethernet interface, connected to the campus + Wi-Fi access point's WAN Ethernet interface (with a cross-over + cable). + 3. ~isp~ is its third network interface, connected to the campus + ISP. This could be an Ethernet device connected to a cable + modem. It could be a USB port tethered to a phone, a + USB-Ethernet adapter, or a wireless adapter connected to a + campground Wi-Fi access point, etc. + +#+BEGIN_EXAMPLE + =============== | ================================================== + | Premises + (Campus ISP) + | +----Member's notebook on campus + | | + | +----(Campus Wi-Fi) + | | + ============== Gate ================================================ + | Private + +----Ethernet switch +#+END_EXAMPLE + +*** Alternate Gate Topology + +While Gate and Core really need to be separate machines for security +reasons, the campus Wi-Fi and the ISP's Wi-Fi can be the same machine. +This avoids the need for a second Wi-Fi access point and leads to the +following topology. + +#+BEGIN_EXAMPLE + =============== | ================================================== + | Premises + (House ISP) + (House Wi-Fi)-----------Member's notebook on campus + (House Ethernet) + | + ============== Gate ================================================ + | Private + +----Ethernet switch +#+END_EXAMPLE +#+CAPTION: A small institute using its ISP's Wi-Fi access point. + +In this case Gate has two interfaces and there is no Gate-WiFi subnet. + +Support for this "alternate" topology is planned but /not/ yet +implemented. Like the original topology, it should require no +changes to a standard cable modem's default configuration (assuming +its Ethernet and Wi-Fi clients are allowed to communicate). 
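
In either topology, Gate's network adapters are ultimately identified
by their MAC addresses, the values of the ~gate_lan_mac~,
~gate_wifi_mac~ and ~gate_isp_mac~ variables in the Ansible code. A
hypothetical helper (a sketch, not part of the institute's scripts)
can reduce ~ip -o link show~ output to a short list of name/MAC pairs
for recording in those variables.

#+BEGIN_SRC sh
# Hypothetical sketch (not part of the institute's scripts): reduce
# `ip -o link show` output to "name mac" pairs, one per adapter,
# skipping the loopback device.
list_macs () {
    awk -F': ' '{
        name = $2; sub(/@.*/, "", name)
        if (name == "lo") next
        if (match($0, /link\/ether [0-9a-f:]+/))
            print name, substr($0, RSTART + 11, 17)
    }'
}

# Example run over two lines in `ip -o link show` format:
list_macs <<'EOF'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp82s0: <BROADCAST,MULTICAST,UP> mtu 1500 link/ether 08:00:27:4a:de:d2 brd ff:ff:ff:ff:ff:ff
EOF
# -> enp82s0 08:00:27:4a:de:d2
#+END_SRC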
+ +*** Original Gate Topology + +The Ansible code in this document is somewhat dependent on the +physical network shown in the [[*Overview][Overview]] wherein Gate has three network +interfaces. + +The following example prepared a new gate on a PC with Debian 11 +freshly installed. During installation, the machine was named ~gate~, +no desktop or server software was installed, no root password was set, +and a privileged account named ~sysadm~ was created (per the policy in +[[*The Administration Accounts][The Administration Accounts]]). + +: New password: icismassssadestm +: Retype new password: icismassssadestm +: ... +: Full Name []: System Administrator +: ... +: Is the information correct? [Y/n] + +The password was generated by ~gpw~, saved in the administrator's +password keep, and later added to =Secret/become.yml= as shown below. +(Producing a working Ansible configuration with =Secret/become.yml= +file is described in [[*The Ansible Configuration][The Ansible Configuration]].) + +: notebook$ gpw 1 16 +: icismassssadestm +: notebook$ echo -n "become_gate: " >>Secret/become.yml +: notebook$ ansible-vault encrypt_string icismassssadestm \ +: notebook_ >>Secret/become.yml + +With Debian freshly installed, Gate needed a couple additional +software packages. The administrator temporarily plugged Gate into a +cable modem and installed them as shown below. + +: $ sudo apt install openssh-server isc-dhcp-server netplan.io + +Next, the administrator concatenated a personal public ssh key and the +key found in =Secret/ssh_admin/= (created by [[*The CA Command][The CA Command]]) into an +=admin_keys= file, copied it to Gate, and installed it as the +=authorized_keys= for ~sysadm~. + +: notebook$ cat ~/.ssh/id_rsa.pub Secret/ssh_admin/id_rsa.pub \ +: notebook_ > admin_keys +: notebook$ rsync admin_keys sysadm@gate.lan: +: The authenticity of host 'gate.lan' can't be established. +: .... +: Are you sure you want to continue connecting (...)? yes +: ... 
: sysadm@gate.lan's password: icismassssadestm
: notebook$ ssh sysadm@gate.lan
: sysadm@gate.lan's password: icismassssadestm
: sysadm@gate$ ( umask 077; mkdir .ssh; \
: sysadm@gate_ cp admin_keys .ssh/authorized_keys )
: sysadm@gate$ rm admin_keys
: sysadm@gate$ logout
: notebook$ rm admin_keys
: notebook$

Note that the name ~gate.lan~ should be known to the cable modem's DNS
service. An IP address might be used instead, discovered with an ~ip
a~ command on Gate.

Now Gate no longer needed the Internets so it was disconnected from
the cable modem and connected to the campus Ethernet switch. Its
primary Ethernet interface was temporarily (manually) configured with
a new, private IP address.

In the example command lines below, the address ~10.227.248.2~ was
generated by the random subnet address picking procedure described in
[[*Subnets][Subnets]], and is named ~gate_addr~ in the Ansible code.

: $ sudo ip address add 10.227.248.2/24 dev eth0

Gate was also connected to the USB Ethernet dongles cabled to the
campus Wi-Fi access point and the campus ISP. The three network
adapters are known by their MAC addresses, the values of the variables
~gate_lan_mac~, ~gate_wifi_mac~, and ~gate_isp_mac~. (For more
information, see the Gate role's [[netplan-gate][Configure Netplan]] task.)

At this point Gate was ready for provisioning with Ansible.


* The Front Role

The ~front~ role installs and configures the services expected on the
institute's publicly accessible "front door": email, web, VPN. The
virtual machine is prepared with an Ubuntu Server install and remote
access to a privileged, administrator's account. (For details, see
[[*The Front Machine][The Front Machine]].)

Front initially presents the same self-signed, "snake oil" server
certificate for its HTTP, SMTP and IMAP services, created by the
institute's certificate authority but "snake oil" all the same
(assuming the small institute is not a well recognized CA).
The HTTP,
SMTP and IMAP servers are configured to use the certificate (and
private key) in =/etc/server.crt= (and =/etc/server.key=), so
replacing the "snake oil" is as easy as replacing these two files,
perhaps with symbolic links to, for example,
=/etc/letsencrypt/live/small.example.org/fullchain.pem=.

Note that the OpenVPN server does /not/ use =/etc/server.crt=. It
uses the institute's CA and server certificates, and expects client
certificates signed by the institute CA.

** Include Particulars

The ~front~ role's tasks contain references to several common
institute particulars, variables in the public and private =vars.yml=
files and the institute membership roll in =private/members.yml=. The
first ~front~ role tasks are to include these files (described in [[*The Particulars][The
Particulars]] and [[*Account Management][Account Management]]).

The code block below is the first to tangle into
=roles_t/front/tasks/main.yml=.

#+CAPTION: =roles_t/front/tasks/main.yml=
#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :mkdirp yes
---
- name: Include public variables.
  include_vars: ../public/vars.yml
  tags: accounts

- name: Include private variables.
  include_vars: ../private/vars.yml
  tags: accounts

- name: Include members.
  include_vars: "{{ lookup('first_found', membership_rolls) }}"
  tags: accounts
#+END_SRC

** Configure Hostname

This task ensures that Front's =/etc/hostname= and =/etc/mailname= are
correct. The correct =/etc/mailname= is essential to proper email
delivery.

#+CAPTION: =roles_t/front/tasks/main.yml=
#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml
- name: Configure hostname.
  become: yes
  copy:
    content: "{{ domain_name }}\n"
    dest: "{{ item }}"
  loop:
    - /etc/hostname
    - /etc/mailname
  notify: Update hostname.
#+END_SRC

#+CAPTION: =roles_t/front/handlers/main.yml=
#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml :mkdirp yes
---
- name: Update hostname.
  become: yes
  command: hostname -F /etc/hostname
#+END_SRC

** Enable Systemd Resolved <<enable-resolved>>

The ~systemd-networkd~ and ~systemd-resolved~ service units are not
enabled by default in Debian, but /are/ the default in Ubuntu, and
work with Netplan. The =/usr/share/doc/systemd/README.Debian.gz= file
recommends both services be enabled /and/ =/etc/resolv.conf= be
replaced with a symbolic link to =/run/systemd/resolve/resolv.conf=.
The institute follows these recommendations (and /not/ the suggestion
to enable "persistent logging", yet). In Debian 12 there is a
~systemd-resolved~ package that symbolically links =/etc/resolv.conf=
(and provides =/lib/systemd/systemd-resolved=, formerly part of the
~systemd~ package).

These tasks are included in all of the roles, and so are given in a
separate code block named ~enable-resolved~.[fn:1]

#+CAPTION: =roles_t/front/tasks/main.yml=
#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb yes
<<enable-resolved>>
#+END_SRC

#+NAME: enable-resolved
#+CAPTION: ~enable-resolved~
#+BEGIN_SRC conf

- name: Install systemd-resolved.
  become: yes
  apt: pkg=systemd-resolved
  when:
  - ansible_distribution == 'Debian'
  - 11 < ansible_distribution_major_version|int

- name: Enable/Start systemd-networkd.
  become: yes
  systemd:
    service: systemd-networkd
    enabled: yes
    state: started

- name: Enable/Start systemd-resolved.
  become: yes
  systemd:
    service: systemd-resolved
    enabled: yes
    state: started

- name: Link /etc/resolv.conf.
  become: yes
  file:
    path: /etc/resolv.conf
    src: /run/systemd/resolve/resolv.conf
    state: link
    force: yes
  when:
  - ansible_distribution == 'Debian'
  - 12 > ansible_distribution_major_version|int
#+END_SRC

** Add Administrator to System Groups

The administrator often needs to read (directories of) log files owned
by groups ~root~ and ~adm~. Adding the administrator's account to
these groups speeds up debugging.
+ +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Add {{ ansible_user }} to system groups. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: root,adm +#+END_SRC + +** Configure SSH + +The SSH service on Front needs to be known to Monkey. The following +tasks ensure this by replacing the automatically generated keys with +those stored in =Secret/ssh_front/etc/ssh/= and restarting the server. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Install SSH host keys. + become: yes + copy: + src: ../Secret/ssh_front/etc/ssh/{{ item.name }} + dest: /etc/ssh/{{ item.name }} + mode: "{{ item.mode }}" + loop: + - { name: ssh_host_ecdsa_key, mode: "u=rw,g=,o=" } + - { name: ssh_host_ecdsa_key.pub, mode: "u=rw,g=r,o=r" } + - { name: ssh_host_ed25519_key, mode: "u=rw,g=,o=" } + - { name: ssh_host_ed25519_key.pub, mode: "u=rw,g=r,o=r" } + - { name: ssh_host_rsa_key, mode: "u=rw,g=,o=" } + - { name: ssh_host_rsa_key.pub, mode: "u=rw,g=r,o=r" } + notify: Reload SSH server. +#+END_SRC + +#+CAPTION: =roles_t/front/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml + +- name: Reload SSH server. + become: yes + systemd: + service: ssh + state: reloaded +#+END_SRC + +** Configure Monkey + +The small institute runs cron jobs and web scripts that generate +reports and perform checks. The un-privileged jobs are run by a +system account named ~monkey~. One of Monkey's more important jobs on +Core is to run ~rsync~ to update the public web site on Front. Monkey +on Core will login as ~monkey~ on Front to synchronize the files (as +described in [[apache2-front][*Configure Apache2]]). To do that without needing a +password, the ~monkey~ account on Front should authorize Monkey's SSH +key on Core. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Create monkey. 
+ become: yes + user: + name: monkey + system: yes + +- name: Authorize monkey@core. + become: yes + vars: + pubkeyfile: ../Secret/ssh_monkey/id_rsa.pub + authorized_key: + user: monkey + key: "{{ lookup('file', pubkeyfile) }}" + manage_dir: yes + +- name: Add {{ ansible_user }} to monkey group. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: monkey +#+END_SRC + +** Install Rsync + +Monkey uses Rsync to keep the institute's public web site up-to-date. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Install rsync. + become: yes + apt: pkg=rsync +#+END_SRC + +** Install Unattended Upgrades + +The institute prefers to install security updates as soon as possible. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Install basic software. + become: yes + apt: pkg=unattended-upgrades +#+END_SRC + +** Configure User Accounts + +User accounts are created immediately so that Postfix and Dovecot can +start delivering email immediately, /without/ returning "no such +recipient" replies. The [[*Account Management][Account Management]] chapter describes the +~members~ and ~usernames~ variables used below. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Create user accounts. + become: yes + user: + name: "{{ item }}" + password: "{{ members[item].password_front }}" + update_password: always + home: /home/{{ item }} + loop: "{{ usernames }}" + when: members[item].status == 'current' + tags: accounts + +- name: Disable former users. + become: yes + user: + name: "{{ item }}" + password: "!" + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts + +- name: Revoke former user authorized_keys. 
+ become: yes + file: + path: /home/{{ item }}/.ssh/authorized_keys + state: absent + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts +#+END_SRC + +** Trust Institute Certificate Authority + +Front should recognize the institute's Certificate Authority as +trustworthy, so its certificate is added to Front's set of trusted +CAs. More information about how the small institute manages its +X.509 certificates is available in [[*Keys][Keys]]. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Trust the institute CA. + become: yes + copy: + src: ../Secret/CA/pki/ca.crt + dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt + mode: u=r,g=r,o=r + owner: root + group: root + notify: Update CAs. +#+END_SRC + +#+CAPTION: =roles_t/front/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml + +- name: Update CAs. + become: yes + command: update-ca-certificates +#+END_SRC + +** Install Server Certificate + +The servers on Front use the same certificate (and key) to +authenticate themselves to institute clients. They share the +=/etc/server.crt= and =/etc/server.key= files, the latter only +readable by ~root~. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Install server certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/server.{{ item.typ }} + mode: "{{ item.mode }}" + force: no + loop: + - { path: "issued/{{ domain_name }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/{{ domain_name }}", typ: key, + mode: "u=r,g=,o=" } + notify: + - Restart Postfix. + - Restart Dovecot. +#+END_SRC + +** Configure Postfix on Front + +Front uses Postfix to provide the institute's public SMTP service, and +uses the institute's domain name for its host name. The default +Debian configuration (for an "Internet Site") is nearly sufficient. 
Manual installation may prompt for configuration type and mail name.
The appropriate answers are listed here but will be checked
(corrected) by Ansible tasks below.

- General type of mail configuration: Internet Site
- System mail name: small.example.org

As discussed in [[*The Email Service][The Email Service]] above, Front's Postfix configuration
includes site-wide support for larger message sizes, shorter queue
times, the relaying configuration, and the common path to incoming
emails. These and a few Front-specific Postfix configuration
settings make up the complete configuration (below).

Front relays messages arriving via the institute's public VPN, over
which Core relays messages from the campus.

#+NAME: postfix-front-networks
#+CAPTION: ~postfix-front-networks~
#+BEGIN_SRC conf
- p: mynetworks
  v: >-
    {{ public_vpn_net_cidr }}
    127.0.0.0/8
    [::ffff:127.0.0.0]/104
    [::1]/128
#+END_SRC

Front uses one recipient restriction to make things difficult for
spammers, with ~permit_mynetworks~ at the start to /not/ make things
difficult for internal hosts, which do /not/ have (public) domain
names.

#+NAME: postfix-front-restrictions
#+CAPTION: ~postfix-front-restrictions~
#+BEGIN_SRC conf
- p: smtpd_recipient_restrictions
  v: >-
    permit_mynetworks
    reject_unauth_pipelining
    reject_unauth_destination
    reject_unknown_sender_domain
#+END_SRC

Front uses Postfix header checks to strip ~Received~ headers from
outgoing messages. These headers contain campus host and network
names and addresses in the clear (un-encrypted). Stripping them
improves network privacy and security. Front also strips ~User-Agent~
headers just to make it harder to target the program(s) members use to
open their email. These headers should be stripped only from outgoing
messages; incoming messages are delivered locally, without
~smtp_header_checks~.
+ +#+NAME: postfix-header-checks +#+CAPTION: ~postfix-header-checks~ +#+BEGIN_SRC conf +- p: smtp_header_checks + v: regexp:/etc/postfix/header_checks.cf +#+END_SRC + +#+NAME: postfix-header-checks-content +#+CAPTION: ~postfix-header-checks-content~ +#+BEGIN_SRC conf +/^Received:/ IGNORE +/^User-Agent:/ IGNORE +#+END_SRC + +The complete Postfix configuration for Front follows. In addition to +the options already discussed, it must override the ~loopback-only~ +Debian default for ~inet_interfaces~. + +#+NAME: postfix-front +#+CAPTION: ~postfix-front~ +#+BEGIN_SRC conf :noweb yes +- { p: smtpd_tls_cert_file, v: /etc/server.crt } +- { p: smtpd_tls_key_file, v: /etc/server.key } +<> +<> +<> +<> +<> +<> +<> +#+END_SRC + +The following Ansible tasks install Postfix, modify +=/etc/postfix/main.cf= according to the settings given above, and +start and enable the service. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb yes + +- name: Install Postfix. + become: yes + apt: pkg=postfix + +- name: Configure Postfix. + become: yes + lineinfile: + path: /etc/postfix/main.cf + regexp: "^ *{{ item.p }} *=" + line: "{{ item.p }} = {{ item.v }}" + loop: + <> + notify: Restart Postfix. + +- name: Install Postfix header_checks. + become: yes + copy: + content: | + <> + dest: /etc/postfix/header_checks.cf + notify: Postmap header checks. + +- name: Enable/Start Postfix. + become: yes + systemd: + service: postfix + enabled: yes + state: started +#+END_SRC + +#+CAPTION: =roles_t/front/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml + +- name: Restart Postfix. + become: yes + systemd: + service: postfix + state: restarted + +- name: Postmap header checks. + become: yes + command: + chdir: /etc/postfix/ + cmd: postmap header_checks.cf + notify: Restart Postfix. 
+#+END_SRC
+
+** Configure Public Email Aliases
+
+The institute's Front needs to deliver email addressed to a number of
+common aliases as well as those advertised on the web site. System
+daemons like ~cron(8)~ may also send email to system accounts like
+~monkey~. The following aliases make these customary mailboxes
+available. The aliases are installed in =/etc/aliases= in a block
+with a special marker so that additional blocks can be installed by
+other Ansible roles. Note that ~postmaster~ forwards to ~root~ in the
+default Debian configuration, and that the block below includes the
+crucial ~root~ alias, forwarding it to the administrator's account.
+A more specialized role could instead provide the ~root~ alias in a
+separate block.
+
+#+CAPTION: =roles_t/front/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml
+- name: Install institute email aliases.
+  become: yes
+  blockinfile:
+    block: |
+      abuse: root
+      webmaster: root
+      admin: root
+      monkey: monkey@{{ front_private_addr }}
+      root: {{ ansible_user }}
+    path: /etc/aliases
+    marker: "# {mark} INSTITUTE MANAGED BLOCK"
+  notify: New aliases.
+#+END_SRC
+
+#+CAPTION: =roles_t/front/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml
+
+- name: New aliases.
+  become: yes
+  command: newaliases
+#+END_SRC
+
+** Configure Dovecot IMAPd
+
+Front uses Dovecot's IMAPd to allow user Fetchmail jobs on Core to
+pick up messages. Front's Dovecot configuration is largely the Debian
+default with POP and IMAP (without TLS) support disabled. This is a
+bit "over the top" given that Core accesses Front via VPN, but helps
+to ensure privacy even when members must, in extremis, access recent
+email directly from their accounts on Front. For more information
+about Front's role in the institute's email services, see [[*The Email Service][The Email
+Service]].
+
+The institute follows the recommendation in the package
+=README.Debian= (in =/usr/share/dovecot-core/=). 
Note that the
+default "snake oil" certificate can be replaced with one signed by a
+recognized authority (e.g. Let's Encrypt) so that email apps will not
+ask about trusting the self-signed certificate.
+
+The following Ansible tasks install Dovecot's IMAP daemon and its
+=/etc/dovecot/local.conf= configuration file, then start the service
+and enable it to start at every reboot.
+
+#+CAPTION: =roles_t/front/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb yes
+
+- name: Install Dovecot IMAPd.
+  become: yes
+  apt: pkg=dovecot-imapd
+
+- name: Configure Dovecot IMAPd.
+  become: yes
+  copy:
+    content: |
+      <>
+      ssl_cert = </etc/server.crt
+      ssl_key = </etc/server.key
+      <>
+    dest: /etc/dovecot/local.conf
+  notify: Restart Dovecot.
+
+- name: Enable/Start Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    enabled: yes
+    state: started
+#+END_SRC
+
+#+CAPTION: =roles_t/front/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml
+
+- name: Restart Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    state: restarted
+#+END_SRC
+
+** Configure Apache2 <>
+
+This is the small institute's public web site. It is simple, static,
+and thus (hopefully) difficult to subvert. There are no server-side
+scripts to run. The standard Debian install runs the server under the
+~www-data~ account, which does not need /any/ permissions. It will
+serve only world-readable files.
+
+The server's document root, =/home/www/=, is separate from the Debian
+default =/var/www/html/= and (presumably) on the largest disk
+partition. The directory tree, from the document root to the leaf
+HTML files, should be owned by ~monkey~, and /only/ writable by its
+owner. It should /not/ be writable by the Apache2 server (running as
+~www-data~).
+
+The institute uses several SSL directives to trim protocol and cipher
+suite compatibility down, eliminating old and insecure methods and
+providing for forward secrecy. 
Along with an up-to-date Let's Encrypt +certificate, these settings win the institute's web site an A rating +from Qualys SSL Labs ([[https://www.ssllabs.com/]]). + +The ~apache-ciphers~ block below is included last in the Apache2 +configuration, so that its ~SSLCipherSuite~ directive can override +(narrow) any list of ciphers set earlier (e.g. by Let's +Encrypt![fn:2]). The protocols and cipher suites specified here were +taken from [[https://www.ssllabs.com/projects/best-practices]] in 2022. + +#+NAME: apache-ciphers +#+CAPTION: ~apache-ciphers~ +#+BEGIN_SRC conf +SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 +SSLHonorCipherOrder on +SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256', + 'ECDHE-ECDSA-AES256-GCM-SHA384', + 'ECDHE-ECDSA-AES128-SHA', + 'ECDHE-ECDSA-AES256-SHA', + 'ECDHE-ECDSA-AES128-SHA256', + 'ECDHE-ECDSA-AES256-SHA384', + 'ECDHE-RSA-AES128-GCM-SHA256', + 'ECDHE-RSA-AES256-GCM-SHA384', + 'ECDHE-RSA-AES128-SHA', + 'ECDHE-RSA-AES256-SHA', + 'ECDHE-RSA-AES128-SHA256', + 'ECDHE-RSA-AES256-SHA384', + 'DHE-RSA-AES128-GCM-SHA256', + 'DHE-RSA-AES256-GCM-SHA384', + 'DHE-RSA-AES128-SHA', + 'DHE-RSA-AES256-SHA', + 'DHE-RSA-AES128-SHA256', + 'DHE-RSA-AES256-SHA256', + '!aNULL', + '!eNULL', + '!LOW', + '!3DES', + '!MD5', + '!EXP', + '!PSK', + '!SRP', + '!DSS', + '!RC4' ] |join(":") }} +#+END_SRC + +The institute supports public member (static) web pages. A member can +put an =index.html= file in their =~/Public/HTML/= directory on Front +and it will be served as ~https://small.example.org/~member/~ (if the +member's account name is ~member~ and the file is world readable). + +On Front, a member's web pages are available only when they appear in +=/home/www-users/= (via a symbolic link), giving the administration +more control over what appears on the public web site. The tasks +below create or remove the symbolic links. 
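In Python terms, the link maintenance those tasks perform looks
roughly like the sketch below (the ~members~ mapping is hypothetical;
the real work is done by Ansible's ~file~ module with ~state: link~,
~force: yes~, and ~state: absent~).

```python
import os

def update_userdir_links(userdir, home_root, members):
    """Create a UserDir link for each current member and remove the
    link for anyone else, mirroring the Ansible tasks."""
    for name, status in members.items():
        link = os.path.join(userdir, name)
        if status == "current":
            target = os.path.join(home_root, name, "Public", "HTML")
            if os.path.lexists(link):
                os.remove(link)        # force: yes
            os.symlink(target, link)   # state: link
        elif os.path.lexists(link):
            os.remove(link)            # state: absent
```

Running it twice is harmless, just as re-running the playbook is.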
+
+The following are the necessary Apache2 directives: a ~UserDir~
+directive naming =/home/www-users/=, a matching ~Directory~ block that
+allows the server to follow the symbolic links, and a ~Directory~
+block that matches the user directories and includes the standard
+~Require~ and ~AllowOverride~ directives used on all of the
+institute's static web sites (~https://small.example.org/~,
+~http://live/~, and ~http://test/~).
+
+#+NAME: apache-userdir-front
+#+CAPTION: ~apache-userdir-front~
+#+BEGIN_SRC conf :noweb yes
+UserDir /home/www-users
+<Directory /home/www-users/>
+        <<apache-userdir-directory>>
+</Directory>
+#+END_SRC
+
+#+NAME: apache-userdir-directory
+#+CAPTION: ~apache-userdir-directory~
+#+BEGIN_SRC conf
+Require all granted
+AllowOverride None
+#+END_SRC
+
+The institute requires the use of HTTPS on Front, so its default HTTP
+virtual host permanently redirects requests to their corresponding
+HTTPS URLs.
+
+#+NAME: apache-redirect-front
+#+CAPTION: ~apache-redirect-front~
+#+BEGIN_SRC conf
+<VirtualHost *:80>
+        Redirect permanent / https://{{ domain_name }}/
+</VirtualHost>
+#+END_SRC
+
+The complete Apache2 configuration for Front is given below. It is
+installed in =/etc/apache2/sites-available/{{ domain_name }}.conf= (as
+expected by Let's Encrypt's Certbot). It includes the fragments
+described above and adds a ~VirtualHost~ block for the HTTPS service
+(also as expected by Certbot). The ~VirtualHost~ optionally includes
+an additional configuration file to allow other Ansible roles to
+specialize this configuration without disturbing the institute file.
+
+The ~DocumentRoot~ directive is accompanied by a ~Directory~ block
+that authorizes access to the tree, and ensures =.htaccess= files
+within the tree are disabled for speed and security. This and most of
+Front's Apache2 directives (below) are intended for the top level, not
+inside a ~VirtualHost~ block, to apply globally. 
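Before moving on to the complete configuration, the ~UserDir~ mapping
can be made concrete with a small Python sketch. The member name
~dick~ is a stand-in, and Apache's ~DirectoryIndex~ is what actually
supplies the =index.html= default; the sketch folds that in for
illustration.

```python
import os.path

def userdir_path(url_path, userdir="/home/www-users"):
    """Map /~member/rest-of-path to the file Apache would serve under
    the UserDir /home/www-users directive."""
    assert url_path.startswith("/~"), "not a UserDir request"
    member, _, rest = url_path[2:].partition("/")
    return os.path.join(userdir, member, rest or "index.html")

print(userdir_path("/~dick/"))             # /home/www-users/dick/index.html
print(userdir_path("/~dick/pix/cat.jpg"))  # /home/www-users/dick/pix/cat.jpg
```

The served path then passes through the member's symbolic link to
=/home/dick/Public/HTML/=.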
+
+#+NAME: apache-front
+#+CAPTION: ~apache-front~
+#+BEGIN_SRC conf :noweb yes
+ServerName {{ domain_name }}
+ServerAdmin webmaster@{{ domain_name }}
+
+DocumentRoot /home/www
+<Directory /home/www/>
+        Require all granted
+        AllowOverride None
+</Directory>
+
+<<apache-userdir-front>>
+
+ErrorLog ${APACHE_LOG_DIR}/error.log
+CustomLog ${APACHE_LOG_DIR}/access.log combined
+
+<<apache-redirect-front>>
+
+<VirtualHost *:443>
+        SSLEngine on
+        SSLCertificateFile /etc/server.crt
+        SSLCertificateKeyFile /etc/server.key
+        IncludeOptional \
+            /etc/apache2/sites-available/{{ domain_name }}-vhost.conf
+</VirtualHost>
+
+<<apache-ciphers>>
+#+END_SRC
+
+Ansible installs the configuration above in
+e.g. =/etc/apache2/sites-available/small.example.org.conf= and runs
+~a2ensite -q small.example.org~ to enable it.
+
+#+CAPTION: =roles_t/front/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb yes
+
+- name: Install Apache2.
+  become: yes
+  apt: pkg=apache2
+
+- name: Enable Apache2 modules.
+  become: yes
+  apache2_module:
+    name: "{{ item }}"
+  loop: [ ssl, userdir ]
+  notify: Restart Apache2.
+
+- name: Create DocumentRoot.
+  become: yes
+  file:
+    path: /home/www
+    state: directory
+    owner: monkey
+    group: monkey
+
+- name: Configure web site.
+  become: yes
+  copy:
+    content: |
+      <<apache-front>>
+    dest: /etc/apache2/sites-available/{{ domain_name }}.conf
+  notify: Restart Apache2.
+
+- name: Enable web site.
+  become: yes
+  command:
+    cmd: a2ensite -q {{ domain_name }}
+    creates: /etc/apache2/sites-enabled/{{ domain_name }}.conf
+  notify: Restart Apache2.
+
+- name: Enable/Start Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    enabled: yes
+    state: started
+#+END_SRC
+
+#+CAPTION: =roles_t/front/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml
+
+- name: Restart Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    state: restarted
+#+END_SRC
+
+Furthermore, the default web site and its HTTPS version are disabled
+so that they do not interfere with their replacement.
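Under Debian's Apache2 layout, enabling or disabling a site comes down
to managing a symbolic link in =sites-enabled/=. A rough Python
sketch of both operations follows; the ~creates:~ guard above and the
~state: absent~ task below rely on exactly this behavior (the real
~a2ensite~ does additional validation).

```python
import os

def enable_site(name, avail, enabled):
    """Link NAME.conf into sites-enabled unless it is already there,
    returning True when something changed (like Ansible's 'changed'
    status under the creates: guard)."""
    link = os.path.join(enabled, name + ".conf")
    if os.path.lexists(link):
        return False
    os.symlink(os.path.join(avail, name + ".conf"), link)
    return True

def disable_site(name, enabled):
    """Remove the sites-enabled link if present (state: absent)."""
    link = os.path.join(enabled, name + ".conf")
    if os.path.lexists(link):
        os.remove(link)
        return True
    return False
```

Repeated calls make no further changes, matching the idempotence
expected of the Ansible tasks.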
+ +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Disable default vhosts. + become: yes + file: + path: /etc/apache2/sites-enabled/{{ item }} + state: absent + loop: [ 000-default.conf, default-ssl.conf ] + notify: Restart Apache2. +#+END_SRC + +The redundant default =other-vhosts-access-log= configuration option +is also disabled. There are no other virtual hosts, and it stores the +same records as =access.log=. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Disable other-vhosts-access-log option. + become: yes + file: + path: /etc/apache2/conf-enabled/other-vhosts-access-log.conf + state: absent + notify: Restart Apache2. +#+END_SRC + +Finally, the ~UserDir~ is created and populated with symbolic links to +the users' =~/Public/HTML/= directories. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Create UserDir. + become: yes + file: + path: /home/www-users/ + state: directory + +- name: Create UserDir links. + become: yes + file: + path: /home/www-users/{{ item }} + src: /home/{{ item }}/Public/HTML + state: link + force: yes + loop: "{{ usernames }}" + when: members[item].status == 'current' + tags: accounts + +- name: Disable former UserDir links. + become: yes + file: + path: /home/www-users/{{ item }} + state: absent + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts +#+END_SRC + +** Configure OpenVPN + +Front uses OpenVPN to provide the institute's public VPN service. The +configuration is straightforward with one complication. OpenVPN needs +to know how to route to the campus VPN, which is only accessible when +Core is connected. OpenVPN supports these dynamic routes internally +with client-specific configuration files. 
The small institute uses +one of these, =/etc/openvpn/ccd/core=, so that OpenVPN will know to +route packets for the campus networks to Core. + +#+NAME: openvpn-ccd-core +#+CAPTION: ~openvpn-ccd-core~ +#+BEGIN_SRC conf +iroute {{ private_net_and_mask }} +iroute {{ campus_vpn_net_and_mask }} +#+END_SRC + +The VPN clients are /not/ configured to route /all/ of their traffic +through the VPN, so Front pushes routes to the other institute +networks. The clients thus know to route traffic for the private +Ethernet or campus VPN to Front on the public VPN. (If the clients +/were/ configured to route all traffic through the VPN, the one +default route is all that would be needed.) Front itself is in the +same situation, outside the institute networks with a default route +through some ISP, and thus needs the same routes as the clients. + +#+NAME: openvpn-front-routes +#+CAPTION: ~openvpn-front-routes~ +#+BEGIN_SRC conf +route {{ private_net_and_mask }} +route {{ campus_vpn_net_and_mask }} +push "route {{ private_net_and_mask }}" +push "route {{ campus_vpn_net_and_mask }}" +#+END_SRC + +The complete OpenVPN configuration for Front includes a ~server~ +option, the ~client-config-dir~ option, the routes mentioned above, +and the common options discussed in [[*The VPN Services][The VPN Service]]. + +#+NAME: openvpn-front +#+CAPTION: ~openvpn-front~ +#+BEGIN_SRC conf :noweb yes +server {{ public_vpn_net_and_mask }} +client-config-dir /etc/openvpn/ccd +<> +<> +<> +<> +<> +<> +<> +<> +ca /usr/local/share/ca-certificates/{{ domain_name }}.crt +cert server.crt +key server.key +dh dh2048.pem +tls-auth ta.key 0 +#+END_SRC + +Finally, here are the tasks (and handler) required to install and +configure the OpenVPN server on Front. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb yes + +- name: Install OpenVPN. + become: yes + apt: pkg=openvpn + +- name: Enable IP forwarding. 
+ become: yes + sysctl: + name: net.ipv4.ip_forward + value: "1" + state: present + +- name: Create OpenVPN client configuration directory. + become: yes + file: + path: /etc/openvpn/ccd + state: directory + notify: Restart OpenVPN. + +- name: Install OpenVPN client configuration for Core. + become: yes + copy: + content: | + <> + dest: /etc/openvpn/ccd/core + notify: Restart OpenVPN. + +- name: Disable former VPN clients. + become: yes + copy: + content: "disable\n" + dest: /etc/openvpn/ccd/{{ item }} + loop: "{{ revoked }}" + tags: accounts + +- name: Install OpenVPN server certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/openvpn/server.{{ item.typ }} + mode: "{{ item.mode }}" + loop: + - { path: "issued/{{ domain_name }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/{{ domain_name }}", typ: key, + mode: "u=r,g=,o=" } + notify: Restart OpenVPN. + +- name: Install OpenVPN secrets. + become: yes + copy: + src: ../Secret/{{ item.src }} + dest: /etc/openvpn/{{ item.dest }} + mode: u=r,g=,o= + loop: + - { src: front-dh2048.pem, dest: dh2048.pem } + - { src: front-ta.key, dest: ta.key } + notify: Restart OpenVPN. + +- name: Configure OpenVPN. + become: yes + copy: + content: | + <> + dest: /etc/openvpn/server.conf + mode: u=r,g=r,o= + notify: Restart OpenVPN. + +- name: Enable/Start OpenVPN. + become: yes + systemd: + service: openvpn@server + enabled: yes + state: started +#+END_SRC + +#+CAPTION: =roles_t/front/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml + +- name: Restart OpenVPN. + become: yes + systemd: + service: openvpn@server + state: restarted +#+END_SRC + +** Configure Kamailio + +Front uses Kamailio to provide a SIP service on the public VPN so that +members abroad can chat privately. This is a connection-less UDP +service that can be used with or without encryption. The VPN's +encryption can be relied upon or an extra layer can be used when +necessary. 
(Apps cannot tell if a network is secure and often assume +the luser is an idiot, so they insist on doing some encryption.) + +Kamailio listens on all network interfaces by default, but the +institute expects its SIP traffic to be aggregated and encrypted via +the public VPN. To enforce this expectation, Kamailio is instructed +to listen /only/ on Front's public VPN. The private name +~sip.small.private~ resolves to this address for the convenience +of members configuring SIP clients. The server configuration +specifies the actual IP, known here as ~front_private_addr~. + +#+NAME: kamailio +#+CAPTION: ~kamailio~ +#+BEGIN_SRC conf +listen=udp:{{ front_private_addr }}:5060 +#+END_SRC + +The Ansible tasks that install and configure Kamailio follow, but +before Kamailio is configured (thus started), the service is tweaked +by a configuration drop (which must notify Systemd before the service +starts). + +The first step is to install Kamailio. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb yes + +- name: Install Kamailio. + become: yes + apt: pkg=kamailio +#+END_SRC + +Now the configuration drop concerns the network device on which +Kamailio will be listening, the ~tun~ device created by OpenVPN. The +added configuration settings inform Systemd that Kamailio should not +be started before the ~tun~ device has appeared. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml + +- name: Create Kamailio/Systemd configuration drop. + become: yes + file: + path: /etc/systemd/system/kamailio.service.d + state: directory + +- name: Create Kamailio dependence on OpenVPN server. + become: yes + copy: + content: | + [Unit] + Requires=sys-devices-virtual-net-ovpn.device + After=sys-devices-virtual-net-ovpn.device + dest: /etc/systemd/system/kamailio.service.d/depend.conf + notify: Reload Systemd. 
+#+END_SRC + +#+CAPTION: =roles_t/front/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml + +- name: Reload Systemd. + become: yes + command: systemctl daemon-reload +#+END_SRC + +Finally, Kamailio can be configured and started. + +#+CAPTION: =roles_t/front/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb yes + +- name: Configure Kamailio. + become: yes + copy: + content: | + <> + dest: /etc/kamailio/kamailio-local.cfg + notify: Restart Kamailio. + +- name: Enable/Start Kamailio. + become: yes + systemd: + service: kamailio + enabled: yes + state: started +#+END_SRC + +#+CAPTION: =roles_t/front/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml + +- name: Restart Kamailio. + become: yes + systemd: + service: kamailio + state: restarted +#+END_SRC + + +* The Core Role + +The ~core~ role configures many essential campus network services as +well as the institute's private cloud, so the core machine has +horsepower (CPUs and RAM) and large disks and is prepared with a +Debian install and remote access to a privileged, administrator's +account. (For details, see [[*The Core Machine][The Core Machine]].) + +** Include Particulars + +The first task, as in [[*The Front Role][The Front Role]], is to include the institute +particulars and membership roll. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :mkdirp yes +--- +- name: Include public variables. + include_vars: ../public/vars.yml + tags: accounts +- name: Include private variables. + include_vars: ../private/vars.yml + tags: accounts +- name: Include members. + include_vars: "{{ lookup('first_found', membership_rolls) }}" + tags: accounts +#+END_SRC + +** Configure Hostname + +This task ensures that Core's =/etc/hostname= and =/etc/mailname= are +correct. Core accepts email addressed to the institute's public or +private domain names, e.g. 
to ~dick@small.example.org~ as well as +~dick@small.private~. The correct =/etc/mailname= is essential to +proper email delivery. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Configure hostname. + become: yes + copy: + content: "{{ item.name }}\n" + dest: "{{ item.file }}" + loop: + - { name: "core.{{ domain_priv }}", file: /etc/mailname } + - { name: "{{ inventory_hostname }}", file: /etc/hostname } + notify: Update hostname. +#+END_SRC + +#+CAPTION: =roles_t/core/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml :mkdirp yes +--- +- name: Update hostname. + become: yes + command: hostname -F /etc/hostname +#+END_SRC + +** Enable Systemd Resolved + +Core starts the ~systemd-networkd~ and ~systemd-resolved~ service +units on boot. See [[resolved-front][Enable Systemd Resolved]]. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes +<> +#+END_SRC + +** Configure Systemd Resolved + +Core runs the campus name server, so Resolved is configured to use it +(or ~dns.google~), to include the institute's domain in its search +list, and to disable its cache and stub listener. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Configure resolved. + become: yes + lineinfile: + path: /etc/systemd/resolved.conf + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + loop: + - { regexp: '^ *DNS *=', line: "DNS=127.0.0.1" } + - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" } + - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" } + - { regexp: '^ *Cache *=', line: "Cache=no" } + - { regexp: '^ *DNSStubListener *=', line: "DNSStubListener=no" } + notify: + - Reload Systemd. + - Restart Systemd resolved. +#+END_SRC + +#+CAPTION: =roles_t/core/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml + +- name: Reload Systemd. 
+ become: yes + command: systemctl daemon-reload + +- name: Restart Systemd resolved. + become: yes + systemd: + service: systemd-resolved + state: restarted +#+END_SRC + +** Configure Netplan + +Core's network interface is statically configured using Netplan and an +=/etc/netplan/60-core.yaml= file. That file provides Core's address +on the private Ethernet, the campus name server and search domain, and +the default route through Gate to the campus ISP. A second route, +through Core itself to Front, is advertised to other hosts, but is not +created here. It is created by OpenVPN when Core connects to Front's +VPN. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Install netplan. + become: yes + apt: pkg=netplan.io + +- name: Configure netplan. + become: yes + copy: + content: | + network: + renderer: networkd + ethernets: + {{ ansible_default_ipv4.interface }}: + dhcp4: false + addresses: [ {{ core_addr_cidr }} ] + nameservers: + search: [ {{ domain_priv }} ] + addresses: [ {{ core_addr }} ] + gateway4: {{ gate_addr }} + dest: /etc/netplan/60-core.yaml + mode: u=rw,g=r,o= + notify: Apply netplan. +#+END_SRC + +#+CAPTION: =roles_t/core/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml + +- name: Apply netplan. + become: yes + command: netplan apply +#+END_SRC + +** Configure DHCP For the Private Ethernet + +Core speaks DHCP (Dynamic Host Configuration Protocol) using the +Internet Software Consortium's DHCP server. The server assigns unique +network addresses to hosts plugged into the private Ethernet as well +as advertising local net services, especially the local Domain Name +Service. + +The example configuration file, =private/core-dhcpd.conf=, uses +RFC3442's extension to encode a second (non-default) static route. +The default route is through the campus ISP at Gate. A second route +directs campus traffic to the Front VPN through Core. This is just an +example file. 
The administrator adds and removes actual machines from +the actual =private/core-dhcpd.conf= file. + +#+CAPTION: =private/core-dhcpd.conf= +#+BEGIN_SRC conf :tangle private/core-dhcpd.conf :tangle-mode u=rw +option domain-name "small.private"; +option domain-name-servers 192.168.56.1; + +default-lease-time 3600; +max-lease-time 7200; + +ddns-update-style none; + +authoritative; + +log-facility daemon; + +option rfc3442-routes code 121 = array of integer 8; + +subnet 192.168.56.0 netmask 255.255.255.0 { + option subnet-mask 255.255.255.0; + option broadcast-address 192.168.56.255; + option routers 192.168.56.2; + option ntp-servers 192.168.56.1; + option rfc3442-routes 24, 10,177,86, 192,168,56,1, 0, 192,168,56,2; +} + +host core { + hardware ethernet 08:00:27:45:3b:a2; fixed-address 192.168.56.1; } +host gate { + hardware ethernet 08:00:27:e0:79:ab; fixed-address 192.168.56.2; } +host server { + hardware ethernet 08:00:27:f3:41:66; fixed-address 192.168.56.3; } +#+END_SRC + +The following tasks install the ISC's DHCP server and configure it +with the real =private/core-dhcpd.conf= (/not/ the example above). + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Install DHCP server. + become: yes + apt: pkg=isc-dhcp-server + +- name: Configure DHCP interface. + become: yes + lineinfile: + path: /etc/default/isc-dhcp-server + line: INTERFACESv4="{{ ansible_default_ipv4.interface }}" + regexp: ^INTERFACESv4= + notify: Restart DHCP server. + +- name: Configure DHCP subnet. + become: yes + copy: + src: ../private/core-dhcpd.conf + dest: /etc/dhcp/dhcpd.conf + notify: Restart DHCP server. + +- name: Enable/Start DHCP server. + become: yes + systemd: + service: isc-dhcp-server + enabled: yes + state: started +#+END_SRC + +#+CAPTION: =roles_t/core/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml + +- name: Restart DHCP server. 
+  become: yes
+  systemd:
+    service: isc-dhcp-server
+    state: restarted
+#+END_SRC
+
+** Configure BIND9
+
+Core uses BIND9 to provide a private-view name service for the
+institute as described in [[*The Name Service][The Name Service]]. The configuration
+supports reverse name lookups, resolving many private network
+addresses to private domain names.
+
+The following tasks install and configure BIND9 on Core.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes
+
+- name: Install BIND9.
+  become: yes
+  apt: pkg=bind9
+
+- name: Configure BIND9 with named.conf.options.
+  become: yes
+  copy:
+    content: |
+      <<bind-options>>
+    dest: /etc/bind/named.conf.options
+  notify: Reload BIND9.
+
+- name: Configure BIND9 with named.conf.local.
+  become: yes
+  copy:
+    content: |
+      <<bind-local>>
+    dest: /etc/bind/named.conf.local
+  notify: Reload BIND9.
+
+- name: Install BIND9 zonefiles.
+  become: yes
+  copy:
+    src: ../private/db.{{ item }}
+    dest: /etc/bind/db.{{ item }}
+  loop: [ domain, private, public_vpn, campus_vpn ]
+  notify: Reload BIND9.
+
+- name: Enable/Start BIND9.
+  become: yes
+  systemd:
+    service: bind9
+    enabled: yes
+    state: started
+#+END_SRC
+
+#+CAPTION: =roles_t/core/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
+
+- name: Reload BIND9.
+  become: yes
+  systemd:
+    service: bind9
+    state: reloaded
+#+END_SRC
+
+Examples of the necessary zone files, for the "Install BIND9
+zonefiles." task above, are given below. If the campus ISP provided
+one or more IP addresses for stable name servers, those should
+probably be used as forwarders rather than Google. And SecureDNS
+just craps up =/var/log/= and the Systemd journal.
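The ~ipaddr('revdns')~ and ~regex_replace~ pipeline in the
~bind-local~ fragment below maps a /24 network CIDR to its
reverse-lookup zone name. A stdlib-only Python sketch of the same
computation (it assumes /24 networks, as all of the institute's
networks are):

```python
import ipaddress

def reverse_zone(cidr):
    """Reverse-DNS zone for a /24: 192.168.56.0/24 ->
    56.168.192.in-addr.arpa.  The host octet (the leading 0 of the
    full reverse name) is dropped, just as the
    regex_replace('^0\\.','') filter does to the ipaddr('revdns')
    result."""
    net = ipaddress.ip_network(cidr)
    assert net.prefixlen == 24, "sketch handles /24 networks only"
    o1, o2, o3, _zero = str(net.network_address).split(".")
    return "{}.{}.{}.in-addr.arpa.".format(o3, o2, o1)

print(reverse_zone("192.168.56.0/24"))  # 56.168.192.in-addr.arpa.
```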
+ +#+NAME: bind-options +#+CAPTION: ~bind-options~ +#+BEGIN_SRC conf +acl "trusted" { + {{ private_net_cidr }}; + {{ public_vpn_net_cidr }}; + {{ campus_vpn_net_cidr }}; + {{ gate_wifi_net_cidr }}; + localhost; +}; + +options { + directory "/var/cache/bind"; + + forwarders { + 8.8.4.4; + 8.8.8.8; + }; + + allow-query { any; }; + allow-recursion { trusted; }; + allow-query-cache { trusted; }; + + //============================================================ + // If BIND logs error messages about the root key being + // expired, you will need to update your keys. + // See https://www.isc.org/bind-keys + //============================================================ + //dnssec-validation auto; + // If Secure DNS is too much of a headache... + dnssec-enable no; + dnssec-validation no; + + auth-nxdomain no; # conform to RFC1035 + //listen-on-v6 { any; }; + listen-on { {{ core_addr }}; }; +}; +#+END_SRC + +#+NAME: bind-local +#+CAPTION: ~bind-local~ +#+BEGIN_SRC conf +include "/etc/bind/zones.rfc1918"; + +zone "{{ domain_priv }}." { + type master; + file "/etc/bind/db.domain"; +}; + +zone "{{ private_net_cidr | ipaddr('revdns') + | regex_replace('^0\.','') }}" { + type master; + file "/etc/bind/db.private"; +}; + +zone "{{ public_vpn_net_cidr | ipaddr('revdns') + | regex_replace('^0\.','') }}" { + type master; + file "/etc/bind/db.public_vpn"; +}; + +zone "{{ campus_vpn_net_cidr | ipaddr('revdns') + | regex_replace('^0\.','') }}" { + type master; + file "/etc/bind/db.campus_vpn"; +}; +#+END_SRC + +#+CAPTION: =private/db.domain= +#+BEGIN_SRC conf :tangle private/db.domain :tangle-mode u=rw +; +; BIND data file for a small institute's PRIVATE domain names. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +mail IN CNAME core.small.private. +smtp IN CNAME core.small.private. +ns IN CNAME core.small.private. 
+www IN CNAME core.small.private. +test IN CNAME core.small.private. +live IN CNAME core.small.private. +ntp IN CNAME core.small.private. +sip IN A 10.177.86.1 +; +core IN A 192.168.56.1 +gate IN A 192.168.56.2 +#+END_SRC + +#+CAPTION: =private/db.private= +#+BEGIN_SRC conf :tangle private/db.private :tangle-mode u=rw +; +; BIND reverse data file for a small institute's private Ethernet. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +1 IN PTR core.small.private. +2 IN PTR gate.small.private. +#+END_SRC + +#+CAPTION: =private/db.public_vpn= +#+BEGIN_SRC conf :tangle private/db.public_vpn :tangle-mode u=rw +; +; BIND reverse data file for a small institute's public VPN. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +1 IN PTR front-p.small.private. +2 IN PTR core-p.small.private. +#+END_SRC + +#+CAPTION: =private/db.campus_vpn= +#+BEGIN_SRC conf :tangle private/db.campus_vpn :tangle-mode u=rw +; +; BIND reverse data file for a small institute's campus VPN. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +1 IN PTR gate-c.small.private. +#+END_SRC + +** Add Administrator to System Groups + +The administrator often needs to read (directories of) log files owned +by groups ~root~ and ~adm~. Adding the administrator's account to +these groups speeds up debugging. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Add {{ ansible_user }} to system groups. 
+ become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: root,adm +#+END_SRC + +** Configure Monkey + +The small institute runs cron jobs and web scripts that generate +reports and perform checks. The un-privileged jobs are run by a +system account named ~monkey~. One of Monkey's more important jobs on +Core is to run ~rsync~ to update the public web site on Front (as +described in [[apache2-core][*Configure Apache2]]). + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Create monkey. + become: yes + user: + name: monkey + system: yes + append: yes + groups: staff + +- name: Add {{ ansible_user }} to staff groups. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: monkey,staff + +- name: Create /home/monkey/.ssh/. + become: yes + file: + path: /home/monkey/.ssh + state: directory + mode: u=rwx,g=,o= + owner: monkey + group: monkey + +- name: Configure monkey@core. + become: yes + copy: + src: ../Secret/ssh_monkey/{{ item.name }} + dest: /home/monkey/.ssh/{{ item.name }} + mode: "{{ item.mode }}" + owner: monkey + group: monkey + loop: + - { name: config, mode: "u=rw,g=r,o=" } + - { name: id_rsa.pub, mode: "u=rw,g=r,o=r" } + - { name: id_rsa, mode: "u=rw,g=,o=" } + +- name: Configure Monkey SSH known hosts. + become: yes + vars: + pubkeypath: ../Secret/ssh_front/etc/ssh + pubkeyfile: "{{ pubkeypath }}/ssh_host_ecdsa_key.pub" + pubkey: "{{ lookup('file', pubkeyfile) }}" + lineinfile: + regexp: "^{{ domain_name }}" + line: "{{ domain_name }},{{ front_addr }} {{ pubkey }}" + path: /home/monkey/.ssh/known_hosts + create: yes + owner: monkey + group: monkey + mode: "u=rw,g=r,o=" +#+END_SRC + +** Install ~unattended-upgrades~ + +The institute prefers to install security updates as soon as possible. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Install basic software. 
+ become: yes + apt: pkg=unattended-upgrades +#+END_SRC + +** Install Expect + +The ~expect~ program is used by [[* The Institute Commands][The Institute Commands]] to interact +with Nextcloud on the command line. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Install expect. + become: yes + apt: pkg=expect +#+END_SRC + +** Configure User Accounts + +User accounts are created immediately so that backups can begin +restoring as soon as possible. The [[*Account Management][Account Management]] chapter +describes the ~members~ and ~usernames~ variables. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Create user accounts. + become: yes + user: + name: "{{ item }}" + password: "{{ members[item].password_core }}" + update_password: always + home: /home/{{ item }} + loop: "{{ usernames }}" + when: members[item].status == 'current' + tags: accounts + +- name: Disable former users. + become: yes + user: + name: "{{ item }}" + password: "!" + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts + +- name: Revoke former user authorized_keys. + become: yes + file: + path: /home/{{ item }}/.ssh/authorized_keys + state: absent + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts +#+END_SRC + +** Trust Institute Certificate Authority + +Core should recognize the institute's Certificate Authority as +trustworthy, so its certificate is added to Core's set of trusted +CAs. More information about how the small institute manages its +X.509 certificates is available in [[*Keys][Keys]]. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Trust the institute CA. + become: yes + copy: + src: ../Secret/CA/pki/ca.crt + dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt + mode: u=r,g=r,o=r + owner: root + group: root + notify: Update CAs. 
+#+END_SRC + +#+CAPTION: =roles_t/core/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml + +- name: Update CAs. + become: yes + command: update-ca-certificates +#+END_SRC + +** Install Server Certificate + +The servers on Core use the same certificate (and key) to authenticate +themselves to institute clients. They share the =/etc/server.crt= and +=/etc/server.key= files, the latter only readable by ~root~. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Install server certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/server.{{ item.typ }} + mode: "{{ item.mode }}" + loop: + - { path: "issued/core.{{ domain_priv }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/core.{{ domain_priv }}", typ: key, + mode: "u=r,g=,o=" } + notify: + - Restart Postfix. + - Restart Dovecot. + - Restart OpenVPN. +#+END_SRC + +** Install NTP + +Core uses NTP to provide a time synchronization service to the campus. +The default daemon's default configuration is fine. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes + +- name: Install NTP. + become: yes + apt: pkg=ntp +#+END_SRC + +** Configure Postfix on Core + +Core uses Postfix to provide SMTP service to the campus. The default +Debian configuration (for an "Internet Site") is nearly sufficient. +Manual installation may prompt for configuration type and mail name. +The appropriate answers are listed here but will be checked +(corrected) by Ansible tasks below. + +- General type of mail configuration: Internet Site +- System mail name: core.small.private + +As discussed in [[*The Email Service][The Email Service]] above, Core delivers email addressed +to any internal domain name locally, and uses its smarthost Front to +relay the rest. 
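+
+For example (addresses invented, using the internal domain
+~small.private~ from the zone files above), the intended delivery
+decisions look like this:
+
+#+BEGIN_SRC conf
+# Hypothetical routing examples, not tangled into any file:
+#
+#   alice@small.private       -> delivered locally on Core
+#   alice@mail.small.private  -> delivered locally on Core
+#   friend@example.com        -> queued for Front, the smarthost
+#+END_SRC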
Core is reachable only on institute networks, so
+there is little benefit in enabling TLS, but it does need to handle
+larger messages and respect the institute's expectation of shortened
+queue times.
+
+Core relays messages from any institute network.
+
+#+NAME: postfix-core-networks
+#+CAPTION: ~postfix-core-networks~
+#+BEGIN_SRC conf
+- p: mynetworks
+  v: >-
+     {{ private_net_cidr }}
+     {{ public_vpn_net_cidr }}
+     {{ campus_vpn_net_cidr }}
+     127.0.0.0/8
+     [::ffff:127.0.0.0]/104
+     [::1]/128
+#+END_SRC
+
+Core uses Front to relay messages to the Internet.
+
+#+NAME: postfix-core-relayhost
+#+CAPTION: ~postfix-core-relayhost~
+#+BEGIN_SRC conf
+- { p: relayhost, v: "[{{ front_private_addr }}]" }
+#+END_SRC
+
+Core uses a Postfix transport file, =/etc/postfix/transport=, to
+specify local delivery for email addressed to /any/ internal domain
+name.  Note the leading dot at the beginning of each line in the
+file.
+
+#+NAME: postfix-transport
+#+CAPTION: ~postfix-transport~
+#+BEGIN_SRC conf
+.{{ domain_name }} local:$myhostname
+.{{ domain_priv }} local:$myhostname
+#+END_SRC
+
+The complete list of Core's Postfix settings for
+=/etc/postfix/main.cf= follows.
+
+#+NAME: postfix-core
+#+CAPTION: ~postfix-core~
+#+BEGIN_SRC conf :noweb yes
+<>
+- { p: smtpd_tls_security_level, v: none }
+- { p: smtp_tls_security_level, v: none }
+<>
+<>
+<>
+<>
+<>
+- { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" }
+#+END_SRC
+
+The following Ansible tasks install Postfix, modify
+=/etc/postfix/main.cf=, create =/etc/postfix/transport=, and start and
+enable the service.  Whenever =/etc/postfix/transport= is changed, the
+~postmap transport~ command must also be run.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes
+
+- name: Install Postfix.
+  become: yes
+  apt: pkg=postfix
+
+- name: Configure Postfix.
+  become: yes
+  lineinfile:
+    path: /etc/postfix/main.cf
+    regexp: "^ *{{ item.p }} *="
+    line: "{{ item.p }} = {{ item.v }}"
+  loop:
+  <<postfix-core>>
+  - { p: transport_maps, v: "hash:/etc/postfix/transport" }
+  notify: Restart Postfix.
+
+- name: Configure Postfix transport.
+  become: yes
+  copy:
+    content: |
+      <<postfix-transport>>
+    dest: /etc/postfix/transport
+  notify: Postmap transport.
+
+- name: Enable/Start Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    enabled: yes
+    state: started
+#+END_SRC
+
+#+CAPTION: =roles_t/core/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
+
+- name: Restart Postfix.
+  become: yes
+  systemd:
+    service: postfix
+    state: restarted
+
+- name: Postmap transport.
+  become: yes
+  command:
+    chdir: /etc/postfix/
+    cmd: postmap transport
+  notify: Restart Postfix.
+#+END_SRC
+
+** Configure Private Email Aliases
+
+The institute's Core needs to deliver email addressed to institute
+aliases including those advertised on the campus web site, in VPN
+certificates, etc.  System daemons like ~cron(8)~ may also send email
+to e.g. ~monkey~.  The following aliases are installed in
+=/etc/aliases= with a special marker so that additional blocks can be
+installed by more specialized roles.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Install institute email aliases.
+  become: yes
+  blockinfile:
+    block: |
+      webmaster: root
+      admin: root
+      www-data: root
+      monkey: root
+      root: {{ ansible_user }}
+    path: /etc/aliases
+    marker: "# {mark} INSTITUTE MANAGED BLOCK"
+  notify: New aliases.
+#+END_SRC
+
+#+CAPTION: =roles_t/core/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
+
+- name: New aliases.
+  become: yes
+  command: newaliases
+#+END_SRC
+
+** Configure Dovecot IMAPd
+
+Core uses Dovecot's IMAPd to store and serve member emails.  As on
+Front, Core's Dovecot configuration is largely the Debian default with
+POP and IMAP (without TLS) support disabled.
This is a bit "over the
+top" given that Core is only accessed from private (encrypted)
+networks, but helps to ensure privacy even when members accidentally
+attempt connections from outside the private networks.  For more
+information about Core's role in the institute's email services, see
+[[*The Email Service][The Email Service]].
+
+The institute follows the recommendation in the package
+=README.Debian= (in =/usr/share/dovecot-core/=) but replaces the
+default "snake oil" certificate with another, signed by the institute.
+(For more information about the institute's X.509 certificates, see
+[[*Keys][Keys]].)
+
+The following Ansible tasks install Dovecot's IMAP daemon and its
+=/etc/dovecot/local.conf= configuration file, then start the service
+and enable it to start at every reboot.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes
+
+- name: Install Dovecot IMAPd.
+  become: yes
+  apt: pkg=dovecot-imapd
+
+- name: Configure Dovecot IMAPd.
+  become: yes
+  copy:
+    content: |
+      <>
+      ssl_cert = </etc/server.crt
+      ssl_key = </etc/server.key
+    dest: /etc/dovecot/local.conf
+  notify: Restart Dovecot.
+
+- name: Enable/Start Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    enabled: yes
+    state: started
+#+END_SRC
+
+#+CAPTION: =roles_t/core/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
+
+- name: Restart Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    state: restarted
+#+END_SRC
+
+** Configure Fetchmail
+
+Core runs a ~fetchmail~ for each member of the institute.  Individual
+~fetchmail~ jobs can run with the ~--idle~ option and thus can
+download new messages instantly.  The jobs run as Systemd services and
+so are monitored and started at boot.
+
+In the =~/.fetchmailrc= template below, the ~item~ variable is a
+username, and ~members[item]~ is the membership record associated with
+the username.
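+
+For instance (member name and password values invented here purely
+for illustration; the real record format is defined in the [[*Account
+Management][Account Management]] chapter), a membership record that
+qualifies for the Core service might look like this sketch:
+
+#+BEGIN_SRC conf
+members:
+  alice:
+    status: current
+    password_core: "<hashed password>"
+    password_fetchmail: changeme2
+#+END_SRC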
The template is only used when the record has a
+~password_fetchmail~ key providing the member's plain-text password.
+
+#+NAME: fetchmail-config
+#+CAPTION: ~fetchmail-config~
+#+BEGIN_SRC conf
+# Permissions on this file may be no greater than 0600.
+
+set no bouncemail
+set no spambounce
+set no syslog
+#set logfile /home/{{ item }}/.fetchmail.log
+
+poll {{ front_private_addr }} protocol imap timeout 15
+    username {{ item }}
+    password "{{ members[item].password_fetchmail }}" fetchall
+    ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }}
+#+END_SRC
+
+The Systemd service description follows.
+
+#+NAME: fetchmail-service
+#+CAPTION: ~fetchmail-service~
+#+BEGIN_SRC conf
+[Unit]
+Description=Fetchmail --idle task for {{ item }}.
+AssertPathExists=/home/{{ item }}/.fetchmailrc
+Requires=sys-devices-virtual-net-ovpn.device
+After=sys-devices-virtual-net-ovpn.device
+
+[Service]
+User={{ item }}
+ExecStart=/usr/bin/fetchmail --idle
+Restart=always
+RestartSec=1m
+NoNewPrivileges=true
+
+[Install]
+WantedBy=default.target
+#+END_SRC
+
+The following tasks install ~fetchmail~, a =~/.fetchmailrc=, and a
+Systemd =.service= file for each current member, start the services,
+and enable them to start on boot.  To accommodate any member of the
+institute who may wish to run their own ~fetchmail~ job on their
+notebook, only members with a ~password_fetchmail~ key will be
+provided the Core service.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes
+
+- name: Install fetchmail.
+  become: yes
+  apt: pkg=fetchmail
+
+- name: Configure user fetchmails.
+  become: yes
+  copy:
+    content: |
+      <<fetchmail-config>>
+    dest: /home/{{ item }}/.fetchmailrc
+    owner: "{{ item }}"
+    group: "{{ item }}"
+    mode: u=rw,g=,o=
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status == 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+
+- name: Create user fetchmail services.
+  become: yes
+  copy:
+    content: |
+      <<fetchmail-service>>
+    dest: /etc/systemd/system/fetchmail-{{ item }}.service
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status == 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+
+- name: Enable/Start user fetchmail services.
+  become: yes
+  systemd:
+    service: fetchmail-{{ item }}.service
+    enabled: yes
+    state: started
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status == 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+#+END_SRC
+
+Finally, any former member's Fetchmail service on Core should be
+stopped and disabled from restarting at boot, deleted even.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Stop former user fetchmail services.
+  become: yes
+  systemd:
+    service: fetchmail-{{ item }}
+    state: stopped
+    enabled: no
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status != 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+#+END_SRC
+
+If the =.service= file is deleted, then Ansible cannot use the
+~systemd~ module to stop it, nor check that it is still stopped.
+Otherwise the following task might be appropriate.
+
+#+BEGIN_SRC conf
+
+- name: Delete former user fetchmail services.
+  become: yes
+  file:
+    path: /etc/systemd/system/fetchmail-{{ item }}.service
+    state: absent
+  loop: "{{ usernames }}"
+  when:
+  - members[item].status != 'current'
+  - members[item].password_fetchmail is defined
+  tags: accounts
+#+END_SRC
+
+** Configure Apache2 <<apache2-core>>
+
+This is the small institute's campus web server.  It hosts several web
+sites as described in [[*The Web Services][The Web Services]].
+
+| URL            | Doc.Root       | Description             |
+|----------------+----------------+-------------------------|
+| ~http://live/~ | =/WWW/live/=   | The live, public site.  |
+| ~http://test/~ | =/WWW/test/=   | The next public site.   |
+| ~http://www/~  | =/WWW/campus/= | Campus home page.       |
+| ~http://core/~ | =/var/www/=    | whatnot, e.g. Nextcloud |
+
+The live (and test) web site content (eventually) is intended to be
+copied to Front, so the live and test sites are configured as
+identically to Front's as possible.  The directories and files are
+owned by ~monkey~ but are world readable, thus readable by ~www-data~,
+the account running Apache2.
+
+The campus web site is much more permissive.  Its directories are
+owned by ~root~ but writable by the ~staff~ group.  It runs CGI
+scripts found in any of its directories, any executable with a =.cgi=
+file name.  It runs them as ~www-data~ so CGI scripts that need access
+to private data must Set-UID to the appropriate account.
+
+The ~UserDir~ directives for all of Core's web sites are the same, and
+punt the indirection through a =/home/www-users/= directory, simply
+naming a sub-directory in the member's home directory on Core.  The
+~<Directory>~ block is the same as the one used on Front.
+
+#+NAME: apache-userdir-core
+#+CAPTION: ~apache-userdir-core~
+#+BEGIN_SRC conf :noweb yes
+UserDir Public/HTML
+<Directory /home/*/Public/HTML/>
+	<>
+</Directory>
+#+END_SRC
+
+The virtual host for the live web site is given below.  It should look
+like Front's top-level web configuration without the permanent
+redirect or the encryption ciphers and certificates.
+
+#+NAME: apache-live
+#+CAPTION: ~apache-live~
+#+BEGIN_SRC conf :noweb yes
+<VirtualHost *:80>
+	ServerName live
+	ServerAlias live.{{ domain_priv }}
+	ServerAdmin webmaster@core.{{ domain_priv }}
+
+	DocumentRoot /WWW/live
+	<Directory /WWW/live/>
+		Require all granted
+		AllowOverride None
+	</Directory>
+
+	<<apache-userdir-core>>
+
+	ErrorLog ${APACHE_LOG_DIR}/live-error.log
+	CustomLog ${APACHE_LOG_DIR}/live-access.log combined
+
+	IncludeOptional /etc/apache2/sites-available/live-vhost.conf
+</VirtualHost>
+#+END_SRC
+
+The virtual host for the test web site is given below.  It should look
+familiar.
+
+#+NAME: apache-test
+#+CAPTION: ~apache-test~
+#+BEGIN_SRC conf :noweb yes
+<VirtualHost *:80>
+	ServerName test
+	ServerAlias test.{{ domain_priv }}
+	ServerAdmin webmaster@core.{{ domain_priv }}
+
+	DocumentRoot /WWW/test
+	<Directory /WWW/test/>
+		Require all granted
+		AllowOverride None
+	</Directory>
+
+	<<apache-userdir-core>>
+
+	ErrorLog ${APACHE_LOG_DIR}/test-error.log
+	CustomLog ${APACHE_LOG_DIR}/test-access.log combined
+
+	IncludeOptional /etc/apache2/sites-available/test-vhost.conf
+</VirtualHost>
+#+END_SRC
+
+The virtual host for the campus web site is given below.  It too
+should look familiar, but with a notably loose ~Directory~ directive.
+It assumes =/WWW/campus/= is secure, writable /only/ by properly
+trained staffers, monitored by a revision control system, etc.
+
+#+NAME: apache-campus
+#+CAPTION: ~apache-campus~
+#+BEGIN_SRC conf :noweb yes
+<VirtualHost *:80>
+	ServerName www
+	ServerAlias www.{{ domain_priv }}
+	ServerAdmin webmaster@core.{{ domain_priv }}
+
+	DocumentRoot /WWW/campus
+	<Directory /WWW/campus/>
+		Options Indexes FollowSymLinks MultiViews ExecCGI
+		AddHandler cgi-script .cgi
+		Require all granted
+		AllowOverride None
+	</Directory>
+
+	<<apache-userdir-core>>
+
+	ErrorLog ${APACHE_LOG_DIR}/campus-error.log
+	CustomLog ${APACHE_LOG_DIR}/campus-access.log combined
+
+	IncludeOptional /etc/apache2/sites-available/www-vhost.conf
+</VirtualHost>
+#+END_SRC
+
+The tasks below install Apache2 and edit its default configuration.
+The global ~ServerName~ directive must be deleted because it seems to
+interfere with mapping URLs to the correct virtual host.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Install Apache2.
+  become: yes
+  apt: pkg=apache2
+
+- name: Disable Apache2 server name.
+  become: yes
+  lineinfile:
+    path: /etc/apache2/apache2.conf
+    regexp: "([^#]+)ServerName (.*)"
+    backrefs: yes
+    line: "# \\1ServerName \\2"
+  notify: Restart Apache2.
+
+- name: Enable Apache2 modules.
+  become: yes
+  apache2_module:
+    name: "{{ item }}"
+  loop: [ userdir, cgi ]
+  notify: Restart Apache2.
+#+END_SRC
+
+With Apache2 installed there is a =/etc/apache2/sites-available/=
+directory into which the above site configurations can be installed.
+The ~a2ensite~ command enables them.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes
+
+- name: Install live web site.
+  become: yes
+  copy:
+    content: |
+      <<apache-live>>
+    dest: /etc/apache2/sites-available/live.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Install test web site.
+  become: yes
+  copy:
+    content: |
+      <<apache-test>>
+    dest: /etc/apache2/sites-available/test.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Install campus web site.
+  become: yes
+  copy:
+    content: |
+      <<apache-campus>>
+    dest: /etc/apache2/sites-available/www.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Enable web sites.
+  become: yes
+  command:
+    cmd: a2ensite -q {{ item }}
+    creates: /etc/apache2/sites-enabled/{{ item }}.conf
+  loop: [ live, test, www ]
+  notify: Restart Apache2.
+
+- name: Enable/Start Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    enabled: yes
+    state: started
+#+END_SRC
+
+#+CAPTION: =roles_t/core/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
+
+- name: Restart Apache2.
+  become: yes
+  systemd:
+    service: apache2
+    state: restarted
+#+END_SRC
+
+** Configure Website Updates
+
+Monkey on Core runs =/usr/local/sbin/webupdate= every 15 minutes via a
+~cron~ job.  The example script mirrors =/WWW/live/= on Core to
+=/home/www/= on Front.
+
+#+NAME: webupdate
+#+CAPTION: =private/webupdate=
+#+BEGIN_SRC sh
+#!/bin/bash -e
+#
+# DO NOT EDIT.  This file was tangled from institute.org.
+
+cd /WWW/live/
+
+rsync -avz --delete --chmod=g-w \
+    --filter='exclude *~' \
+    --filter='exclude .git*' \
+    ./ {{ domain_name }}:/home/www/
+#+END_SRC
+
+The following tasks install the =webupdate= script from =private/=,
+and create Monkey's ~cron~ job.  An example =webupdate= script is
+provided [[webupdate][here]].
+ +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: "Install Monkey's webupdate script." + become: yes + copy: + src: ../private/webupdate + dest: /usr/local/sbin/webupdate + mode: u=rx,g=rx,o= + owner: monkey + group: staff + +- name: "Create Monkey's webupdate job." + become: yes + cron: + minute: "*/15" + job: "[ -d /WWW/live ] && /usr/local/sbin/webupdate" + name: webupdate + user: monkey +#+END_SRC + +** Configure OpenVPN Connection to Front + +Core connects to Front's public VPN to provide members abroad with a +route to the campus networks. As described in the configuration of +Front's OpenVPN service, Front expects Core to connect using a client +certificate with Common Name ~Core~. + +Core's OpenVPN client configuration uses the Debian default Systemd +service unit to keep Core connected to Front. The configuration +is installed in =/etc/openvpn/front.conf= so the Systemd service is +called ~openvpn@front~. + +#+NAME: openvpn-core +#+CAPTION: ~openvpn-core~ +#+BEGIN_SRC conf :noweb yes +client +dev-type tun +dev ovpn +remote {{ front_addr }} +nobind +<> +<> +remote-cert-tls server +verify-x509-name {{ domain_name }} name +verb 3 +ca /usr/local/share/ca-certificates/{{ domain_name }}.crt +cert client.crt +key client.key +tls-auth ta.key 1 +#+END_SRC + +The tasks that install and configure the OpenVPN client configuration +for Core. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb yes + +- name: Install OpenVPN. + become: yes + apt: pkg=openvpn + +- name: Enable IP forwarding. + become: yes + sysctl: + name: net.ipv4.ip_forward + value: "1" + state: present + +- name: Install OpenVPN secret. + become: yes + copy: + src: ../Secret/front-ta.key + dest: /etc/openvpn/ta.key + mode: u=r,g=,o= + notify: Restart OpenVPN. + +- name: Install OpenVPN client certificate/key. 
+  become: yes
+  copy:
+    src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
+    dest: /etc/openvpn/client.{{ item.typ }}
+    mode: "{{ item.mode }}"
+  loop:
+  - { path: "issued/core", typ: crt, mode: "u=r,g=r,o=r" }
+  - { path: "private/core", typ: key, mode: "u=r,g=,o=" }
+  notify: Restart OpenVPN.
+
+- name: Configure OpenVPN.
+  become: yes
+  copy:
+    content: |
+      <<openvpn-core>>
+    dest: /etc/openvpn/front.conf
+    mode: u=r,g=r,o=
+  notify: Restart OpenVPN.
+
+- name: Enable/Start OpenVPN.
+  become: yes
+  systemd:
+    service: openvpn@front
+    state: started
+    enabled: yes
+#+END_SRC
+
+#+CAPTION: =roles_t/core/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
+
+- name: Restart OpenVPN.
+  become: yes
+  systemd:
+    service: openvpn@front
+    state: restarted
+#+END_SRC
+
+** Configure NAGIOS
+
+Core runs a ~nagios4~ server to monitor "services" on institute hosts.
+The following tasks install the necessary packages and configure the
+server.  The last task installs the monitoring configuration in
+=/etc/nagios4/conf.d/institute.cfg=.  This configuration file,
+=nagios.cfg=, is tangled from code blocks described in subsequent
+subsections.
+
+The institute NAGIOS configuration includes a customized version of
+the ~check_sensors~ plugin named ~inst_sensors~.  Both versions rely
+on the ~sensors~ command (from the ~lm-sensors~ package).  The custom
+version (below) is installed in =/usr/local/sbin/inst_sensors= on both
+Core and Campus (and thus Gate) machines.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Install NAGIOS4.
+  become: yes
+  apt:
+    pkg: [ nagios4, monitoring-plugins-basic, nagios-nrpe-plugin,
+           lm-sensors ]
+
+- name: Install inst_sensors NAGIOS plugin.
+  become: yes
+  copy:
+    src: inst_sensors
+    dest: /usr/local/sbin/inst_sensors
+    mode: u=rwx,g=rx,o=rx
+
+- name: Configure NAGIOS4.
+ become: yes + lineinfile: + path: /etc/nagios4/nagios.cfg + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + backrefs: yes + loop: + - { regexp: "^( *cfg_file *= *localhost.cfg)", line: "# \\1" } + - { regexp: "^( *admin_email *= *)", line: "\\1{{ ansible_user }}@localhost" } + notify: Reload NAGIOS4. + +- name: Configure NAGIOS4 contacts. + become: yes + lineinfile: + path: /etc/nagios4/objects/contacts.cfg + regexp: "^( *email +)" + line: "\\1sysadm@localhost" + backrefs: yes + notify: Reload NAGIOS4. + +- name: Configure NAGIOS4 monitors. + become: yes + template: + src: nagios.cfg + dest: /etc/nagios4/conf.d/institute.cfg + notify: Reload NAGIOS4. + +- name: Enable/Start NAGIOS4. + become: yes + systemd: + service: nagios4 + enabled: yes + state: started +#+END_SRC + +#+CAPTION: =roles_t/core/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml + +- name: Reload NAGIOS4. + become: yes + systemd: + service: nagios4 + state: reloaded +#+END_SRC + +*** Configure NAGIOS Monitors for Core + +The first block in =nagios.cfg= specifies monitors for services on +Core. The monitors are simple, local plugins, and the block is very +similar to the default =objects/localhost.cfg= file. The commands +used here /may/ specify plugin arguments. 
+ +#+CAPTION: =roles_t/core/templates/nagios.cfg= +#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg :mkdirp yes +define host { + use linux-server + host_name core + address 127.0.0.1 +} + +define service { + use local-service + host_name core + service_description Root Partition + check_command check_local_disk!20%!10%!/ +} + +define service { + use local-service + host_name core + service_description Current Users + check_command check_local_users!20!50 +} + +define service { + use local-service + host_name core + service_description Zombie Processes + check_command check_local_procs!5!10!Z +} + +define service { + use local-service + host_name core + service_description Total Processes + check_command check_local_procs!150!200!RSZDT +} + +define service { + use local-service + host_name core + service_description Current Load + check_command check_local_load!5.0,4.0,3.0!10.0,6.0,4.0 +} + +define service { + use local-service + host_name core + service_description Swap Usage + check_command check_local_swap!20%!10% +} + +define service { + use local-service + host_name core + service_description SSH + check_command check_ssh +} + +define service { + use local-service + host_name core + service_description HTTP + check_command check_http +} +#+END_SRC + +*** Custom NAGIOS Monitor ~inst_sensors~ + +The ~check_sensors~ plugin is included in the package +~monitoring-plugins-basic~, but it does not report any readings. The +small institute substitutes a slightly modified version, +~inst_sensors~, that reports core CPU temperatures. + +#+CAPTION: =roles_t/core/files/inst_sensors= +#+BEGIN_SRC sh :tangle roles_t/core/files/inst_sensors +#!/bin/sh + +PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" +export PATH +PROGNAME=`basename $0` +REVISION="2.3.1" + +. 
/usr/lib/nagios/plugins/utils.sh + +print_usage() { + echo "Usage: $PROGNAME" [--ignore-fault] +} + +print_help() { + print_revision $PROGNAME $REVISION + echo "" + print_usage + echo "" + echo "This plugin checks hardware status using the lm_sensors package." + echo "" + support + exit $STATE_OK +} + +brief_data() { + echo "$1" | sed -n -E -e ' + /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H } + $ { x; s/\n//g; p }' +} + +case "$1" in + --help) + print_help + exit $STATE_OK + ;; + -h) + print_help + exit $STATE_OK + ;; + --version) + print_revision $PROGNAME $REVISION + exit $STATE_OK + ;; + -V) + print_revision $PROGNAME $REVISION + exit $STATE_OK + ;; + *) + sensordata=`sensors 2>&1` + status=$? + if test ${status} -eq 127; then + text="SENSORS UNKNOWN - command not found" + text="$text (did you install lmsensors?)" + exit=$STATE_UNKNOWN + elif test ${status} -ne 0; then + text="WARNING - sensors returned state $status" + exit=$STATE_WARNING + elif echo ${sensordata} | egrep ALARM > /dev/null; then + text="SENSOR CRITICAL -`brief_data "${sensordata}"`" + exit=$STATE_CRITICAL + elif echo ${sensordata} | egrep FAULT > /dev/null \ + && test "$1" != "-i" -a "$1" != "--ignore-fault"; then + text="SENSOR UNKNOWN - Sensor reported fault" + exit=$STATE_UNKNOWN + else + text="SENSORS OK -`brief_data "${sensordata}"`" + exit=$STATE_OK + fi + + echo "$text" + if test "$1" = "-v" -o "$1" = "--verbose"; then + echo ${sensordata} + fi + exit $exit + ;; +esac +#+END_SRC + +The following block defines the command and monitors it (locally) on +Core. 
+ +#+CAPTION: =roles_t/core/templates/nagios.cfg= +#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg + +define command { + command_name inst_sensors + command_line /usr/local/sbin/inst_sensors +} + +define service { + use local-service + host_name core + service_description Temperature Sensors + check_command inst_sensors +} +#+END_SRC + +*** Configure NAGIOS Monitors for Remote Hosts + +The following sections contain code blocks specifying monitors for +services on other campus hosts. The NAGIOS server on Core will +contact the NAGIOS Remote Plugin Executor (NRPE) servers on the other +campus hosts and request the results of several commands. For +security reasons, the NRPE servers do not accept command arguments. + +The institute defines several NRPE commands, using a ~inst_~ prefix to +distinguish their names. The commands take no arguments but execute a +plugin with pre-defined arguments appropriate for the institute. The +commands are defined in code blocks interleaved with the blocks that +monitor them. The command blocks are appended to =nrpe.cfg= and the +monitoring blocks to =nagios.cfg=. The =nrpe.cfg= file is installed +on each campus host by the campus role's [[*Configure NRPE][Configure NRPE]] tasks. + +*** Configure NAGIOS Monitors for Gate + +Define the monitored host, ~gate~. Monitor its response to network +pings. + +#+CAPTION: =roles_t/core/templates/nagios.cfg= +#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg + +define host { + use linux-server + host_name gate + address {{ gate_addr }} +} + +define service { + use local-service + host_name gate + service_description PING + check_command check_ping!100.0,20%!500.0,60% +} +#+END_SRC + +For all campus NRPE servers: an ~inst_root~ command to check the free +space on the root partition. 
+ +#+CAPTION: =roles_t/campus/files/nrpe.cfg= +#+BEGIN_SRC conf :tangle roles_t/campus/files/nrpe.cfg :mkdirp yes +command[inst_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p / +#+END_SRC + +Monitor ~inst_root~ on Gate. + +#+CAPTION: =roles_t/core/templates/nagios.cfg= +#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg + +define service { + use generic-service + host_name gate + service_description Root Partition + check_command check_nrpe!inst_root +} +#+END_SRC + +Monitor ~check_load~ on Gate. + +#+CAPTION: =roles_t/core/templates/nagios.cfg= +#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg + +define service { + use generic-service + host_name gate + service_description Current Load + check_command check_nrpe!check_load +} +#+END_SRC + +Monitor ~check_zombie_procs~ and ~check_total_procs~ on Gate. + +#+CAPTION: =roles_t/core/templates/nagios.cfg= +#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg + +define service { + use generic-service + host_name gate + service_description Zombie Processes + check_command check_nrpe!check_zombie_procs +} + +define service { + use generic-service + host_name gate + service_description Total Processes + check_command check_nrpe!check_total_procs +} +#+END_SRC + +For all campus NRPE servers: an ~inst_swap~ command to check the swap +usage. + +#+CAPTION: =roles_t/campus/files/nrpe.cfg= +#+BEGIN_SRC conf :tangle roles_t/campus/files/nrpe.cfg +command[inst_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10% +#+END_SRC + +Monitor ~inst_swap~ on Gate. + +#+CAPTION: =roles_t/core/templates/nagios.cfg= +#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg + +define service { + use generic-service + host_name gate + service_description Swap Usage + check_command check_nrpe!inst_swap +} +#+END_SRC + +Monitor Gate's SSH service. 
+
+#+CAPTION: =roles_t/core/templates/nagios.cfg=
+#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg
+
+define service {
+    use                 generic-service
+    host_name           gate
+    service_description SSH
+    check_command       check_ssh
+}
+#+END_SRC
+
+For all campus NRPE servers: an ~inst_sensors~ command to report core
+CPU temperatures.
+
+#+CAPTION: =roles_t/campus/files/nrpe.cfg=
+#+BEGIN_SRC conf :tangle roles_t/campus/files/nrpe.cfg
+command[inst_sensors]=/usr/local/sbin/inst_sensors
+#+END_SRC
+
+Monitor ~inst_sensors~ on Gate.
+
+#+CAPTION: =roles_t/core/templates/nagios.cfg=
+#+BEGIN_SRC conf :tangle roles_t/core/templates/nagios.cfg
+
+define service {
+    use                 generic-service
+    host_name           gate
+    service_description Temperature Sensors
+    check_command       check_nrpe!inst_sensors
+}
+#+END_SRC
+
+** Configure Backups
+
+The following task installs the =backup= script from =private/=.  An
+example script is provided [[backup][here]].
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Install backup script.
+  become: yes
+  copy:
+    src: ../private/backup
+    dest: /usr/local/sbin/backup
+    mode: u=rx,g=r,o=
+#+END_SRC
+
+** Configure Nextcloud
+
+Core runs Nextcloud to provide a private institute cloud, as described
+in [[*The Cloud Service][The Cloud Service]].  Installing, restoring (from backup), and
+upgrading Nextcloud are manual processes documented in [[https://docs.nextcloud.com/server/latest/admin_manual/maintenance/][The Nextcloud
+Admin Manual, Maintenance]].  However, Ansible can help prepare Core
+before an install or restore, and perform basic security checks
+afterwards.
+
+*** Prepare Core For Nextcloud
+
+The Ansible code contained herein prepares Core to run Nextcloud by
+installing required software packages, configuring the web server, and
+installing a cron job.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Install packages required by Nextcloud.
+  become: yes
+  apt:
+    pkg: [ apache2, mariadb-server, php, php-apcu, php-bcmath,
+           php-curl, php-gd, php-gmp, php-json, php-mysql,
+           php-mbstring, php-intl, php-imagick, php-xml, php-zip,
+           libapache2-mod-php ]
+#+END_SRC
+
+Next, a number of Apache2 modules are enabled.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Enable Apache2 modules for Nextcloud.
+  become: yes
+  apache2_module:
+    name: "{{ item }}"
+  loop: [ rewrite, headers, env, dir, mime ]
+#+END_SRC
+
+The Apache2 configuration is then extended with the following
+=/etc/apache2/sites-available/nextcloud.conf= file, which is installed
+and enabled with ~a2ensite~.  The same configuration lines are given
+in the "Installation on Linux" section of the Nextcloud Server
+Administration Guide (sub-section [[https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html][Apache Web server configuration]]).
+
+#+CAPTION: =roles_t/core/files/nextcloud.conf=
+#+BEGIN_SRC conf :tangle roles_t/core/files/nextcloud.conf
+Alias /nextcloud "/var/www/nextcloud/"
+
+<Directory "/var/www/nextcloud/">
+  Require all granted
+  AllowOverride All
+  Options FollowSymlinks MultiViews
+
+  <IfModule mod_dav.c>
+    Dav off
+  </IfModule>
+</Directory>
+#+END_SRC
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Install Nextcloud web configuration.
+  become: yes
+  copy:
+    src: nextcloud.conf
+    dest: /etc/apache2/sites-available/nextcloud.conf
+  notify: Restart Apache2.
+
+- name: Enable Nextcloud web configuration.
+  become: yes
+  command:
+    cmd: a2ensite nextcloud
+    creates: /etc/apache2/sites-enabled/nextcloud.conf
+  notify: Restart Apache2.
+#+END_SRC
+
+The institute supports "Service discovery" as recommended at the end
+of the "Apache Web server configuration" subsection. 
The prescribed
+rewrite rules are included in a ~Directory~ block for the default
+virtual host's document root.
+
+#+CAPTION: =roles_t/core/files/nextcloud.conf=
+#+BEGIN_SRC conf :tangle roles_t/core/files/nextcloud.conf
+
+<Directory "/var/www/html/">
+  <IfModule mod_rewrite.c>
+    RewriteEngine on
+    # LogLevel alert rewrite:trace3
+    RewriteRule ^\.well-known/carddav \
+                /nextcloud/remote.php/dav [R=301,L]
+    RewriteRule ^\.well-known/caldav \
+                /nextcloud/remote.php/dav [R=301,L]
+    RewriteRule ^\.well-known/webfinger \
+                /nextcloud/index.php/.well-known/webfinger [R=301,L]
+    RewriteRule ^\.well-known/nodeinfo \
+                /nextcloud/index.php/.well-known/nodeinfo [R=301,L]
+  </IfModule>
+</Directory>
+#+END_SRC
+
+The institute also includes additional Apache2 configuration
+recommended by Nextcloud 20's Settings > Administration > Overview web
+page.  The following portion of =nextcloud.conf= sets a
+~Strict-Transport-Security~ header with a ~max-age~ of 6 months.
+
+#+CAPTION: =roles_t/core/files/nextcloud.conf=
+#+BEGIN_SRC conf :tangle roles_t/core/files/nextcloud.conf
+
+<IfModule mod_headers.c>
+  Header always set \
+    Strict-Transport-Security "max-age=15552000; includeSubDomains"
+</IfModule>
+#+END_SRC
+
+Nextcloud's directories and files are typically readable /only/ by the
+web server's user ~www-data~ and the ~www-data~ group.  The
+administrator is added to this group to ease (speed) the debugging of
+cloud FUBARs.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Add {{ ansible_user }} to web server group.
+  become: yes
+  user:
+    name: "{{ ansible_user }}"
+    append: yes
+    groups: www-data
+#+END_SRC
+
+Nextcloud is configured with a cron job to run periodic background
+jobs.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Create Nextcloud cron job.
+  become: yes
+  cron:
+    minute: 11,26,41,56
+    job: >-
+      [ -r /var/www/nextcloud/cron.php ]
+      && /usr/bin/php -f /var/www/nextcloud/cron.php
+    name: Nextcloud
+    user: www-data
+#+END_SRC
+
+Nextcloud's MariaDB database (and user) are created by the following
+tasks.  The user's password is taken from the ~nextcloud_dbpass~
+variable, kept in =private/vars.yml=, and generated e.g. with
+the ~apg -n 1 -x 12 -m 12~ command.
+
+#+CAPTION: =private/vars.yml=
+#+BEGIN_SRC conf :tangle private/vars.yml
+nextcloud_dbpass: ippAgmaygyob
+#+END_SRC
+
+When the ~mysql_db~ Ansible module supports ~check_implicit_admin~,
+the following task can create Nextcloud's DB.
+
+#+BEGIN_SRC conf
+
+- name: Create Nextcloud DB.
+  become: yes
+  mysql_db:
+    check_implicit_admin: yes
+    name: nextcloud
+    collation: utf8mb4_general_ci
+    encoding: utf8mb4
+#+END_SRC
+
+Unfortunately the module does not currently support
+~check_implicit_admin~, and the institute prefers the more secure Unix
+socket authentication method.  Rather than create such a user, the
+~nextcloud~ database and ~nextclouduser~ user are created manually.
+
+The following task would work (~mysql_user~ supports
+~check_implicit_admin~) /but/ the ~nextcloud~ database was not created
+above.  Thus both database and user are created manually, with SQL
+given in the [[Install Nextcloud]] subsection below, before ~occ
+maintenance:install~ can run.
+
+#+BEGIN_SRC conf
+
+- name: Create Nextcloud DB user.
+  become: yes
+  mysql_user:
+    check_implicit_admin: yes
+    name: nextclouduser
+    password: "{{ nextcloud_dbpass }}"
+    update_password: always
+    priv: 'nextcloud.*:all'
+#+END_SRC
+
+Finally, a symbolic link positions =/Nextcloud/nextcloud/= at
+=/var/www/nextcloud/= as expected by the Apache2 configuration above.
+Nextcloud itself should always believe that =/var/www/nextcloud/= is
+its document root.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Link /var/www/nextcloud.
+ become: yes + file: + path: /var/www/nextcloud + src: /Nextcloud/nextcloud + state: link + force: yes + follow: no +#+END_SRC + +*** Configure PHP + +The following tasks set a number of PHP parameters for better +performance, as recommended by Nextcloud. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Set PHP memory_limit for Nextcloud. + become: yes + lineinfile: + path: /etc/php/7.4/apache2/php.ini + regexp: memory_limit *= + line: memory_limit = 512M + +- name: Include PHP parameters for Nextcloud. + become: yes + copy: + content: | + ; priority=20 + apc.enable_cli=1 + opcache.enable=1 + opcache.enable_cli=1 + opcache.interned_strings_buffer=8 + opcache.max_accelerated_files=10000 + opcache.memory_consumption=128 + opcache.save_comments=1 + opcache.revalidate_freq=1 + dest: /etc/php/7.4/mods-available/nextcloud.ini + notify: Restart Apache2. + +- name: Enable Nextcloud PHP modules. + become: yes + command: + cmd: phpenmod {{ item }} + creates: /etc/php/7.4/apache2/conf.d/20-{{ item }}.ini + loop: [ nextcloud, apcu ] + notify: Restart Apache2. +#+END_SRC + +*** Create =/Nextcloud/= + +The Ansible tasks up to this point have completed Core's LAMP stack +and made Core ready to run Nextcloud, but they have /not/ installed +Nextcloud. Nextcloud must be manually installed or restored from a +backup copy. Until then, attempts to access the institute cloud will +just produce errors. + +Installing /or/ restoring Nextcloud starts by creating the +=/Nextcloud/= directory. It may be a separate disk or just a new +directory on an existing partition. The commands involved will vary +greatly depending on circumstances, but the following examples might +be helpful. + +The following command line creates =/Nextcloud/= in the root +partition. This is appropriate for one-partition machines like the +test machines. 
+
+#+BEGIN_SRC sh
+sudo mkdir /Nextcloud
+sudo chmod 775 /Nextcloud
+#+END_SRC
+
+The following command lines create =/Nextcloud/= on an existing,
+large, separate (from the root) partition.  A popular choice for a
+second partition is mounted at =/home/=.
+
+#+BEGIN_SRC sh
+sudo mkdir /home/nextcloud
+sudo chmod 775 /home/nextcloud
+sudo ln -s /home/nextcloud /Nextcloud
+#+END_SRC
+
+These commands create =/Nextcloud/= on an entire (without
+partitioning) second hard drive, =/dev/sdb=.
+
+#+BEGIN_SRC sh
+sudo mkfs -t ext4 /dev/sdb
+sudo mkdir /Nextcloud
+echo "/dev/sdb /Nextcloud ext4 errors=remount-ro 0 2" \
+| sudo tee -a /etc/fstab >/dev/null
+sudo mount /Nextcloud
+#+END_SRC
+
+*** Restore Nextcloud
+
+Restoring Nextcloud in the newly created =/Nextcloud/= presumably
+starts with plugging in the portable backup drive and unlocking it so
+that it is automounted at =/media/sysadm/Backup= per its drive label:
+~Backup~.  Assuming this, the following command restores =/Nextcloud/=
+from the backup (and can be repeated as many times as necessary to get
+a successful, complete copy).
+
+#+BEGIN_SRC sh
+rsync -a /media/sysadm/Backup/Nextcloud/ /Nextcloud/
+#+END_SRC
+
+Mirroring a backup onto a /new/ server may cause UID/GID mismatches.
+All of the files in =/Nextcloud/nextcloud/= must be owned by user
+~www-data~ and group ~www-data~.  If not, the following command will
+make it so.
+
+#+BEGIN_SRC sh
+sudo chown -R www-data:www-data /Nextcloud/nextcloud/
+#+END_SRC
+
+The database is restored with the following commands, which assume the
+last dump was made February 20th 2022 and thus was saved in
+=/Nextcloud/20220220.bak=.  The database will need to be
+created first, as when installing Nextcloud.  The appropriate SQL is
+given in [[*Install Nextcloud][Install Nextcloud]] below.
+
+#+BEGIN_SRC sh
+cd /Nextcloud/
+sudo mysql --defaults-file=dbbackup.cnf nextcloud < 20220220.bak
+cd nextcloud/
+sudo -u www-data php occ maintenance:data-fingerprint
+#+END_SRC
+
+Finally the administrator surfs to ~http://core/nextcloud/~,
+authenticates, and addresses any warnings on the Administration >
+Overview web page.
+
+*** Install Nextcloud
+
+Installing Nextcloud in the newly created =/Nextcloud/= starts with
+downloading and verifying a recent release tarball.  The following
+example command lines unpacked Nextcloud 23 in =nextcloud/= in
+=/Nextcloud/= and set the ownerships and permissions of the new
+directories and files.
+
+#+BEGIN_SRC sh
+cd /Nextcloud/
+tar xjf ~/Downloads/nextcloud-23.0.0.tar.bz2
+sudo chown -R www-data:www-data nextcloud
+sudo find nextcloud -type d -exec chmod 750 {} \;
+sudo find nextcloud -type f -exec chmod 640 {} \;
+#+END_SRC
+
+According to the latest installation instructions in version 24's
+administration guide, after unpacking and setting file permissions,
+the following ~occ~ command takes care of everything.  This command
+currently expects Nextcloud's database and user to exist.  The
+following SQL commands create the database and user (entered at the
+SQL prompt of the ~sudo mysql~ command).  The shell command then runs
+~occ~.
+
+#+BEGIN_SRC sql
+create database nextcloud
+    character set utf8mb4
+    collate utf8mb4_general_ci;
+grant all on nextcloud.*
+    to 'nextclouduser'@'localhost'
+    identified by 'ippAgmaygyob';
+flush privileges;
+#+END_SRC
+
+#+BEGIN_SRC sh
+cd /var/www/nextcloud/
+sudo -u www-data php occ maintenance:install \
+    --data-dir=/var/www/nextcloud/data \
+    --database=mysql --database-name=nextcloud \
+    --database-user=nextclouduser \
+    --database-pass=ippAgmaygyob \
+    --admin-user=sysadm --admin-pass=PASSWORD
+#+END_SRC
+
+The =nextcloud/config/config.php= is created by the above command, but
+gets the ~trusted_domains~ and ~overwrite.cli.url~ settings wrong,
+using ~localhost~ where ~core.small.private~ is wanted.  The
+/only/ way the institute cloud should be accessed is by that name, so
+adjusting the =config.php= file is straightforward.  The settings
+should be corrected by hand for immediate testing, but the
+"Afterwards" tasks (below) will check (or update) these settings when
+Core is next checked (or updated) e.g. with ~./inst config -n core~.
+
+Before calling Nextcloud "configured", the administrator runs ~./inst
+config core~, surfs to ~http://core.small.private/nextcloud/~,
+logs in as ~sysadm~, and follows any reasonable
+instructions (reasonable for a small organization) on the
+Administration > Overview page.
+
+*** Afterwards
+
+Whether Nextcloud was restored or installed, there are a few things
+Ansible can do to bolster reliability and security (aka privacy).
+These Nextcloud "Afterwards" tasks would fail if they executed before
+Nextcloud was installed, so the first "afterwards" task probes for
+=/Nextcloud/nextcloud= and registers the file status with the
+~nextcloud~ variable.  The ~nextcloud.stat.exists~ condition on the
+afterwards tasks causes them to skip rather than fail.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Test for /Nextcloud/nextcloud/.
+ stat: + path: /Nextcloud/nextcloud + register: nextcloud +- debug: + msg: "/Nextcloud/ does not yet exist" + when: not nextcloud.stat.exists +#+END_SRC + +The institute installed Nextcloud with the ~occ maintenance:install~ +command, which produced a simple =nextcloud/config/config.php= with +incorrect ~trusted_domains~ and ~overwrite.cli.url~ settings. These +are fixed during installation, but the institute may also have +restored Nextcloud, including the =config.php= file. (This file is +edited by the web scripts and so is saved/restored in the backup +copy.) The restored settings may be different from those Ansible used +to create the database user. + +The following task checks (or updates) the ~trusted_domains~ and +~dbpassword~ settings, to ensure they are consistent with the Ansible +variables ~domain_priv~ and ~nextcloud_dbpass~. The +~overwrite.cli.url~ setting is fixed by the tasks that implement +Pretty URLs (below). + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Configure Nextcloud trusted domains. + become: yes + replace: + path: /var/www/nextcloud/config/config.php + regexp: "^( *)'trusted_domains' *=>[^)]*[)],$" + replace: |- + \1'trusted_domains' => + \1array ( + \1 0 => 'core.{{ domain_priv }}', + \1), + when: nextcloud.stat.exists + +- name: Configure Nextcloud dbpasswd. + become: yes + lineinfile: + path: /var/www/nextcloud/config/config.php + regexp: "^ *'dbpassword' *=> *'.*', *$" + line: " 'dbpassword' => '{{ nextcloud_dbpass }}'," + insertbefore: "^[)];" + firstmatch: yes + when: nextcloud.stat.exists +#+END_SRC + +The institute uses the ~php-apcu~ package to provide Nextcloud with a +local memory cache. The following ~memcache.local~ Nextcloud setting +enables it. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Configure Nextcloud memcache. 
+ become: yes + lineinfile: + path: /var/www/nextcloud/config/config.php + regexp: "^ *'memcache.local' *=> *'.*', *$" + line: " 'memcache.local' => '\\\\OC\\\\Memcache\\\\APCu'," + insertbefore: "^[)];" + firstmatch: yes + when: nextcloud.stat.exists +#+END_SRC + +The institute implements Pretty URLs as described in the [[https://docs.nextcloud.com/server/22/admin_manual/installation/source_installation.html#pretty-urls][Pretty URLs]] +subsection of the "Installation on Linux" section of the "Installation +and server configuration" chapter in the Nextcloud 22 Server +Administration Guide. Two settings are updated: ~overwrite.cli.url~ +and ~htaccess.RewriteBase~. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Configure Nextcloud for Pretty URLs. + become: yes + lineinfile: + path: /var/www/nextcloud/config/config.php + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + insertbefore: "^[)];" + firstmatch: yes + vars: + url: http://core.{{ domain_priv }}/nextcloud + loop: + - regexp: "^ *'overwrite.cli.url' *=>" + line: " 'overwrite.cli.url' => '{{ url }}'," + - regexp: "^ *'htaccess.RewriteBase' *=>" + line: " 'htaccess.RewriteBase' => '/nextcloud'," + when: nextcloud.stat.exists +#+END_SRC + +The institute sets Nextcloud's ~default_phone_region~ mainly to avoid +a complaint on the Settings > Administration > Overview web page. + +#+CAPTION: =private/vars.yml= +#+BEGIN_SRC conf :tangle private/vars.yml +nextcloud_region: US +#+END_SRC + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Configure Nextcloud phone region. 
+  become: yes
+  lineinfile:
+    path: /var/www/nextcloud/config/config.php
+    regexp: "^ *'default_phone_region' *=> *'.*', *$"
+    line: "  'default_phone_region' => '{{ nextcloud_region }}',"
+    insertbefore: "^[)];"
+    firstmatch: yes
+  when: nextcloud.stat.exists
+#+END_SRC
+
+The next two tasks create =/Nextcloud/dbbackup.cnf= if it does not
+exist, and check the ~password~ setting in it when it does.  It
+should /never/ be world readable (and probably shouldn't be group
+readable).  This file is needed by the institute's ~backup~ command,
+so ~./inst config~ and in particular these next two tasks need to
+run before the next backup.
+
+#+CAPTION: =roles_t/core/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
+
+- name: Create /Nextcloud/dbbackup.cnf.
+  no_log: yes
+  become: yes
+  copy:
+    content: |
+      [mysqldump]
+      no-tablespaces
+      single-transaction
+      host=localhost
+      user=nextclouduser
+      password={{ nextcloud_dbpass }}
+    dest: /Nextcloud/dbbackup.cnf
+    mode: g=,o=
+    force: no
+  when: nextcloud.stat.exists
+
+- name: Update /Nextcloud/dbbackup.cnf password.
+  become: yes
+  lineinfile:
+    path: /Nextcloud/dbbackup.cnf
+    regexp: password=
+    line: password={{ nextcloud_dbpass }}
+  when: nextcloud.stat.exists
+#+END_SRC
+
+
+* The Gate Role
+
+The ~gate~ role configures the services expected at the campus gate: a
+VPN into the campus network via a campus Wi-Fi access point, and
+access to the Internet via NAT.  The gate machine uses
+three network interfaces (see [[*The Gate Machine][The Gate Machine]]) configured with
+persistent names used in its firewall rules.
+
+  - ~lan~ :: The campus Ethernet.
+  - ~wifi~ :: The campus Wi-Fi AP.
+  - ~isp~ :: The campus ISP.
+
+Requiring a VPN to access the campus network from the campus Wi-Fi
+bolsters the native Wi-Fi encryption and frustrates non-RYF ([[https://ryf.fsf.org][Respects
+Your Freedom]]) wireless equipment. 
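+
+For orientation, the following block sketches hypothetical example
+addresses for Gate on these networks, using the ~gate_addr_cidr~ and
+~gate_wifi_addr_cidr~ variables assumed by the Netplan configuration
+below.  The addresses shown are made-up examples for illustration,
+not requirements.
+
+#+BEGIN_SRC conf
+# Hypothetical example addressing for Gate's three interfaces.
+gate_addr_cidr: 192.168.56.2/24       # lan: on the campus Ethernet
+gate_wifi_addr_cidr: 192.168.57.1/24  # wifi: to the campus Wi-Fi AP
+# isp: no static address; the campus ISP assigns one via DHCP
+#+END_SRC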
+ +Gate is also a campus machine, so the more generic ~campus~ role is +applied first, by which Gate gets a campus machine's DNS and Postfix +configurations, etc. + +** Include Particulars + +The following should be familiar boilerplate by now. + +#+CAPTION: =roles_t/gate/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml :mkdirp yes +--- +- name: Include public variables. + include_vars: ../public/vars.yml + tags: accounts +- name: Include private variables. + include_vars: ../private/vars.yml + tags: accounts +- name: Include members. + include_vars: "{{ lookup('first_found', membership_rolls) }}" + tags: accounts +#+END_SRC + +** Configure Netplan <> + +Gate's network interfaces are configured using Netplan and two files. +=/etc/netplan/60-gate.yaml= describes the static interfaces, to the +campus Ethernet and WiFi. =/etc/netplan/60-isp.yaml= is expected to +be revised more frequently as the campus ISP changes. + +Netplan is configured to identify the interfaces by their MAC +addresses, which must be provided in =private/vars.yml=, as in the +example code here. + +#+CAPTION: =private/vars.yml= +#+BEGIN_SRC conf :tangle private/vars.yml +gate_lan_mac: ff:ff:ff:ff:ff:ff +gate_wifi_mac: ff:ff:ff:ff:ff:ff +gate_isp_mac: ff:ff:ff:ff:ff:ff +#+END_SRC + +The following tasks install the two configuration files and apply the +new network plan. + +#+CAPTION: =roles_t/gate/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml + +- name: Install netplan (gate). + become: yes + apt: pkg=netplan.io + +- name: Configure netplan (gate). 
+  become: yes
+  copy:
+    content: |
+      network:
+        ethernets:
+          lan:
+            match:
+              macaddress: {{ gate_lan_mac }}
+            addresses: [ {{ gate_addr_cidr }} ]
+            set-name: lan
+            dhcp4: false
+            nameservers:
+              addresses: [ {{ core_addr }} ]
+              search: [ {{ domain_priv }} ]
+            routes:
+              - to: {{ public_vpn_net_cidr }}
+                via: {{ core_addr }}
+          wifi:
+            match:
+              macaddress: {{ gate_wifi_mac }}
+            addresses: [ {{ gate_wifi_addr_cidr }} ]
+            set-name: wifi
+            dhcp4: false
+    dest: /etc/netplan/60-gate.yaml
+    mode: u=rw,g=r,o=
+  notify: Apply netplan.
+
+- name: Install netplan (ISP).
+  become: yes
+  copy:
+    content: |
+      network:
+        ethernets:
+          isp:
+            match:
+              macaddress: {{ gate_isp_mac }}
+            set-name: isp
+            dhcp4: true
+            dhcp4-overrides:
+              use-dns: false
+    dest: /etc/netplan/60-isp.yaml
+    mode: u=rw,g=r,o=
+    force: no
+  notify: Apply netplan.
+#+END_SRC
+
+#+CAPTION: =roles_t/gate/handlers/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/gate/handlers/main.yml :mkdirp yes
+---
+- name: Apply netplan.
+  become: yes
+  command: netplan apply
+#+END_SRC
+
+Note that the =60-isp.yaml= file is only updated (created) if it does
+not already exist, so that it can be easily modified to debug a new
+campus ISP without interference from Ansible.
+
+** UFW Rules
+
+Gate uses the Uncomplicated FireWall (UFW) to install its packet
+filters at boot-time.  The institute does not use a firewall except to
+configure Network Address Translation (NAT) and forwarding.  Members
+expect to be able to exercise experimental services on random ports.
+The default policy settings in =/etc/default/ufw= are ~ACCEPT~ and
+~ACCEPT~ for input and output, and ~DROP~ for forwarded packets.
+Forwarding was enabled in the kernel previously (when configuring
+OpenVPN) using Ansible's ~sysctl~ module.  It does not need to be set
+in =/etc/ufw/sysctl.conf=.
+
+NAT is enabled per the ~ufw-framework(8)~ manual page, by introducing
+~nat~ table rules in a block at the end of =/etc/ufw/before.rules=.
+They translate packets going to the ISP. 
These can come from the +private Ethernet or campus Wi-Fi. Hosts on the other institute +networks (the two VPNs) should not be routing their Internet traffic +through their VPN. + +#+NAME: ufw-nat +#+CAPTION: ~ufw-nat~ +#+BEGIN_SRC conf +-A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE +-A POSTROUTING -s {{ gate_wifi_net_cidr }} -o isp -j MASQUERADE +#+END_SRC + +Forwarding rules are also needed. The ~nat~ table is a /post/ routing +rule set, so the default routing policy (~DENY~) will drop packets +before NAT can translate them. The following rules are added to allow +packets to be forwarded from the campus Ethernet or Gate-WiFi subnet +to an ISP on the ~isp~ interface, and back (if related to an outgoing +packet). + +#+NAME: ufw-forward-nat +#+CAPTION: ~ufw-forward-nat~ +#+BEGIN_SRC conf +-A FORWARD -i lan -o isp -j ACCEPT +-A FORWARD -i wifi -o isp -j ACCEPT +-A FORWARD -i isp -o lan {{ ACCEPT_RELATED }} +-A FORWARD -i isp -o wifi {{ ACCEPT_RELATED }} +#+END_SRC + +To keep the above code lines short, the template references an +~ACCEPT_RELATED~ variable, provided by the task, whose value includes +the following ~iptables(8)~ rule specification parameters. + +: -m state --state ESTABLISHED,RELATED -j ACCEPT + +If "the standard ~iptables-restore~ syntax" as it is described in the +~ufw-framework~ manual page, allows continuation lines, please let us +know! + +Forwarding rules are also needed to route packets from the campus VPN +(the ~ovpn~ tunnel device) to the institute's LAN and back. The +public VPN on Front will also be included since its packets arrive at +Gate's ~lan~ interface, coming from Core. Thus forwarding between +public and campus VPNs is also allowed. 
+
+#+NAME: ufw-forward-private
+#+CAPTION: ~ufw-forward-private~
+#+BEGIN_SRC conf
+-A FORWARD -i lan -o ovpn -j ACCEPT
+-A FORWARD -i ovpn -o lan -j ACCEPT
+#+END_SRC
+
+Note that there are no forwarding rules to allow packets to pass from
+the ~wifi~ device to the ~lan~ device, just the ~ovpn~ device.
+
+** Install UFW
+
+The following tasks install the Uncomplicated Firewall (UFW), set its
+policy in =/etc/default/ufw=, and install the above rules in
+=/etc/ufw/before.rules=.  When Gate is configured by ~./abbey config
+gate~ as in the example bootstrap, enabling the firewall should not be
+a problem.  But when configuring a new gate with ~./abbey config
+new-gate~, enabling the firewall could break Ansible's current and
+future ssh sessions.  For this reason, Ansible /does not/ enable the
+firewall.  The administrator must log in and execute the following
+command after Gate is configured or new gate is "in position"
+(connected to old Gate's ~wifi~ and ~isp~ networks).
+
+: sudo ufw enable
+
+#+CAPTION: =roles_t/gate/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml :noweb yes
+
+- name: Install UFW.
+  become: yes
+  apt: pkg=ufw
+
+- name: Configure UFW policy.
+  become: yes
+  lineinfile:
+    path: /etc/default/ufw
+    line: "{{ item.line }}"
+    regexp: "{{ item.regexp }}"
+  loop:
+    - { line: "DEFAULT_INPUT_POLICY=\"ACCEPT\"",
+        regexp: "^DEFAULT_INPUT_POLICY=" }
+    - { line: "DEFAULT_OUTPUT_POLICY=\"ACCEPT\"",
+        regexp: "^DEFAULT_OUTPUT_POLICY=" }
+    - { line: "DEFAULT_FORWARD_POLICY=\"DROP\"",
+        regexp: "^DEFAULT_FORWARD_POLICY=" }
+
+- name: Configure UFW rules.
+  become: yes
+  vars:
+    ACCEPT_RELATED: -m state --state ESTABLISHED,RELATED -j ACCEPT
+  blockinfile:
+    path: /etc/ufw/before.rules
+    block: |
+      *nat
+      :POSTROUTING ACCEPT [0:0]
+      <<ufw-nat>>
+      COMMIT
+
+      *filter
+      <<ufw-forward-nat>>
+      <<ufw-forward-private>>
+      COMMIT
+    insertafter: EOF
+#+END_SRC
+
+** Configure DHCP For The Gate-WiFi Ethernet
+
+To accommodate commodity Wi-Fi access points without re-configuring
+them, the institute attempts to look like an up-link, an ISP, e.g. a
+cable modem.  Thus it expects the wireless AP to route non-local
+traffic out its WAN Ethernet port, and to get an IP address for the
+WAN port using DHCP.  Thus Gate runs ISC's DHCP daemon configured to
+listen on one network interface, recognize exactly one client host,
+and provide that one client with an IP address and customary network
+parameters (default route, time server, etc.).
+
+Two Ansible variables are needed to configure Gate's DHCP service,
+specifically the sole subnet host: ~wifi_wan_name~ is any word
+appropriate for identifying the Wi-Fi AP, and ~wifi_wan_mac~ is the
+AP's MAC address.
+
+#+CAPTION: =private/vars.yml=
+#+BEGIN_SRC conf :tangle private/vars.yml
+wifi_wan_mac: 94:83:c4:19:7d:57
+wifi_wan_name: campus-wifi-ap
+#+END_SRC
+
+If Gate is configured with ~./abbey config gate~ and then connected to
+actual networks (i.e. /not/ rebooted), the following command is
+executed.  If a new gate was configured with ~./abbey config new-gate~
+and not rebooted, the following command would also be executed.
+
+: sudo systemctl start isc-dhcp-server
+
+If physically moved or rebooted for some other reason, the above
+command would not be necessary.
+
+Installation and configuration of the DHCP daemon follows.  Note that
+the daemon listens /only/ on the Gate-WiFi network interface.
+
+#+CAPTION: =roles_t/gate/tasks/main.yml=
+#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml
+
+- name: Install DHCP server.
+  become: yes
+  apt: pkg=isc-dhcp-server
+
+- name: Configure DHCP interface.
+ become: yes + lineinfile: + path: /etc/default/isc-dhcp-server + line: INTERFACESv4="wifi" + regexp: ^INTERFACESv4= + notify: Restart DHCP server. + +- name: Configure DHCP for WiFiAP service. + become: yes + copy: + content: | + default-lease-time 3600; + max-lease-time 7200; + ddns-update-style none; + authoritative; + log-facility daemon; + + subnet {{ gate_wifi_net }} netmask {{ gate_wifi_net_mask }} { + option subnet-mask {{ gate_wifi_net_mask }}; + option broadcast-address {{ gate_wifi_broadcast }}; + option routers {{ gate_wifi_addr }}; + } + + host {{ wifi_wan_name }} { + hardware ethernet {{ wifi_wan_mac }}; + fixed-address {{ wifi_wan_addr }}; + } + dest: /etc/dhcp/dhcpd.conf + notify: Restart DHCP server. + +- name: Enable DHCP server. + become: yes + systemd: + service: isc-dhcp-server + enabled: yes +#+END_SRC + +#+CAPTION: =roles_t/gate/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/gate/handlers/main.yml + +- name: Restart DHCP server. + become: yes + systemd: + service: isc-dhcp-server + state: restarted +#+END_SRC + +** Install Server Certificate + +The (OpenVPN) server on Gate uses an institute certificate (and key) +to authenticate itself to its clients. It uses the =/etc/server.crt= +and =/etc/server.key= files just because the other servers (on Core +and Front) do. + +#+CAPTION: =roles_t/gate/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml + +- name: Install server certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/server.{{ item.typ }} + mode: "{{ item.mode }}" + loop: + - { path: "issued/gate.{{ domain_priv }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/gate.{{ domain_priv }}", typ: key, + mode: "u=r,g=,o=" } + notify: Restart OpenVPN. +#+END_SRC + +** Configure OpenVPN + +Gate uses OpenVPN to provide the institute's campus VPN service. 
Its +clients are /not/ configured to route /all/ of their traffic through +the VPN, so Gate pushes routes to the other institute networks. Gate +itself is on the private Ethernet and thereby learns about the route +to Front. + +#+NAME: openvpn-gate-routes +#+CAPTION: ~openvpn-gate-routes~ +#+BEGIN_SRC conf +push "route {{ private_net_and_mask }}" +push "route {{ public_vpn_net_and_mask }}" +#+END_SRC + +The complete OpenVPN configuration for Gate includes a ~server~ +option, the pushed routes mentioned above, and the common options +discussed in [[*The VPN Services][The VPN Services]]. + +#+NAME: openvpn-gate +#+CAPTION: ~openvpn-gate~ +#+BEGIN_SRC conf :noweb yes +server {{ campus_vpn_net_and_mask }} +client-config-dir /etc/openvpn/ccd +<> +<> +<> +<> +<> +<> +<> +<> +ca /usr/local/share/ca-certificates/{{ domain_name }}.crt +cert /etc/server.crt +key /etc/server.key +dh dh2048.pem +tls-auth ta.key 0 +#+END_SRC + +Finally, here are the tasks (and handler) required to install and +configure the OpenVPN server on Gate. + +#+CAPTION: =roles_t/gate/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml :noweb yes + +- name: Install OpenVPN. + become: yes + apt: pkg=openvpn + +- name: Enable IP forwarding. + become: yes + sysctl: + name: net.ipv4.ip_forward + value: "1" + state: present + +- name: Create OpenVPN client configuration directory. + become: yes + file: + path: /etc/openvpn/ccd + state: directory + notify: Restart OpenVPN. + +- name: Disable former VPN clients. + become: yes + copy: + content: "disable\n" + dest: /etc/openvpn/ccd/{{ item }} + loop: "{{ revoked }}" + notify: Restart OpenVPN. + tags: accounts + +- name: Install OpenVPN secrets. + become: yes + copy: + src: ../Secret/{{ item.src }} + dest: /etc/openvpn/{{ item.dest }} + mode: u=r,g=,o= + loop: + - { src: gate-dh2048.pem, dest: dh2048.pem } + - { src: gate-ta.key, dest: ta.key } + notify: Restart OpenVPN. + +- name: Configure OpenVPN. 
+ become: yes + copy: + content: | + <> + dest: /etc/openvpn/server.conf + mode: u=r,g=r,o= + notify: Restart OpenVPN. +#+END_SRC + +#+CAPTION: =roles_t/gate/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/gate/handlers/main.yml + +- name: Restart OpenVPN. + become: yes + systemd: + service: openvpn@server + state: restarted +#+END_SRC + + +* The Campus Role + +The ~campus~ role configures generic campus server machines: network +NAS, DVRs, wireless sensors, etc. These are simple Debian machines +administered remotely via Ansible. They should use the campus name +server, sync with the campus time server, trust the institute +certificate authority, and deliver email addressed to ~root~ to the +system administrator's account on Core. + +Wireless campus devices can get a key to the campus VPN from the +~./inst client campus~ command, but their OpenVPN client must be +configured manually. + +** Include Particulars + +The following should be familiar boilerplate by now. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml :mkdirp yes +--- +- name: Include public variables. + include_vars: ../public/vars.yml +- name: Include private variables. + include_vars: ../private/vars.yml +#+END_SRC + +** Configure Hostname + +Clients should be using the expected host name. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Configure hostname. + become: yes + copy: + content: "{{ item.content }}" + dest: "{{ item.file }}" + loop: + - { file: /etc/hostname, + content: "{{ inventory_hostname }}" } + - { file: /etc/mailname, + content: "{{ inventory_hostname }}.{{ domain_priv }}" } + when: inventory_hostname != ansible_hostname + notify: Update hostname. + +#+END_SRC + +#+CAPTION: =roles_t/campus/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/handlers/main.yml :mkdirp yes +--- +- name: Update hostname. 
+ become: yes + command: hostname -F /etc/hostname +#+END_SRC + +** Enable Systemd Resolved + +Campus machines start the ~systemd-networkd~ and ~systemd-resolved~ +service units on boot. See [[resolved-front][Enable Systemd Resolved]]. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml :noweb yes +<> +#+END_SRC + +** Configure Systemd Resolved + +Campus machines use the campus name server on Core (or ~dns.google~), +and include the institute's private domain in their search lists. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Configure resolved. + become: yes + lineinfile: + path: /etc/systemd/resolved.conf + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + loop: + - { regexp: '^ *DNS *=', line: "DNS={{ core_addr }}" } + - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" } + - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" } + notify: + - Reload Systemd. + - Restart Systemd resolved. +#+END_SRC + +#+CAPTION: =roles_t/campus/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/handlers/main.yml + +- name: Reload Systemd. + become: yes + command: systemctl daemon-reload + +- name: Restart Systemd resolved. + become: yes + systemd: + service: systemd-resolved + state: restarted +#+END_SRC + +** Configure Systemd Timesyncd + +The institute uses a common time reference throughout the campus. +This is essential to campus security, improving the accuracy of log +and file timestamps. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Configure timesyncd. + become: yes + lineinfile: + path: /etc/systemd/timesyncd.conf + line: NTP=ntp.{{ domain_priv }} + notify: Restart systemd-timesyncd. +#+END_SRC + +#+CAPTION: =roles_t/campus/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/handlers/main.yml + +- name: Restart systemd-timesyncd. 
+ become: yes + systemd: + service: systemd-timesyncd + state: restarted +#+END_SRC + +** Add Administrator to System Groups + +The administrator often needs to read (directories of) log files owned +by groups ~root~ and ~adm~. Adding the administrator's account to +these groups speeds up debugging. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Add {{ ansible_user }} to system groups. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: root,adm +#+END_SRC + +** Trust Institute Certificate Authority + +Campus hosts should recognize the institute's Certificate Authority as +trustworthy, so its certificate is added to the host's set of trusted +CAs. (For more information about how the small institute manages its +keys, certificates and passwords, see [[*Keys][Keys]].) + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Trust the institute CA. + become: yes + copy: + src: ../Secret/CA/pki/ca.crt + dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt + mode: u=r,g=r,o=r + owner: root + group: root + notify: Update CAs. +#+END_SRC + +#+CAPTION: =roles_t/campus/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/handlers/main.yml + +- name: Update CAs. + become: yes + command: update-ca-certificates +#+END_SRC + +** Install Unattended Upgrades + +The institute prefers to install security updates as soon as possible. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Install basic software. + become: yes + apt: pkg=unattended-upgrades +#+END_SRC + +** Configure Postfix on Campus + +The Postfix settings used by the campus include message size, queue +times, and the ~relayhost~ Core. The default Debian configuration +(for an "Internet Site") is otherwise sufficient. Manual installation +may prompt for configuration type and mail name. 
The appropriate +answers are listed here but will be checked (corrected) by Ansible +tasks below. + +- General type of mail configuration: Internet Site +- System mail name: new.small.private + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml :noweb yes + +- name: Install Postfix. + become: yes + apt: pkg=postfix + +- name: Configure Postfix. + become: yes + lineinfile: + path: /etc/postfix/main.cf + regexp: "^ *{{ item.p }} *=" + line: "{{ item.p }} = {{ item.v }}" + loop: + <> + <> + <> + <> + - { p: myhostname, + v: "{{ inventory_hostname }}.{{ domain_priv }}" } + - { p: mydestination, + v: "{{ postfix_mydestination | default('') }}" } + - { p: relayhost, v: "[smtp.{{ domain_priv }}]" } + - { p: inet_interfaces, v: loopback-only } + notify: Restart Postfix. + +- name: Enable/Start Postfix. + become: yes + systemd: + service: postfix + enabled: yes + state: started +#+END_SRC + +#+CAPTION: =roles_t/campus/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/handlers/main.yml + +- name: Restart Postfix. + become: yes + systemd: + service: postfix + state: restarted +#+END_SRC + +** Hard-wire Important IP Addresses + +For the edification of programs consulting the =/etc/hosts= file, the +institute's domain name and public IP address are added. The Debian +custom of translating the host name into ~127.0.1.1~ is also followed. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Hard-wire important IP addresses. 
+ become: yes + lineinfile: + path: /etc/hosts + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + insertafter: EOF + vars: + name: "{{ inventory_hostname }}" + loop: + - regexp: "^{{ front_addr }}[ ].*" + line: "{{ front_addr }} {{ domain_name }}" + - regexp: "^127.0.1.1[ ].*" + line: "127.0.1.1 {{ name }}.localdomain {{ name }}" +#+END_SRC + +** Configure NRPE + +Each campus host runs an NRPE (a NAGIOS Remote Plugin Executor) +server so that the NAGIOS4 server on Core can collect statistics. The +NAGIOS service is discussed in the [[*Configure NRPE][Configure NRPE]] section of [[*The Core Role][The Core +Role]]. + +#+CAPTION: =roles_t/campus/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/tasks/main.yml + +- name: Install NRPE. + become: yes + apt: + pkg: [ nagios-nrpe-server, lm-sensors ] + +- name: Install inst_sensors NAGIOS plugin. + become: yes + copy: + src: ../core/files/inst_sensors + dest: /usr/local/sbin/inst_sensors + mode: u=rwx,g=rx,o=rx + +- name: Configure NRPE server. + become: yes + copy: + content: | + allowed_hosts=127.0.0.1,::1,{{ core_addr }} + dest: /etc/nagios/nrpe_local.cfg + notify: Reload NRPE server. + +- name: Configure NRPE commands. + become: yes + copy: + src: nrpe.cfg + dest: /etc/nagios/nrpe.d/institute.cfg + notify: Reload NRPE server. + +- name: Enable/Start NRPE server. + become: yes + systemd: + service: nagios-nrpe-server + enabled: yes + state: started +#+END_SRC + +#+CAPTION: =roles_t/campus/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/campus/handlers/main.yml + +- name: Reload NRPE server. + become: yes + systemd: + service: nagios-nrpe-server + state: reloaded +#+END_SRC + + +* The Ansible Configuration + +The small institute uses Ansible to maintain the configuration of its +servers. The administrator keeps an Ansible inventory in =hosts=, and +runs the playbook =site.yml= to apply the appropriate institutional +role(s) to each host. 
Examples of these files are included here, and +are used to test the roles. The example configuration applies the +institutional roles to VirtualBox machines prepared according to +chapter [[*Testing][Testing]]. + +The /actual/ Ansible configuration is kept in a Git "superproject" +containing replacements for the example =hosts= inventory and +=site.yml= playbook, as well as the =public/= and =private/= +particulars. Thus changes to this document and its tangle are easily +merged with ~git pull --recurse-submodules~ or ~git submodule update~, +while changes to the institute's particulars are committed to a +separate revision history. + +** =ansible.cfg= + +The Ansible configuration file =ansible.cfg= contains just a handful +of settings, some included just to create a test jig as described in +[[*Testing][Testing]]. + +- ~interpreter_python~ is set to suppress a warning from Ansible's + "automatic interpreter discovery" (described [[https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html][here]]). It declares + that Python 3 can be expected on all institute hosts. +- ~vault_password_file~ is set to suppress prompts for the vault + password. The institute keeps its vault password in =Secret/= (as + described in [[*Keys][Keys]]) and thus sets this parameter to + =Secret/vault-password=. +- ~inventory~ is set to avoid specifying it on the command line. +- ~roles_path~ is set to the recently tangled roles files in + =roles_t/= which are preferred in the test configuration. + +#+CAPTION: =ansible.cfg= +#+BEGIN_SRC conf :tangle ansible.cfg +[defaults] +interpreter_python=/usr/bin/python3 +vault_password_file=Secret/vault-password +inventory=hosts +roles_path=roles_t +#+END_SRC + +** =hosts= + +The Ansible inventory file =hosts= describes all of the institute's +machines starting with the main servers Front, Core and Gate. It +provides the IP addresses, administrator account names and passwords +for each machine. 
The IP addresses are all private, campus network +addresses except Front's public IP. The following example host file +describes three test servers named ~front~, ~core~ and ~gate~. + +#+NAME: hosts +#+CAPTION: =hosts= +#+BEGIN_SRC conf :tangle hosts +all: + vars: + ansible_user: sysadm + ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa + hosts: + front: + ansible_host: 192.168.57.3 + ansible_become_password: "{{ become_front }}" + core: + ansible_host: 192.168.56.1 + ansible_become_password: "{{ become_core }}" + gate: + ansible_host: 192.168.56.2 + ansible_become_password: "{{ become_gate }}" + children: + campus: + hosts: + gate: +#+END_SRC + +The values of the ~ansible_become_password~ key are references to +variables defined in =Secret/become.yml=, which is loaded as +"extra" variables by a ~-e~ option on the ~ansible-playbook~ command +line. + +#+CAPTION: =Secret/become.yml= +#+BEGIN_SRC conf :tangle Secret/become.yml :tangle-mode u=rw +become_front: !vault | + $ANSIBLE_VAULT;1.1;AES256 + 3563626131333733666466393166323135383838666338666131336335326 + 3656437663032653333623461633866653462636664623938356563306264 + 3438660a35396630353065383430643039383239623730623861363961373 + 3376663366566326137386566623164313635303532393335363063333632 + 363163316436380a336562323739306231653561613837313435383230313 + 1653565653431356362 +become_core: !vault | + $ANSIBLE_VAULT;1.1;AES256 + 3464643665363937393937633432323039653530326465346238656530303 + 8633066663935316365376438353439333034666366363739616130643261 + 3232380a66356462303034636332356330373465623337393938616161386 + 4653864653934373766656265613636343334356361396537343135393663 + 313562613133380a373334393963623635653264663538656163613433383 + 5353439633234666134 +become_gate: !vault | + $ANSIBLE_VAULT;1.1;AES256 + 3138306434313739626461303736666236336666316535356561343566643 + 6613733353434333962393034613863353330623761623664333632303839 + 3838350a37396462343738303331356134373634306238633030303831623 + 
0636537633139366333373933396637633034383132373064393939363231 + 636264323132370a393135666335303361326330623438613630333638393 + 1303632663738306634 +#+END_SRC + +The passwords are individually encrypted just to make it difficult to +acquire a list of all institute privileged account passwords in one +glance. The multi-line values are generated by the ~ansible-vault +encrypt_string~ command, which uses the =ansible.cfg= file and thus +the =Secret/vault-password= file. + +** =playbooks/site.yml= + +The example =playbooks/site.yml= playbook (below) applies the +appropriate institutional role(s) to the hosts and groups defined in +the example inventory: =hosts=. + +#+CAPTION: =playbooks/site.yml= +#+BEGIN_SRC conf :tangle playbooks/site.yml :mkdirp yes +--- +- name: Configure Front + hosts: front + roles: [ front ] + +- name: Configure Gate + hosts: gate + roles: [ gate ] + +- name: Configure Core + hosts: core + roles: [ core ] + +- name: Configure Campus + hosts: campus + roles: [ campus ] +#+END_SRC + +** =Secret/vault-password= + +As already mentioned, the small institute keeps its Ansible vault +password, a "master secret", on the encrypted partition mounted at +=Secret/= in a file named =vault-password=. The administrator +generated a 16 character pronounceable password with ~gpw 1 16~ and +saved it like so: ~gpw 1 16 >Secret/vault-password~. The following +example password matches the example encryptions above. + +#+NAME: vault-password +#+CAPTION: =Secret/vault-password= +#+BEGIN_SRC conf :tangle Secret/vault-password :tangle-mode u=r :mkdirp yes +alitysortstagess +#+END_SRC + +** Creating A Working Ansible Configuration + +A working Ansible configuration can be "tangled" from this document to +produce the test configuration described in the [[*Testing][Testing]] chapter. The +tangling is done by Emacs's ~org-babel-tangle~ function and has +already been performed with the resulting tangle included in the +distribution with this document. 
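
The tangle can also be regenerated non-interactively.  The following
sketch assumes an Emacs with Org installed and assumes this document
is saved under the name =institute.org=:

#+BEGIN_SRC sh
emacs --batch -l org institute.org -f org-babel-tangle
#+END_SRC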

An institution using the Ansible configuration herein can include this
document and its tangle as a Git submodule, e.g. in =Institute/=, and
thus safely merge updates while keeping public and private particulars
separate, in sibling subdirectories =public/= and =private/=.
The following example commands create a new Git repo in =~/net/=
and add an =Institute/= submodule.

#+BEGIN_SRC sh
cd
mkdir net
cd net
git init
git submodule add git://birchwood-abbey.net/~puck/Institute
git add Institute
#+END_SRC

An institute administrator would then need to add several more files.

- A top-level Ansible configuration file, =ansible.cfg=, would be
  created by copying =Institute/ansible.cfg= and changing the
  ~roles_path~ to ~roles:Institute/roles~.
- A host inventory, =hosts=, would be created, perhaps by copying
  =Institute/hosts= and changing its IP addresses.
- A site playbook, =site.yml=, would be created in a new =playbooks/=
  subdirectory by copying =Institute/playbooks/site.yml= with
  appropriate changes.
- All of the files in =Institute/public/= and =Institute/private/=
  would be copied, with appropriate changes, into new subdirectories
  =public/= and =private/=.
- =~/net/Secret= would be a symbolic link to the (auto-mounted?)
  location of the administrator's encrypted USB drive, as described in
  section [[*Keys][Keys]].

The files in =Institute/roles_t/= were "tangled" from this document
and must be copied to =Institute/roles/= for reasons discussed in the
next section.  This document does not "tangle" /directly/ into
=roles/= to avoid clobbering changes to a working (debugged!)
configuration.

The =playbooks/= directory must include the institutional playbooks,
which find their settings and templates relative to this directory,
e.g. in =../private/vars.yml=.  Running institutional playbooks from
=~/net/playbooks/= means they will use =~/net/private/= rather than
the example =~/net/Institute/private/=.
+ +#+BEGIN_SRC sh +cp -r Institute/roles_t Institute/roles +( cd playbooks; ln -s ../Institute/playbooks/* . ) +#+END_SRC + +Given these preparations, the ~inst~ script should work in the +super-project's directory. + +#+BEGIN_SRC sh +./Institute/inst config -n +#+END_SRC + +** Maintaining A Working Ansible Configuration + +The Ansible roles currently tangle into the =roles_t/= directory to +ensure that debugged Ansible code in =roles/= is not clobbered by code +tangled from this document. Comparing =roles_t/= with =roles/= will +reveal any changes made to =roles/= during debugging that need to be +reconciled with this document /as well as/ any policy changes in this +document that require changes to the current =roles/=. + +When debugging literate programs becomes A Thing, then this document +can tangle directly into =roles/=, and literate debuggers can find +their way back to the code block in this document. + + +* The Institute Commands + +The institute's administrator uses a convenience script to reliably +execute standard procedures. The script is run with the command name +~./inst~ because it is intended to run "in" the same directory as the +Ansible configuration. The Ansible commands it executes are expected +to get their defaults from =./ansible.cfg=. + +** Sub-command Blocks + +The code blocks in this chapter tangle into the =inst= script. Each +block examines the script's command line arguments to determine +whether its sub-command was intended to run, and exits with an +appropriate code when it is done. + +The first code block is the header of the ~./inst~ script. + +#+CAPTION: =inst= +#+BEGIN_SRC perl :tangle inst :tangle-mode u=rwx,g=rx +#!/usr/bin/perl -w +# +# DO NOT EDIT. This file was tangled from an institute.org file. + +use strict; +use IO::File; +#+END_SRC + +** Sanity Check + +The next code block does not implement a sub-command; it implements +part of /all/ ~./inst~ sub-commands. 
It performs a "sanity check" on
the current directory, warning of missing files or directories, and
especially checking that all files in =private/= have appropriate
permissions.  It probes past the =Secret/= mount point (probing for
=Secret/become.yml=) to ensure the volume is mounted.

#+CAPTION: =inst=
#+BEGIN_SRC perl :tangle inst

sub note_missing_file_p ($);
sub note_missing_directory_p ($);

{
    my $missing = 0;
    if (note_missing_file_p "ansible.cfg") { $missing += 1; }
    if (note_missing_file_p "hosts") { $missing += 1; }
    if (note_missing_directory_p "Secret") { $missing += 1; }
    if (note_missing_file_p "Secret/become.yml") { $missing += 1; }
    if (note_missing_directory_p "playbooks") { $missing += 1; }
    if (note_missing_file_p "playbooks/site.yml") { $missing += 1; }
    if (note_missing_directory_p "roles") { $missing += 1; }
    if (note_missing_directory_p "public") { $missing += 1; }
    if (note_missing_directory_p "private") { $missing += 1; }

    for my $filename (glob "private/*") {
        my $perm = (stat $filename)[2];
        if ($perm & 077) {
            print "$filename: not private\n";
        }
    }
    die "$missing missing files\n" if $missing != 0;
}

sub note_missing_file_p ($) {
    my ($filename) = @_;
    if (! -f $filename) {
        print "$filename: missing\n";
        return 1;
    } else {
        return 0;
    }
}

sub note_missing_directory_p ($) {
    my ($dirname) = @_;
    if (! -d $dirname) {
        print "$dirname: missing\n";
        return 1;
    } else {
        return 0;
    }
}
#+END_SRC

** Importing Ansible Variables

To ensure that Ansible and ~./inst~ are simpatico vis-à-vis certain
variable values (esp. private values like network addresses), a
=check-inst-vars.yml= playbook is used to update the Perl syntax file
=private/vars.pl= before ~./inst~ loads it.  The Perl code in =inst=
declares the necessary global variables and =private/vars.pl= sets
them.
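
The generated =private/vars.pl= is just a series of Perl assignments,
e.g. (with hypothetical, illustrative values, not any real
configuration's particulars):

#+BEGIN_SRC perl
$domain_name = "small.example.org";
$domain_priv = "small.private";
$front_addr = "192.168.57.3";
$gate_wifi_addr = "192.168.57.2";
#+END_SRC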
+ +#+CAPTION: =inst= +#+BEGIN_SRC conf :tangle inst + +sub mysystem (@) { + my $line = join (" ", @_); + print "$line\n"; + my $status = system $line; + die "status: $status\nCould not run $line: $!\n" if $status != 0; +} + +mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null"; + +our ($domain_name, $domain_priv, $front_addr, $gate_wifi_addr); +do "./private/vars.pl"; +#+END_SRC + +The playbook that updates =private/vars.pl=: + +#+CAPTION: =playbooks/check-inst-vars.yml= +#+BEGIN_SRC conf :tangle playbooks/check-inst-vars.yml +- hosts: localhost + gather_facts: no + tasks: + - include_vars: ../public/vars.yml + - include_vars: ../private/vars.yml + - copy: + content: | + $domain_name = "{{ domain_name }}"; + $domain_priv = "{{ domain_priv }}"; + $front_addr = "{{ front_addr }}"; + $gate_wifi_addr = "{{ gate_wifi_addr }}"; + dest: ../private/vars.pl + mode: u=rw,g=,o= +#+END_SRC + +** The CA Command + +The next code block implements the ~CA~ sub-command, which creates a +new CA (certificate authority) in =Secret/CA/= as well as SSH and PGP +keys for the administrator, Monkey, Front and ~root~, also in +sub-directories of =Secret/=. The CA is created with the "common +name" provided by the ~full_name~ variable. An example is given +here. + +#+CAPTION: =public/vars.yml= +#+BEGIN_SRC conf :tangle public/vars.yml +full_name: Small Institute LLC +#+END_SRC + +The =Secret/= directory is on an off-line, encrypted volume plugged in +just for the duration of ~./inst~ commands, so =Secret/= is actually a +symbolic link to a volume's automount location. + +: ln -s /media/sysadm/ADE7-F866/ Secret + +The =Secret/CA/= directory is prepared using Easy RSA's ~make-cadir~ +command. The =Secret/CA/vars= file thus created is edited to contain +the appropriate names (or just to set ~EASYRSA_DN~ to ~cn_only~). + +: sudo apt install easy-rsa +: ( cd Secret/; make-cadir CA ) +: ./inst CA + +Running ~./inst CA~ creates the new CA and keys. 
The command prompts +for the Common Name (or several levels of Organizational names) of the +certificate authority. The ~full_name~ is given: ~Small Institute +LLC~. The CA is used to issue certificates for ~front~, ~gate~ and +~core~, which are installed on the servers during the next ~./inst +config~. + +#+CAPTION: =inst= +#+BEGIN_SRC perl :tangle inst + +if (defined $ARGV[0] && $ARGV[0] eq "CA") { + die "usage: $0 CA" if @ARGV != 1; + die "Secret/CA/easyrsa: not an executable\n" + if ! -x "Secret/CA/easyrsa"; + die "Secret/CA/pki/: already exists\n" if -e "Secret/CA/pki"; + mysystem "cd Secret/CA; ./easyrsa init-pki"; + mysystem "cd Secret/CA; ./easyrsa build-ca nopass"; + # Common Name: small.example.org + + my $dom = $domain_name; + my $pvt = $domain_priv; + mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass"; + mysystem "cd Secret/CA; ./easyrsa build-server-full gate.$pvt nopass"; + mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass"; + mysystem "cd Secret/CA; ./easyrsa build-client-full core nopass"; + umask 077; + mysystem "openvpn --genkey --secret Secret/front-ta.key"; + mysystem "openvpn --genkey --secret Secret/gate-ta.key"; + mysystem "openssl dhparam -out Secret/front-dh2048.pem 2048"; + mysystem "openssl dhparam -out Secret/gate-dh2048.pem 2048"; + + mysystem "mkdir --mode=700 Secret/root.gnupg"; + mysystem ("gpg --homedir Secret/root.gnupg", + " --batch --quick-generate-key --passphrase ''", + " root\@core.$pvt"); + mysystem ("gpg --homedir Secret/root.gnupg", + " --export --armor --output root-pub.pem", + " root\@core.$pvt"); + chmod 0440, "root-pub.pem"; + mysystem ("gpg --homedir Secret/root.gnupg", + " --export-secret-key --armor --output root-sec.pem", + " root\@core.$pvt"); + chmod 0400, "root-sec.pem"; + + mysystem "mkdir Secret/ssh_admin"; + chmod 0700, "Secret/ssh_admin"; + mysystem ("ssh-keygen -q -t rsa" + ." 
-C A\\ Small\\ Institute\\ Administrator", + " -N '' -f Secret/ssh_admin/id_rsa"); + + mysystem "mkdir Secret/ssh_monkey"; + chmod 0700, "Secret/ssh_monkey"; + mysystem "echo 'HashKnownHosts no' >Secret/ssh_monkey/config"; + mysystem ("ssh-keygen -q -t rsa -C monkey\@core", + " -N '' -f Secret/ssh_monkey/id_rsa"); + + mysystem "mkdir Secret/ssh_front"; + chmod 0700, "Secret/ssh_front"; + mysystem "ssh-keygen -A -f Secret/ssh_front -C $dom"; + exit; +} +#+END_SRC + +** The Config Command + +The next code block implements the ~config~ sub-command, which +provisions network services by running the =site.yml= playbook +described in [[*=playbooks/site.yml=][=playbooks/site.yml=]]. It recognizes an optional ~-n~ +flag indicating that the service configurations should just be +checked. Given an optional host name, it provisions (or checks) just +the named host. + +Example command lines: +: ./inst config +: ./inst config -n +: ./inst config HOST +: ./inst config -n HOST + +#+CAPTION: =inst= +#+BEGIN_SRC perl :tangle inst + +if (defined $ARGV[0] && $ARGV[0] eq "config") { + die "Secret/CA/easyrsa: not executable\n" + if ! -x "Secret/CA/easyrsa"; + shift; + my $cmd = "ansible-playbook -e \@Secret/become.yml"; + if (defined $ARGV[0] && $ARGV[0] eq "-n") { + shift; + $cmd .= " --check --diff" + } + if (@ARGV == 0) { + ; + } elsif (defined $ARGV[0]) { + my $hosts = lc $ARGV[0]; + die "$hosts: contains illegal characters" + if $hosts !~ /^!?[a-z][-a-z0-9,!]+$/; + $cmd .= " -l $hosts"; + } else { + die "usage: $0 config [-n] [HOSTS]\n"; + } + $cmd .= " playbooks/site.yml"; + mysystem $cmd; + exit; +} +#+END_SRC + +** Account Management + +For general information about members and their Unix accounts, see +[[*Accounts][Accounts]]. The account management sub-commands maintain a mapping +associating member "usernames" (Unix account names) with their +records. The mapping is stored among other things in +=private/members.yml= as the value associated with the key ~members~. 
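
The roll is ordinary YAML, so it can be inspected with ordinary text
tools.  The following sketch lists the usernames of current members;
=members-demo.yml= is a hypothetical stand-in for
=private/members.yml= (which, being private, should never be copied
out of =private/=).

#+BEGIN_SRC sh
cat >members-demo.yml <<'EOF'
members:
  dick:
    status: current
  jane:
    status: former
EOF
current=$(awk '/^  [a-z0-9]+:$/ { user = $1 }
               /^    status: current$/ { sub(":", "", user); print user }' \
              members-demo.yml)
echo "$current"
rm members-demo.yml
#+END_SRC

Listing ~current~ here prints just ~dick~; ~jane~ is a former member.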
+ +A new member's record in the ~members~ mapping will have the ~status~ +key value ~current~. That key gets value ~former~ when the member +leaves.[fn:3] Access by former members is revoked by invalidating the +Unix account passwords, removing any authorized SSH keys from Front +and Core, and disabling their VPN certificates. + +The example file (below) contains a membership roll with one +membership record, for an account named ~dick~, which was issued +client certificates for devices named ~dick-note~, ~dick-phone~ and +~dick-razr~. ~dick-phone~ appears to be lost because its certificate +was revoked. Dick's membership record includes a vault-encrypted +password (for Fetchmail) and the two password hashes installed on +Front and Core. (The example hashes are truncated versions.) + +#+CAPTION: =private/members.yml= +#+BEGIN_SRC conf +--- +members: + dick: + status: current + clients: + - dick-note + - dick-phone + - dick-razr + password_front: + $6$17h49U76$c7TsH6eMVmoKElNANJU1F1LrRrqzYVDreNu.QarpCoSt9u0gTHgiQ + password_core: + $6$E9se3BoSilq$T.W8IUb/uSlhrVEWUQsAVBweiWB4xb3ebQ0tguVxJaeUkqzVmZ + password_fetchmail: !vault | + $ANSIBLE_VAULT;1.1;AES256 + 38323138396431323564366136343431346562633965323864633938613363336 + 4333334333966363136613264636365383031376466393432623039653230390a + 39366232633563646361616632346238333863376335633639383162356661326 + 4363936393530633631616630653032343465383032623734653461323331310a + 6535633263656434393030333032343533626235653332626330666166613833 +usernames: +- dick +revoked: +- dick-phone +#+END_SRC + +The test campus starts with the empty membership roll found in +=private/members-empty.yml= and saved in =private/members.yml= +(which is /not/ tangled from this document, thus /not/ over-written +during testing). If =members.yml= is not found, =members-empty.yml= +is used instead. 
+ +#+CAPTION: =private/members-empty.yml= +#+BEGIN_SRC conf :tangle private/members-empty.yml :tangle-mode u=rw +--- +members: +usernames: [] +revoked: [] +#+END_SRC + +Both locations go on the ~membership_rolls~ variable used by the +~include_vars~ tasks. + +#+CAPTION: =private/vars.yml= +#+BEGIN_SRC conf :tangle private/vars.yml +membership_rolls: +- "../private/members.yml" +- "../private/members-empty.yml" +#+END_SRC + +Using the standard Perl library ~YAML::XS~, the subroutine for +reading the membership roll is simple, returning the top-level hash +read from the file. The dump subroutine is another story (below). + +#+CAPTION: =inst= +#+BEGIN_SRC perl :tangle inst + +use YAML::XS qw(LoadFile DumpFile); + +sub read_members_yaml () { + my $path; + $path = "private/members.yml"; + if (-e $path) { return LoadFile ($path); } + $path = "private/members-empty.yml"; + if (-e $path) { return LoadFile ($path); } + die "private/members.yml: not found\n"; +} + +sub write_members_yaml ($) { + my ($yaml) = @_; + my $old_umask = umask 077; + my $path = "private/members.yml"; + print "$path: "; STDOUT->flush; + eval { #DumpFile ("$path.tmp", $yaml); + dump_members_yaml ("$path.tmp", $yaml); + rename ("$path.tmp", $path) + or die "Could not rename $path.tmp: $!\n"; }; + my $err = $@; + umask $old_umask; + if ($err) { + print "ERROR\n"; + } else { + print "updated\n"; + } + die $err if $err; +} + +sub dump_members_yaml ($$) { + my ($pathname, $yaml) = @_; + my $O = new IO::File; + open ($O, ">$pathname") or die "Could not open $pathname: $!\n"; + print $O "---\n"; + if (keys %{$yaml->{"members"}}) { + print $O "members:\n"; + for my $user (sort keys %{$yaml->{"members"}}) { + print_member ($O, $yaml->{"members"}->{$user}); + } + print $O "usernames:\n"; + for my $user (sort keys %{$yaml->{"members"}}) { + print $O "- $user\n"; + } + } else { + print $O "members:\n"; + print $O "usernames: []\n"; + } + if (@{$yaml->{"revoked"}}) { + print $O "revoked:\n"; + for my $name 
(@{$yaml->{"revoked"}}) {
            print $O "- $name\n";
        }
    } else {
        print $O "revoked: []\n";
    }
    close $O or die "Could not close $pathname: $!\n";
}
#+END_SRC

The first implementation, using ~YAML::Tiny~, balked at the ~!vault~
data type.  The current version, using ~YAML::XS~ (Simonov's
~libyaml~), does not support local data types either, but it does not
abort; it just produces a multi-line string.  Luckily the structure of
=members.yml= is relatively simple and fixed, so a purpose-built
printer can add back the ~!vault~ data types at the appropriate
points.  ~YAML::XS~ thus serves only as the (somewhat borked) parser.
Also luckily, the YAML produced by the purpose-built printer makes the
resulting membership roll easier to read, with the ~username~ and
~status~ at the top of each record.

#+CAPTION: =inst=
#+BEGIN_SRC perl :tangle inst

sub print_member ($$) {
    my ($out, $member) = @_;
    print $out "  ", $member->{"username"}, ":\n";
    print $out "    username: ", $member->{"username"}, "\n";
    print $out "    status: ", $member->{"status"}, "\n";
    if (@{$member->{"clients"} || []}) {
        print $out "    clients:\n";
        for my $name (@{$member->{"clients"} || []}) {
            print $out "    - ", $name, "\n";
        }
    } else {
        print $out "    clients: []\n";
    }
    print $out "    password_front: ", $member->{"password_front"}, "\n";
    print $out "    password_core: ", $member->{"password_core"}, "\n";
    if (defined $member->{"password_fetchmail"}) {
        print $out "    password_fetchmail: !vault |\n";
        for my $line (split /\n/, $member->{"password_fetchmail"}) {
            print $out "      $line\n";
        }
    }
    my @standard_keys = ( "username", "status", "clients",
                          "password_front", "password_core",
                          "password_fetchmail" );
    my @other_keys = (sort
                      grep { my $k = $_;
                             ! grep { $_ eq $k } @standard_keys }
                      keys %$member);
    for my $key (@other_keys) {
        print $out "    $key: ", $member->{$key}, "\n";
    }
}
#+END_SRC

** The New Command

The next code block implements the ~new~ sub-command.
It adds a new +member to the institute's membership roll. It runs an Ansible +playbook to create the member's Nextcloud user, updates +=private/members.yml=, and runs the =site.yml= playbook. The site +playbook (re)creates the member's accounts on Core and Front, +(re)installs the member's personal homepage on Front, and the member's +Fetchmail service on Core. All services are configured with an +initial, generated password. + +#+CAPTION: =inst= +#+BEGIN_SRC perl :tangle inst + +sub valid_username (@); +sub shell_escape ($); +sub strip_vault ($); + +if (defined $ARGV[0] && $ARGV[0] eq "new") { + my $user = valid_username (@ARGV); + my $yaml = read_members_yaml (); + my $members = $yaml->{"members"}; + die "$user: already exists\n" if defined $members->{$user}; + + my $pass = `apg -n 1 -x 12 -m 12`; chomp $pass; + print "Initial password: $pass\n"; + my $epass = shell_escape $pass; + my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front; + my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core; + my $vault = strip_vault `ansible-vault encrypt_string "$epass"`; + mysystem ("ansible-playbook -e \@Secret/become.yml", + " playbooks/nextcloud-new.yml", + " -e user=$user", " -e pass=\"$epass\""); + $members->{$user} = { "username" => $user, + "status" => "current", + "password_front" => $front, + "password_core" => $core, + "password_fetchmail" => $vault }; + write_members_yaml + { "members" => $members, + "revoked" => $yaml->{"revoked"} }; + mysystem ("ansible-playbook -e \@Secret/become.yml", + " -t accounts -l core,front playbooks/site.yml"); + exit; +} + +sub valid_username (@) { + my $sub = $_[0]; + die "usage: $0 $sub USER\n" + if @_ != 2; + my $username = lc $_[1]; + die "$username: does not begin with an alphabetic character\n" + if $username !~ /^[a-z]/; + die "$username: contains non-alphanumeric character(s)\n" + if $username !~ /^[a-z0-9]+$/; + return $username; +} + +sub shell_escape ($) { + my ($string) = @_; + my $result = "$string"; + $result =~ 
s/([\$`"\\ ])/\\$1/g;
  return ($result);
}

sub strip_vault ($) {
  my ($string) = @_;
  die "Unexpected result from ansible-vault: $string\n"
    if $string !~ /^ *!vault [|]/;
  my @lines = split /^ */m, $string;
  return (join "", @lines[1..$#lines]);
}
#+END_SRC

#+CAPTION: =playbooks/nextcloud-new.yml=
#+BEGIN_SRC conf :tangle playbooks/nextcloud-new.yml
- hosts: core
  no_log: yes
  tasks:
  - name: Run occ user:add.
    shell: |
      spawn sudo -u www-data /usr/bin/php occ user:add {{ user }}
      expect {
        "Enter password:" {}
        timeout { exit 1 }
      }
      send "{{ pass|quote }}\n";
      expect {
        "Confirm password:" {}
        timeout { exit 2 }
      }
      send "{{ pass|quote }}\n";
      expect {
        "The user \"{{ user }}\" was created successfully" {}
        timeout { exit 3 }
      }
    args:
      chdir: /var/www/nextcloud/
      executable: /usr/bin/expect
#+END_SRC

** The Pass Command

The institute's ~passwd~ command on Core securely emails ~root~ with a
member's desired password (base64 encoded and PGP encrypted, never in
the clear).  The command may update the servers immediately or let the
administrator do that using the ~./inst pass~ command.  In either
case, the administrator needs to update the membership roll, and so
receives an encrypted email, which gets piped into ~./inst pass~.
This command decrypts the message, parses the (YAML) content, updates
=private/members.yml=, and runs the full Ansible =site.yml= playbook
to update the servers.  If all goes well, a message is sent to
~member@core~.

*** Less Aggressive passwd.

The next code block implements the less aggressive ~passwd~ command.
It is less aggressive because it just emails ~root~.  It does not
update the servers, so it does not need an SSH key and password to
~root~ (any privileged account) on Front, nor a set-UID ~root~ script
(nor equivalent) on Core.  It /is/ a set-UID ~shadow~ script so it can
read =/etc/shadow=.
The member will need to wait for confirmation
from the administrator, but /all/ keys to ~root~ at the institute stay
in =Secret/=.

#+CAPTION: =roles_t/core/templates/passwd=
#+BEGIN_SRC perl :tangle roles_t/core/templates/passwd :mkdirp yes
#!/bin/perl -wT

use strict;
use IO::File;

$ENV{PATH} = "/usr/sbin:/usr/bin:/bin";

my ($username) = getpwuid $<;
if ($username ne "{{ ansible_user }}") {
  { exec ("sudo", "-u", "{{ ansible_user }}",
          "/usr/local/bin/passwd", $username) };
  print STDERR "Could not exec sudo: $!\n";
  exit 1;
}

$username = $ARGV[0];
my $passwd;
{
  my $SHADOW = new IO::File;
  open $SHADOW, "</etc/shadow" or die "Could not read /etc/shadow: $!\n";
  my ($line) = grep /^$username:/, <$SHADOW>;
  close $SHADOW;
  die "No /etc/shadow record found: $username\n" if ! defined $line;
  (undef, $passwd) = split ":", $line;
}

system "stty -echo";
END { system "stty echo"; }

print "Current password: ";
my $pass = <STDIN>; chomp $pass;
print "\n";
my $hash = crypt($pass, $passwd);
die "Sorry...\n" if $hash ne $passwd;

print "New password: ";
$pass = <STDIN>; chomp($pass);
die "Passwords must be at least 10 characters long.\n"
  if length $pass < 10;
print "\nRetype password: ";
my $pass2 = <STDIN>; chomp($pass2);
print "\n";
die "New passwords do not match!\n"
  if $pass2 ne $pass;

use MIME::Base64;
my $epass = encode_base64 $pass;

use File::Temp qw(tempfile);
my ($TMP, $tmp) = tempfile;
close $TMP;

my $O = new IO::File;
open $O, ("| gpg --encrypt --armor"
          ." --trust-model always --recipient root\@core"
          ." > $tmp") or die "Error running gpg > $tmp: $!\n";
print $O <<EOD;
username: $username
password: $epass
EOD
close $O or die "Error closing pipe to gpg: $!\n";

use File::Copy;
open ($O, "| sendmail root") or die "Error running sendmail: $!\n";
print $O "To: root
Subject: New password.

";
$O->flush;
copy $tmp, $O;
#print $O `cat $tmp`;
close $O or die "Error closing pipe to sendmail: $!\n";

print "
Your request was sent to Root.  PLEASE WAIT for email confirmation
that the change was completed.\n";
exit;
#+END_SRC

*** Less Aggressive Pass Command

The following code block implements the ~./inst pass~ command, used by
the administrator to update =private/members.yml= before running
=playbooks/site.yml= and emailing the concerned member.

#+CAPTION: =inst=
#+BEGIN_SRC perl :tangle inst

use MIME::Base64;

if (defined $ARGV[0] && $ARGV[0] eq "pass") {
  my $I = new IO::File;
  open $I, "gpg --homedir Secret/root.gnupg --quiet --decrypt |"
    or die "Error running gpg: $!\n";
  my $msg_yaml = LoadFile ($I);
  close $I or die "Error closing pipe from gpg: $!\n";

  my $user = $msg_yaml->{"username"};
  die "Could not find a username in the decrypted input.\n"
    if ! defined $user;
  my $pass64 = $msg_yaml->{"password"};
  die "Could not find a password in the decrypted input.\n"
    if ! defined $pass64;

  my $mem_yaml = read_members_yaml ();
  my $members = $mem_yaml->{"members"};
  my $member = $members->{$user};
  die "No such member: $user\n" if ! defined $member;

  my $pass = decode_base64 $pass64;
  my $epass = shell_escape $pass;
  my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front;
  my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core;
  my $vault = strip_vault `ansible-vault encrypt_string "$epass"`;
  $member->{"password_front"} = $front;
  $member->{"password_core"} = $core;
  $member->{"password_fetchmail"} = $vault;

  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "playbooks/nextcloud-pass.yml",
            "-e user=$user", "-e \"pass=$epass\"");
  write_members_yaml $mem_yaml;
  mysystem ("ansible-playbook -e \@Secret/become.yml",
            "-t accounts playbooks/site.yml");
  my $O = new IO::File;
  open ($O, "| sendmail $user\@$domain_priv")
    or die "Could not pipe to sendmail: $!\n";
  print $O "From: <root>
To: <$user>
Subject: Password change.

Your new password has been distributed to the servers.

As always: please email root with any questions or concerns.\n";
  close $O or die "pipe to sendmail failed: $!\n";
  exit;
}
#+END_SRC

And here is the playbook that interacts with Nextcloud's ~occ
user:resetpassword~ command using ~expect(1)~.
+ +#+CAPTION: =playbooks/nextcloud-pass.yml= +#+BEGIN_SRC conf :tangle playbooks/nextcloud-pass.yml +- hosts: core + no_log: yes + tasks: + - name: Run occ user:resetpassword. + shell: | + spawn sudo -u www-data \ + /usr/bin/php occ user:resetpassword {{ user }} + expect { + "Enter a new password:" {} + timeout { exit 1 } + } + send "{{ pass|quote }}\n" + expect { + "Confirm the new password:" {} + timeout { exit 2 } + } + send "{{ pass|quote }}\n" + expect { + "Successfully reset password for {{ user }}" {} + "Please choose a different password." { exit 3 } + timeout { exit 4 } + } + args: + chdir: /var/www/nextcloud/ + executable: /usr/bin/expect +#+END_SRC + +*** Installing the Less Aggressive passwd + +The following Ansible tasks install the less aggressive ~passwd~ +script in =/usr/local/bin/passwd= on Core, and a ~sudo~ policy file +declaring that any user can run the script as the admin user. The +admin user is added to the shadow group so that the script can read +=/etc/shadow= and verify a member's current password. The public PGP +key for ~root@core~ is also imported into the admin user's GnuPG +configuration so that the email to root can be encrypted. + +#+CAPTION: =roles_t/core/tasks/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml + +- name: Install institute passwd command. + become: yes + template: + src: passwd + dest: /usr/local/bin/passwd + mode: u=rwx,g=rx,o=rx + +- name: Authorize institute passwd command as {{ ansible_user }}. + become: yes + copy: + content: | + ALL ALL=({{ ansible_user }}) NOPASSWD: /usr/local/bin/passwd + dest: /etc/sudoers.d/01passwd + mode: u=r,g=r,o= + owner: root + group: root + +- name: Authorize {{ ansible_user }} to read /etc/shadow. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: shadow + +- name: Authorize {{ ansible_user }} to run /usr/bin/php as www-data. 
+ become: yes + copy: + content: | + {{ ansible_user }} ALL=(www-data) NOPASSWD: /usr/bin/php + dest: /etc/sudoers.d/01www-data-php + mode: u=r,g=r,o= + owner: root + group: root + +- name: Install root PGP key file. + become: no + copy: + src: ../Secret/root-pub.pem + dest: ~/.gnupg-root-pub.pem + mode: u=r,g=r,o=r + notify: Import root PGP key. +#+END_SRC + +#+CAPTION: =roles_t/core/handlers/main.yml= +#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml + +- name: Import root PGP key. + become: no + command: gpg --import ~/.gnupg-root-pub.pem +#+END_SRC + +** The Old Command + +The ~old~ command disables a member's accounts and clients. + +#+CAPTION: =inst= +#+BEGIN_SRC perl :tangle inst + +if (defined $ARGV[0] && $ARGV[0] eq "old") { + my $user = valid_username (@ARGV); + my $yaml = read_members_yaml (); + my $members = $yaml->{"members"}; + my $member = $members->{$user}; + die "$user: does not exist\n" if ! defined $member; + + mysystem ("ansible-playbook -e \@Secret/become.yml", + "playbooks/nextcloud-old.yml -e user=$user"); + $member->{"status"} = "former"; + write_members_yaml { "members" => $members, + "revoked" => [ sort @{$member->{"clients"}}, + @{$yaml->{"revoked"}} ] }; + mysystem ("ansible-playbook -e \@Secret/become.yml", + "-t accounts playbooks/site.yml"); + exit; +} +#+END_SRC + +#+CAPTION: =playbooks/nextcloud-old.yml= +#+BEGIN_SRC conf :tangle playbooks/nextcloud-old.yml +- hosts: core + tasks: + - name: Run occ user:disable. + shell: | + spawn sudo -u www-data /usr/bin/php occ user:disable {{ user }} + expect { + "The specified user is disabled" {} + timeout { exit 1 } + } + args: + chdir: /var/www/nextcloud/ + executable: /usr/bin/expect +#+END_SRC + +** The Client Command + +The ~client~ command creates an OpenVPN configuration (=.ovpn=) file +authorizing wireless devices to connect to the institute's VPNs. The +command uses the EasyRSA CA in =Secret/=. 
The generated configuration
is slightly different depending on the type of host, given as the
first argument to the command.

- ~./inst client android NAME USER~ \\
  An ~android~ host runs OpenVPN for Android or a work-alike.  Two
  files are generated: =campus.ovpn= configures a campus VPN
  connection, and =public.ovpn= configures a connection to the
  institute's public VPN.

- ~./inst client debian NAME USER~ \\
  A ~debian~ host runs a Debian desktop with Network Manager.  Again
  two files are generated, for the campus and public VPNs.

- ~./inst client campus NAME~ \\
  A ~campus~ host is a Debian host (with or without a desktop) that is
  used by the institute generally, is /not/ the property of a member,
  never roams off campus, and so is remotely administered with
  Ansible.  One file is generated, =campus.ovpn=.

The administrator uses encrypted email to send =.ovpn= files to new
members.  New members install the ~network-manager-openvpn-gnome~ and
~openvpn-systemd-resolved~ packages, and import the =.ovpn= files into
Network Manager on their desktops.  The =.ovpn= files for an Android
device are transferred by USB stick and should install automatically
when "opened".  On campus hosts, the system administrator copies the
=campus.ovpn= file to =/etc/openvpn/campus.conf=.

The OpenVPN configurations generated for Debian hosts specify an ~up~
script, =update-systemd-resolved=, installed in =/etc/openvpn/= by the
~openvpn-systemd-resolved~ package.  The following configuration lines
instruct the OpenVPN clients to run this script whenever the
connection is restarted.

#+NAME: openvpn-up
#+CAPTION: ~openvpn-up~
#+BEGIN_SRC conf
script-security 2
up /etc/openvpn/update-systemd-resolved
up-restart
#+END_SRC

#+CAPTION: =inst=
#+BEGIN_SRC perl :tangle inst :noweb yes
sub write_template ($$$$$$$$$);
sub read_file ($);
sub add_client ($$$);

if (defined $ARGV[0] && $ARGV[0] eq "client") {
  die "Secret/CA/easyrsa: not found\n" if !
-x "Secret/CA/easyrsa"; + my $type = $ARGV[1]||""; + my $name = $ARGV[2]||""; + my $user = $ARGV[3]||""; + if ($type eq "campus") { + die "usage: $0 client campus NAME\n" if @ARGV != 3; + die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/; + } elsif ($type eq "android" || $type eq "debian") { + die "usage: $0 client $type NAME USER\n" if @ARGV != 4; + die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/; + } else { + die "usage: $0 client [debian|android|campus]\n" if @ARGV != 4; + } + my $yaml; + my $member; + if ($type ne "campus") { + $yaml = read_members_yaml; + my $members = $yaml->{"members"}; + if (@ARGV == 4) { + $member = $members->{$user}; + die "$user: does not exist\n" if ! defined $member; + } + if (defined $member) { + my ($owner) = grep { grep { $_ eq $name } @{$_->{"clients"}} } + values %{$members}; + die "$name: owned by $owner->{username}\n" + if defined $owner && $owner->{username} ne $member->{username}; + } + } + + die "Secret/CA: no certificate authority found" + if ! -d "Secret/CA/pki/issued"; + + if (! -f "Secret/CA/pki/issued/$name.crt") { + mysystem "cd Secret/CA; ./easyrsa build-client-full $name nopass"; + } else { + print "Using existing key/cert...\n"; + } + + if ($type ne "campus") { + my $clients = $member->{"clients"}; + if (! grep { $_ eq $name } @$clients) { + $member->{"clients"} = [ $name, @$clients ]; + write_members_yaml $yaml; + } + } + + umask 077; + my $DEV = $type eq "android" ? "tun" : "ovpn"; + my $CA = read_file "Secret/CA/pki/ca.crt"; + my $CRT = read_file "Secret/CA/pki/issued/$name.crt"; + my $KEY = read_file "Secret/CA/pki/private/$name.key"; + my $UP = $type eq "android" ? 
"" : " +<>"; + + if ($type ne "campus") { + my $TA = read_file "Secret/front-ta.key"; + write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $front_addr, + $domain_name, "public.ovpn"); + print "Wrote public VPN configuration to public.ovpn.\n"; + } + my $TA = read_file "Secret/gate-ta.key"; + write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $gate_wifi_addr, + "gate.$domain_priv", "campus.ovpn"); + print "Wrote campus VPN configuration to campus.ovpn.\n"; + + exit; +} + +sub write_template ($$$$$$$$$) { + my ($DEV,$UP,$CA,$CRT,$KEY,$TA,$ADDR,$NAME,$FILE) = @_; + my $O = new IO::File; + open ($O, ">$FILE.tmp") or die "Could not open $FILE.tmp: $!\n"; + print $O "client +dev-type tun +dev $DEV +remote $ADDR +nobind +<> +remote-cert-tls server +verify-x509-name $NAME name +<>$UP +verb 3 +key-direction 1 +\n$CA +\n$CRT +\n$KEY +\n$TA\n"; + close $O or die "Could not close $FILE.tmp: $!\n"; + rename ("$FILE.tmp", $FILE) + or die "Could not rename $FILE.tmp: $!\n"; +} + +sub read_file ($) { + my ($path) = @_; + my $I = new IO::File; + open ($I, "<$path") or die "$path: could not read: $!\n"; + local $/; + my $c = <$I>; + close $I or die "$path: could not close: $!\n"; + return $c; +} +#+END_SRC + +** Institute Command Help + +This should be the last block tangled into the =inst= script. It +catches any command lines that were not handled by a sub-command +above. + +#+CAPTION: =inst= +#+BEGIN_SRC perl :tangle inst + +die "usage: $0 [CA|config|new|pass|old|client] ...\n"; +#+END_SRC + + +* Testing + +The example files in this document, =ansible.cfg= and =hosts= as +well as those in =public/= and =private/=, along with the +matching EasyRSA certificate authority and GnuPG key-ring in +=Secret/= (included in the distribution), can be used to configure +three VirtualBox VMs simulating Core, Gate and Front in a test network +simulating a campus Ethernet, campus ISP, and commercial cloud. 
With +the test network up and running, a simulated member's notebook can be +created, and alternately attached to the simulated campus Wi-Fi or the +simulated Internet (as though abroad). The administrator's notebook +in this simulation is the VirtualBox host. + +The next two sections list the steps taken to create the simulated +Core, Gate and Front, and connect them to a simulated campus Ethernet, +campus ISP, and commercial cloud. The process is similar to that +described in [[*The Hardware][The (Actual) Hardware]], but is covered in detail here +where the VirtualBox hypervisor can be assumed and exact command lines +can be given (and copied during re-testing). The remaining sections +describe the manual testing process, simulating an administrator +adding and removing member accounts and devices, a member's desktop +sending and receiving email, etc. + +For more information on the VirtualBox Hypervisor, the User Manual can +be found off-line in [[/usr/share/doc/virtualbox/UserManual.pdf]]. An +HTML version of the latest revision can be found on the official web +site at [[https://www.virtualbox.org/manual/UserManual.html]]. + +** The Test Networks + +The networks used in the test: + +- ~premises~ :: A NAT Network, simulating the cloud provider's and + campus ISP's networks. This is the only network with DHCP and DNS + services provided by the hypervisor. It is not the default NAT + network because ~gate~ and ~front~ need to communicate. + +- ~vboxnet0~ :: A Host-only network, simulating the institute's + private Ethernet switch. It has no services, no DHCP, just the host + machine at ~192.168.56.10~ pretending to be the administrator's + notebook. + +- ~vboxnet1~ :: Another Host-only network, simulating the tiny + Ethernet between Gate and the campus Wi-Fi access point. It has no + services, no DHCP, just the host at ~192.168.57.2~. It might one + day have a simulated access point at that address. 
Currently it is + just an interface for ~gate~'s DHCP server to listen on. + + In this simulation the IP address for ~front~ is not a public + address but a private address on the NAT network ~premises~. Thus + ~front~ is not accessible to the administrator's notebook (the + host). To work around this restriction, ~front~ gets a second + network interface connected to the ~vboxnet1~ network and used only + for ssh access from the host.[fn:4] + +As in [[*The Hardware][The Hardware]], all machines start with their primary Ethernet +adapters attached to the NAT Network ~premises~ so that they can +download additional packages. Later, ~core~ and ~gate~ are moved to +the simulated private Ethernet ~vboxnet0~. + +The networks described above are created and "started" with the +following ~VBoxManage~ commands. + +#+BEGIN_SRC sh +VBoxManage natnetwork add --netname premises \ + --network 192.168.15.0/24 \ + --enable --dhcp on --ipv6 off +VBoxManage natnetwork start --netname premises +VBoxManage hostonlyif create # vboxnet0 +VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.10 \ + --dhcp off --ipv6 off +VBoxManage hostonlyif create # vboxnet1 +VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.2 \ + --dhcp off --ipv6 off +#+END_SRC + +Note that actual ISPs and clouds will provide Gate and Front with +public network addresses but in this simulation "they" provide +addresses in the private ~192.168.15.0/24~ network. + +** The Test Machines + +The virtual machines are created by ~VBoxManage~ command lines in the +following sub-sections. They each start with a recent Debian release +(e.g. =debian-11.3.0-amd64-netinst.iso=) on the NAT network +~premises~. As in [[*The Hardware][The Hardware]] preparation process being simulated, a +few additional software packages are installed and remote access is +authorized before the machines are moved to their final networks, +prepared for Ansible. 
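Before creating any machines, the network setup from the previous
section can be double-checked mechanically.  The helper below is only
a sketch (the ~has_hostonly_ip~ name is invented here, and dots in
ADDRESS are treated as regex wildcards, which is close enough for a
sanity check); it scans a capture of ~VBoxManage list hostonlyifs~
output for an interface configured with a given address.

#+BEGIN_SRC sh
# has_hostonly_ip FILE ADDRESS
# Succeed iff FILE, a capture of `VBoxManage list hostonlyifs`,
# shows an interface whose IPAddress line matches ADDRESS.
has_hostonly_ip() {
    grep -q "^IPAddress: *$2\$" "$1"
}

# Usage sketch:
#   VBoxManage list hostonlyifs > /tmp/ifs.txt
#   has_hostonly_ip /tmp/ifs.txt 192.168.56.10 || echo "vboxnet0 not ready"
#   has_hostonly_ip /tmp/ifs.txt 192.168.57.2  || echo "vboxnet1 not ready"
#+END_SRC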
+ +*** A Test Machine + +The following shell function contains most of the ~VBoxManage~ +commands needed to create the test machines. The name of the machine +is taken from the ~NAME~ shell variable and the quantity of RAM and +disk space from the ~RAM~ and ~DISK~ variables. The function creates +a DVD drive on each machine and loads it with a simulated CD of a +recent Debian release. The path to the CD disk image (=.iso= file) is +taken from the ~ISO~ shell variable. + +#+BEGIN_SRC sh +function create_vm { + VBoxManage createvm --name $NAME --ostype Debian_64 --register + VBoxManage modifyvm $NAME --memory $RAM + VBoxManage createhd --size $DISK \ + --filename ~/VirtualBox\ VMs/$NAME/$NAME.vdi + VBoxManage storagectl $NAME --name "SATA Controller" \ + --add sata --controller IntelAHCI + VBoxManage storageattach $NAME --storagectl "SATA Controller" \ + --port 0 --device 0 --type hdd \ + --medium ~/VirtualBox\ VMs/$NAME/$NAME.vdi + + VBoxManage storagectl $NAME --name "IDE Controller" --add ide + VBoxManage storageattach $NAME --storagectl "IDE Controller" \ + --port 0 --device 0 --type dvddrive --medium $ISO + VBoxManage modifyvm $NAME --boot1 dvd --boot2 disk + VBoxManage unattended install $NAME --iso=$ISO \ + --locale en_US --country US \ + --hostname $NAME.small.private \ + --user=sysadm --password=fubar \ + --full-user-name=System\ Administrator +} +#+END_SRC + +After this shell function creates a VM, its network interface is +typically attached to the NAT network ~premises~, simulating the +Internet connected network where actual hardware will be prepared. + +Here are the commands needed to create the test machine ~front~ with +512MiB of RAM and 4GiB of disk and the Debian 11.3.0 release in its +CDROM drive, to put ~front~ on the Internet connected NAT network +~premises~, and to boot ~front~ into the Debian installer. 
+ +#+BEGIN_SRC sh +NAME=front +RAM=512 +DISK=4096 +ISO=~/Downloads/debian-11.3.0-amd64-netinst.iso +create_vm +VBoxManage modifyvm $NAME --nic1 natnetwork --natnetwork1 premises +VBoxManage startvm $NAME --type headless +#+END_SRC + +The machine's console should soon show the installer's first prompt: +to choose a system language. (The prompts might be answered by +"preseeding" the Debian installer, but that process has yet to be +debugged.) The appropriate responses to the installer's prompts are +given in the list below. + +- Select a language + + Language: English - English +- Select your location + + Country, territory or area: United States +- Configure the keyboard + + Keymap to use: American English +- Configure the network + + Hostname: front (gate, core, etc.) + + Domain name: small.example.org (small.private) +- Set up users and passwords. + + Root password: + + Full name for the new user: System Administrator + + Username for your account: sysadm + + Choose a password for the new user: fubar +- Configure the clock + + Select your time zone: Eastern +- Partition disks + + Partitioning method: Guided - use entire disk + + Select disk to partition: SCSI3 (0,0,0) (sda) - ... + + Partitioning scheme: All files in one partition + + Finish partitioning and write changes to disk: Continue + + Write the changes to disks? Yes +- Install the base system +- Configure the package manager + + Scan extra installation media? No + + Debian archive mirror country: United States + + Debian archive mirror: deb.debian.org + + HTTP proxy information (blank for none): +- Configure popularity-contest + + Participate in the package usage survey? No +- Software selection + + SSH server + + standard system utilities +- Install the GRUB boot loader + + Install the GRUB boot loader to your primary drive? Yes + + Device for boot loader installation: /dev/sda (ata-VBOX... + +After the reboot (first boot into the installed OS) the machine's +console should produce a ~login:~ prompt. 
The administrator logs in +here, with username ~sysadm~ and password ~fubar~, before continuing +with the specific machine's preparation (below). + +*** The Test Front Machine + +The ~front~ machine is created with 512MiB of RAM, 4GiB of disk, and +Debian 11.3.0 (recently downloaded) in its CDROM drive. The exact +command lines were given in the previous section. + +After Debian is installed (as detailed in [[*A Test Machine][A Test Machine]]) and the +machine rebooted, the administrator logs in and installs several +additional software packages. + +#+BEGIN_SRC sh +sudo apt install netplan.io expect unattended-upgrades postfix \ + dovecot-imapd apache2 openvpn +#+END_SRC + +Note that the Postfix installation may prompt for a couple settings. +The defaults, listed below, are fine, but the system mail name should +be the same as the institute's domain name. + +- General type of mail configuration: Internet Site +- System mail name: small.example.org + +To make ~front~ accessible to the simulated administrator's notebook, +it gets a second network interface attached to the host-only network +~vboxnet1~ and is given the local address ~192.168.57.3~. + +#+BEGIN_SRC sh +VBoxManage modifyvm front --nic2 hostonly --hostonlyadapter2 vboxnet1 +#+END_SRC + +The second network interface is configured with an IP address via the +Netplan configuration file =/etc/netplan/01-testing.yaml=, which is +created with the following lines. + +#+BEGIN_SRC conf +network: + ethernets: + enp0s8: + dhcp4: false + addresses: [ 192.168.57.3/24 ] +#+END_SRC + +The amended Netplan is applied immediately with the following command, +or the machine is rebooted. + +#+BEGIN_SRC sh +sudo netplan apply +#+END_SRC + +Finally, the administrator authorizes remote access by following the +instructions in the final section: [[* Ansible Test Authorization][Ansible Test Authorization]]. + +*** The Test Gate Machine + +The ~gate~ machine is created with the same amount of RAM and disk as +~front~. 
Assuming the ~RAM~, ~DISK~, and ~ISO~ shell variables have
not changed, ~gate~ can be created with two commands, then connected
to NAT network ~premises~ and booted with two more.

#+BEGIN_SRC sh
NAME=gate
create_vm
VBoxManage modifyvm gate --nic1 natnetwork --natnetwork1 premises
VBoxManage startvm gate --type headless
#+END_SRC

After Debian is installed (as detailed in [[*A Test Machine][A Test Machine]]) and the
machine rebooted, the administrator logs in and installs several
additional software packages.

#+BEGIN_SRC sh
sudo apt install netplan.io ufw unattended-upgrades postfix \
                 isc-dhcp-server openvpn
#+END_SRC

Again, the Postfix installation prompts for a couple settings.  The
defaults, listed below, are fine.

- General type of mail configuration: Internet Site
- System mail name: gate.small.private

~gate~ can now move to the campus.  It is shut down before the
following ~VBoxManage~ commands are executed.  The commands disconnect
the primary Ethernet interface from ~premises~ and connect it to
~vboxnet0~.  The ~isp~ and ~wifi~ interfaces are also connected to the
simulated ISP and campus wireless access point.

#+BEGIN_SRC sh
VBoxManage modifyvm gate --nic1 hostonly
VBoxManage modifyvm gate --hostonlyadapter1 vboxnet0
VBoxManage modifyvm gate --nic2 natnetwork --natnetwork2 premises
VBoxManage modifyvm gate --nic3 hostonly
VBoxManage modifyvm gate --hostonlyadapter3 vboxnet1
#+END_SRC

Before rebooting, the MAC addresses of the three network interfaces
should be compared to the example variable settings in =hosts=.  The
values of the ~gate_lan_mac~, ~gate_wifi_mac~, and ~gate_isp_mac~
variables /must/ agree with the MAC addresses assigned to the virtual
machine's network interfaces.  The following table assumes device
names that may vary depending on the hypervisor, version, etc.
| device   | network    | simulating      | MAC address variable |
|----------+------------+-----------------+----------------------|
| ~enp0s3~ | ~vboxnet0~ | campus Ethernet | ~gate_lan_mac~       |
| ~enp0s8~ | ~premises~ | campus ISP      | ~gate_isp_mac~       |
| ~enp0s9~ | ~vboxnet1~ | campus wireless | ~gate_wifi_mac~      |

After ~gate~ boots up with its new network connections, the primary
Ethernet interface is temporarily configured with an IP address.
(Ansible will install a Netplan soon.)

#+BEGIN_SRC sh
sudo ip address add 192.168.56.2/24 dev enp0s3
#+END_SRC

Finally, the administrator authorizes remote access by following the
instructions in the final section: [[* Ansible Test Authorization][Ansible Test Authorization]].

*** The Test Core Machine

The ~core~ machine is created with 2GiB of RAM and 6GiB of disk.
Assuming the ~ISO~ shell variable has not changed, ~core~ can be
created with the following commands.

#+BEGIN_SRC sh
NAME=core
RAM=2048
DISK=6144
create_vm
VBoxManage modifyvm core --nic1 natnetwork --natnetwork1 premises
VBoxManage startvm core --type headless
#+END_SRC

After Debian is installed (as detailed in [[*A Test Machine][A Test Machine]]) and the
machine rebooted, the administrator logs in and installs several
additional software packages.

#+BEGIN_SRC sh
sudo apt install netplan.io unattended-upgrades postfix \
                 isc-dhcp-server bind9 fetchmail gnupg \
                 expect dovecot-imapd apache2 openvpn
#+END_SRC

Again, the Postfix installation prompts for a couple settings.  The
defaults, listed below, are fine.

- General type of mail configuration: Internet Site
- System mail name: core.small.private

~core~ can now move to the campus.  It is shut down before the
following ~VBoxManage~ command is executed.  The command connects the
machine's NIC to ~vboxnet0~, which simulates the campus's private
Ethernet.
+ +#+BEGIN_SRC sh +VBoxManage modifyvm core --nic1 hostonly --hostonlyadapter1 vboxnet0 +#+END_SRC + +After ~core~ boots up with its new network connection, its primary NIC +is temporarily configured with an IP address and default route (to +~gate~). (Ansible will install a Netplan soon.) + +#+BEGIN_SRC sh +sudo ip address add 192.168.56.1/24 dev enp0s3 +sudo ip route add default via 192.168.56.2 dev enp0s3 +#+END_SRC + +Finally, the administrator authorizes remote access by following the +instructions in the next section: [[* Ansible Test Authorization][Ansible Test Authorization]]. + +*** Ansible Test Authorization + +Before Ansible can configure the three test machines, they must allow +remote access to their ~sysadm~ accounts. The administrator must use +IP addresses to copy the public key to each test machine. + +#+BEGIN_SRC sh +SRC=Secret/ssh_admin/id_rsa.pub +scp $SRC sysadm@192.168.56.1:admin_key # Core +scp $SRC sysadm@192.168.56.2:admin_key # Gate +scp $SRC sysadm@192.168.57.3:admin_key # Front +#+END_SRC + +Then the key must be installed on each machine with the following +command line (entered at each console, or in an SSH session with +each machine). + +#+BEGIN_SRC sh +( cd; umask 077; mkdir .ssh; cp admin_key .ssh/authorized_keys ) +#+END_SRC + +** The Test Ansible Configuration + +At this point the three test machines ~core~, ~gate~, and ~front~ are +running fresh Debian systems with select additional packages, on their +final networks, with a privileged account named ~sysadm~ that +authorizes password-less access from the administrator's notebook, +ready to be configured by Ansible. + +** Configure Test Machines + +To configure the test machines, the ~./inst config~ command is +executed and ~core~ restarted. Note that this first run should +exercise all of the handlers, /and/ that subsequent runs probably /do +not/. 
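The idempotency claim can be checked mechanically.  The helper below
is only a sketch (the ~recap_unchanged~ name and log path are invented
here); it scans an Ansible play recap for non-zero ~changed~ counts,
so a second ~./inst config~ run can be flagged if it still changes
anything.

#+BEGIN_SRC sh
# recap_unchanged FILE
# Succeed iff the Ansible play recap captured in FILE reports no
# changed tasks, i.e. the run was a no-op.
recap_unchanged() {
    ! grep -Eq 'changed=[1-9][0-9]*' "$1"
}

# Usage sketch:
#   ./inst config | tee /tmp/config-rerun.log
#   recap_unchanged /tmp/config-rerun.log && echo "no handlers fired"
#+END_SRC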
+ +** Test Basics + +At this point the test institute is just ~core~, ~gate~ and ~front~, +no other campus servers, no members nor their VPN client devices. On +each machine, Systemd should assess the system's state as ~running~ +with 0 failed units. + +#+BEGIN_SRC sh +systemctl status +#+END_SRC + +~gate~ and thus ~core~ should be able to reach the Internet and +~front~. If ~core~ can reach the Internet and ~front~, then ~gate~ is +forwarding (and NATing). On ~core~ (and ~gate~): + +#+BEGIN_SRC sh +ping -c 1 8.8.4.4 # dns.google +ping -c 1 192.168.15.5 # front_addr +#+END_SRC + +~gate~ and thus ~core~ should be able to resolve internal and public +domain names. (Front does not use the institute's internal domain +names yet.) On ~core~ (and ~gate~): + +#+BEGIN_SRC sh +host dns.google +host core.small.private +host www +#+END_SRC + +The last resort email address, ~root~, should deliver to the +administrator's account. On ~core~, ~gate~ and ~front~: + +#+BEGIN_SRC sh +/sbin/sendmail root +Testing email to root. +. +#+END_SRC + +Two messages, from ~core~ and ~gate~, should appear in +=/home/sysadm/Maildir/new/= on ~core~ in just a couple seconds. The +message from ~front~ should be delivered to the same directory but on +~front~. While members' emails are automatically fetched (with +~fetchmail(1)~) to ~core~, the system administrator is expected to +fetch system emails directly to their desktop (and to give them +instant attention). + +** The Test Nextcloud + +Further tests involve Nextcloud account management. Nextcloud is +installed on ~core~ as described in [[*Configure Nextcloud][Configure Nextcloud]]. Once +=/Nextcloud/= is created, ~./inst config core~ will validate +or update its configuration files. + +The administrator will need a desktop system in the test campus +networks (using the campus name server). The test Nextcloud +configuration requires that it be accessed with the domain name +=core.small.private=. 
The following sections describe how a client +desktop is simulated and connected to the test VPNs (and test campus +name server). Its browser can then connect to =core.small.private= to +exercise the test Nextcloud. + +The process starts with enrolling the first member of the institute +using the ~./inst new~ command and issuing client VPN keys with the +~./inst client~ command. + +** Test New Command + +A member must be enrolled so that a member's client machine can be +authorized and then test the VPNs, Nextcloud, and the web sites. +The first member enrolled in the simulated institute is New Hampshire +innkeeper Dick Loudon. Mr. Loudon's accounts on institute servers are +named ~dick~, as is his notebook. + +#+BEGIN_SRC sh +./inst new dick +#+END_SRC + +Take note of Dick's initial password. + +** The Test Member Notebook + +A test member's notebook is created next, much like the servers, +except with memory and disk space doubled to 2GiB and 8GiB, and a +desktop. This machine is not configured by Ansible. Rather, its +desktop VPN client and web browser test the OpenVPN configurations on +~gate~ and ~front~, and the Nextcloud installation on ~core~. + +#+BEGIN_SRC sh +NAME=dick +RAM=2048 +DISK=8192 +create_vm +VBoxManage modifyvm $NAME --nic1 hostonly --hostonlyadapter1 vboxnet1 +VBoxManage modifyvm $NAME --macaddress1 080027dc54b5 +VBoxManage startvm $NAME --type headless +#+END_SRC + +Dick's notebook, ~dick~, is initially connected to the host-only +network ~vboxnet1~ as though it were the campus wireless access point. +It simulates a member's notebook on campus, connected to (NATed +behind) the access point. + +Debian is installed much as detailed in [[*A Test Machine][A Test Machine]] /except/ that +the SSH server option is /not/ needed and the GNOME desktop option +/is/. When the machine reboots, the administrator logs into the +desktop and installs a couple additional software packages (which +require several more). 
+
+#+BEGIN_SRC sh
+sudo apt install network-manager-openvpn-gnome \
+                 openvpn-systemd-resolved \
+                 nextcloud-desktop evolution
+#+END_SRC
+
+** Test Client Command
+
+The ~./inst client~ command is used to issue keys for the institute's
+VPNs.  The following command generates two =.ovpn= (OpenVPN
+configuration) files, =small.ovpn= and =campus.ovpn=, authorizing
+access by the holder, identified as ~dick~, owned by member ~dick~, to
+the test VPNs.
+
+#+BEGIN_SRC sh
+./inst client debian dick dick
+#+END_SRC
+
+** Test Campus VPN
+
+The =campus.ovpn= OpenVPN configuration file (generated in [[*Test Client Command][Test Client
+Command]]) is transferred to ~dick~, which is at the Wi-Fi access
+point's ~wifi_wan_addr~.
+
+#+BEGIN_SRC sh
+scp *.ovpn sysadm@192.168.57.2:
+#+END_SRC
+
+The file is installed using the Network tab of the desktop Settings
+app.  The administrator uses the "+" button, chooses "Import from
+file..." and the =campus.ovpn= file.  /Importantly/ the administrator
+checks the "Use this connection only for resources on its network"
+checkbox in the IPv4 tab of the Add VPN dialog.  The admin does the
+same with the =small.ovpn= file, for use on the simulated Internet.
+
+The administrator turns on the campus VPN on ~dick~ (which connects
+instantly) and does a few basic tests in a terminal.
+
+#+BEGIN_SRC sh
+systemctl status
+ping -c 1 8.8.4.4        # dns.google
+ping -c 1 192.168.56.1   # core
+host dns.google
+host core.small.private
+host www
+#+END_SRC
+
+** Test Web Pages
+
+Next, the administrator copies =Backup/WWW/= (included in the
+distribution) to =/WWW/= on ~core~ and sets the file permissions
+appropriately.
+
+#+BEGIN_SRC sh
+sudo chown -R sysadm.staff /WWW/campus
+sudo chown -R monkey.staff /WWW/live /WWW/test
+sudo chmod 02775 /WWW/*
+sudo chmod 664 /WWW/*/index.html
+#+END_SRC
+
+then uses Firefox on ~dick~ to fetch the following URLs.  They should
+all succeed and the content should be a simple sentence identifying
+the source file.
+
+ - ~http://www/~
+ - ~http://www.small.private/~
+ - ~http://live/~
+ - ~http://live.small.private/~
+ - ~http://test/~
+ - ~http://test.small.private/~
+ - ~http://small.example.org/~
+
+The last URL should redirect to ~https://small.example.org/~, which
+uses a certificate (self-)signed by an unknown authority.  Firefox
+will warn but allow the luser to continue.
+
+** Test Web Update
+
+Modify =/WWW/live/index.html= on ~core~ and wait 15 minutes for it to
+appear as ~https://small.example.org/~ (and in =/home/www/index.html=
+on ~front~).
+
+Hack =/home/www/index.html= on ~front~ and observe the result at
+~https://small.example.org/~.  Wait 15 minutes for the correction.
+
+** Test Nextcloud
+
+Nextcloud is typically installed and configured /after/ the first
+Ansible run, when ~core~ has Internet access via ~gate~.  Until the
+installation directory =/Nextcloud/nextcloud/= appears, the Ansible
+code skips parts of the Nextcloud configuration.  The same
+installation (or restoration) process used on Core is used on ~core~
+to create =/Nextcloud/=.  The process starts with [[*Create =/Nextcloud/=][Create
+=/Nextcloud/=]], involves [[*Restore Nextcloud][Restore Nextcloud]] or [[*Install Nextcloud][Install Nextcloud]],
+and ends with ~./inst config core~ run again ([[*Afterwards][Afterwards]]).  When the
+~./inst config core~ command is happy with the Nextcloud configuration
+on ~core~, the administrator uses Dick's notebook to test it,
+performing the following tests on ~dick~'s desktop.
+
+- Use a web browser to get ~http://core/nextcloud/~.  It should be a
+  warning about accessing Nextcloud by an untrusted name.
+
+- Get ~http://core.small.private/nextcloud/~.  It should be a
+  login web page.
+
+- Log in as ~sysadm~ with password ~fubar~.
+
+- Examine the security & setup warnings in the Settings >
+  Administration > Overview web page.  A few minor warnings are
+  expected (besides the admonishment about using ~http~ rather than
+  ~https~).
+
+- Download and enable Calendar and Contacts in the Apps > Featured web
+  page.
+
+- Log out and log in as ~dick~ with Dick's initial password (noted
+  above).
+
+- Use the Nextcloud app to sync =~/Nextcloud/= with the cloud.  In the
+  Nextcloud app's Connection Wizard (the initial dialog), choose to
+  "Log in to your Nextcloud" with the URL
+  ~http://core.small.private/nextcloud~.  The web browser should pop
+  up with a new tab: "Connect to your account".  Press "Log in" and
+  "Grant access".  The Nextcloud Connection Wizard then prompts for
+  sync parameters.  The defaults are fine.  Presumably the Local
+  Folder is =/home/sysadm/Nextcloud/=.
+
+- Drop a file in =~/Nextcloud/=, use the app to force a sync, and find
+  the file in the Files web page.
+
+- Create a Mail account in Evolution.  This step does not involve
+  Nextcloud, but placates Evolution's Welcome Wizard, and follows in
+  the steps of the newly institutionalized luser.  CardDAV and CalDAV
+  accounts can be created in Evolution later.
+
+  The account's full name is Dick Loudon and its email address is
+  ~dick@small.example.org~.  The Receiving Email Server Type is IMAP,
+  its name is ~mail.small.private~ and it uses the IMAPS port (993).
+  The Username on the server is ~dick~.  The encryption method is TLS
+  on a dedicated port.  Authentication is by password.  The Receiving
+  Option defaults are fine.  The Sending Email Server Type is SMTP
+  with the name ~smtp.small.private~ using the default SMTP port (25).
+  It requires neither authentication nor encryption.
+
+  At some point Evolution will find that the server certificate is
+  self-signed and unknown.  It must be accepted (permanently).
+
+- Create a CardDAV account in Evolution.  Choose Edit, Accounts, Add,
+  Address Book, Type CardDAV, name Small Institute, and user ~dick~.
+  The URL starts with ~http://core.small.private/nextcloud/~ and ends
+  with ~remote.php/dav/addressbooks/users/dick/contacts/~ (yeah, 88
+  characters!). 
Create a contact in the new address book and see
+  it in the Contacts web page.  At some point Evolution will need
+  Dick's password to access the address book.
+
+- Create a CalDAV account in Evolution just like the CardDAV account
+  except add a Calendar account of Type CalDAV with a URL that ends
+  ~remote.php/dav/calendars/dick/personal/~ (only 79 characters).
+  Create an event in the new calendar and see it in the Calendar web
+  page.  At some point Evolution will need Dick's password to access
+  the calendar.
+
+** Test Email
+
+With Evolution running on the member notebook ~dick~, one-second email
+delivery can be demonstrated.  The administrator runs the following
+commands on ~front~
+
+#+BEGIN_SRC sh
+/sbin/sendmail dick
+Subject: Hello, Dick.
+
+How are you?
+.
+#+END_SRC
+
+and sees a notification on ~dick~'s desktop in a second or less.
+
+Outgoing email is also tested.  A message to
+~sysadm@small.example.org~ should be delivered to
+=/home/sysadm/Maildir/new/= on ~front~ just as fast.
+
+** Test Public VPN
+
+At this point, ~dick~ can move abroad, from the campus Wi-Fi
+(host-only network ~vboxnet1~) to the broader Internet (the NAT
+network ~premises~).  The following command makes the change.  The
+machine does not need to be shut down.
+
+#+BEGIN_SRC sh
+VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
+#+END_SRC
+
+The administrator might wait to see evidence of the change in
+networks.  Evolution may start "Testing reachability of mail account
+dick@small.example.org."  Eventually, the ~campus~ VPN should
+disconnect.  After it does, the administrator turns on the ~small~
+VPN, which connects in a second or two.  Again, some basics are
+tested in a terminal.
+
+#+BEGIN_SRC sh
+ping -c 1 8.8.4.4        # dns.google
+ping -c 1 192.168.56.1   # core
+host dns.google
+host core.small.private
+host www
+#+END_SRC
+
+And these web pages are fetched with a browser.
+
+ - http://www/
+ - http://www.small.private/
+ - http://live/
+ - http://live.small.private/
+ - http://test/
+ - http://test.small.private/
+ - http://small.example.org/
+
+The Nextcloud web pages too should still be refreshable and editable,
+and Evolution should still be able to edit messages, contacts and
+calendar events.
+
+** Test Pass Command
+
+To test the ~./inst pass~ command, the administrator logs in to ~core~
+as ~dick~ and runs ~passwd~.  A random password is entered, more
+obscure than ~fubar~ (else Nextcloud will reject it!).  The
+administrator then finds the password change request message in the
+most recent file in =/home/sysadm/Maildir/new/= and pipes it to the
+~./inst pass~ command.  The administrator might do that by copying the
+message to a more conveniently named temporary file on ~core~,
+e.g. =~/msg=, copying that to the current directory on the notebook,
+and feeding it to ~./inst pass~ on its standard input.
+
+On ~core~, logged in as ~sysadm~:
+
+#+BEGIN_SRC sh
+( cd ~/Maildir/new/
+  cp `ls -1t | head -1` ~/msg )
+grep Subject: ~/msg
+#+END_SRC
+
+To ensure that the most recent message is indeed the password change
+request, the last command should find the line ~Subject: New
+password.~.  Then on the administrator's notebook:
+
+#+BEGIN_SRC sh
+scp sysadm@192.168.56.1:msg ./
+./inst pass < msg
+#+END_SRC
+
+The last command should complete without error.
+
+Finally, the administrator verifies that ~dick~ can log in on ~core~,
+~front~ and Nextcloud with the new password.
+
+** Test Old Command
+
+One more institute command is left to exercise.  The administrator
+retires ~dick~ and his main device ~dick~.
+
+#+BEGIN_SRC sh
+./inst old dick
+#+END_SRC
+
+The administrator tests Dick's access to ~core~, ~front~ and
+Nextcloud, and attempts to re-connect the ~small~ VPN.  All of these
+should fail.
+
+
+* Future Work
+
+The small institute's network, as currently defined in this document,
+is lacking in a number of respects.
+ +** Deficiencies + +The current network monitoring is rudimentary. It could use some +love, like intrusion detection via Snort or similar. Services on +Front are not monitored except that the =webupdate= script should be +emailing ~sysadm~ whenever it cannot update Front. + +Pro-active monitoring might include notifying ~root~ of any vandalism +corrected by Monkey's quarter-hourly web update. This is a +non-trivial task that must ignore intentional changes and save suspect +changes. + +Monkey's ~cron~ jobs on Core should presumably become ~systemd.timer~ +and ~.service~ units. + +The institute's private domain names (e.g. ~www.small.private~) are +not resolvable on Front. Reverse domains (~86.177.10.in-addr.arpa~) +mapping institute network addresses back to names in the private +domain ~small.private~ work only on the campus Ethernet. These nits +might be picked when OpenVPN supports the DHCP option +~rdnss-selection~ (RFC6731), or with hard-coded ~resolvectl~ commands. + +The ~./inst old dick~ command does not break VPN connections to Dick's +clients. New connections cannot be created, but old connections can +continue to work for some time. + +The ~./inst client android dick-phone dick~ command generates =.ovpn= +files that require the member to remember to check the "Use this +connection only for resources on its network" box in the IPv4 tab of +the Add VPN dialog. The ~./inst client~ command should include a +setting in the Debian =.ovpn= files that NetworkManager will recognize +as the desired setting. + +The VPN service is overly complex. The OpenVPN 2.4.7 clients allow +multiple server addresses, but the ~openvpn(8)~ manual page suggests +per connection parameters are a restricted set that does /not/ include +the essential ~verify-x509-name~. Use the same name on separate +certificates for Gate and Front? Use the same certificate and key on +Gate and Front? 
+ +Nextcloud should really be found at ~https://CLOUD.small.private/~ +rather than ~https://core.small.private/nextcloud/~, to ease +future expansion (moving services to additional machines). + +HTTPS could be used for Nextcloud transactions even though they are +carried on encrypted VPNs. This would eliminate a big warning on the +Nextcloud Administration Overview page. + +** More Tests + +The testing process described in the previous chapter is far from +complete. Additional tests are needed. + +*** Backup + +The ~backup~ command has not been tested. It needs an encrypted +partition with which to sync? And then some way to compare that to +=Backup/=? + +*** Restore + +The restore process has not been tested. It might just copy =Backup/= +to ~core:/~, but then it probably needs to fix up file ownerships, +perhaps permissions too. It could also use an example +=Backup/Nextcloud/20220622.bak=. + +*** Campus Disconnect + +Email access (IMAPS) on ~front~ is... difficult to test unless +~core~'s fetchmails are disconnected, i.e. the whole campus is +disconnected, so that new email stays on ~front~ long enough to be +seen. + +- Disconnect ~gate~'s NIC #2. +- Send email to ~dick@small.example.org~. +- Find it in =/home/dick/Maildir/new/=. +- Re-configure Evolution on ~dick~. Edit the ~dick@small.example.org~ + mail account (or create a new one?) so that the Receiving Email + Server name is ~192.168.15.5~, not ~mail.small.private~. The + latter domain name will not work while the campus is disappeared. + In actual use (with Front, not ~front~), the institute domain name + could be used. + + +* Appendix: The Bootstrap + +Creating the private network from whole cloth (machines with recent +standard distributions installed) is not straightforward. + +Standard distributions do not include all of the necessary server +software, esp. ~isc-dhcp-server~ and ~bind9~ for critical localnet +services. These are typically downloaded from the Internet. 
+
+To access the Internet, Core needs a default route to Gate, Gate needs
+to forward with NAT to an ISP, Core needs to query the ISP for names,
+etc.: quite a bit of temporary, manual localnet configuration just to
+get to the additional packages.
+
+** The Current Strategy
+
+The strategy pursued in [[*The Hardware][The Hardware]] is two-phase: prepare the servers
+on the Internet where additional packages are accessible, then connect
+them to the campus facilities (the private Ethernet switch, Wi-Fi AP,
+ISP), manually configure IP addresses (while the DHCP client silently
+fails), and avoid names until BIND9 is configured.
+
+** Starting With Gate
+
+The strategy of Starting With Gate concentrates on configuring Gate's
+connection to the campus ISP in hope of allowing all to download
+additional packages.  This seems to require manual configuration of
+Core or a standard rendezvous.
+
+- Connect Gate to ISP, e.g. apartment WAN via Wi-Fi or Ethernet.
+- Connect Gate to private Ethernet switch.
+  : sudo ip address add GATE dev ISPDEV
+- Configure Gate to NAT from private Ethernet.
+- Configure Gate to serve DHCP on Ethernet, temporarily!
+  + Push default route through Gate, DNS from 8.8.8.8.
+  + Or statically configure Core with address, route, and name server.
+    : sudo ip address add CORE dev PRIVETH
+    : sudo ip route add default via GATE
+    : sudo sh -c 'echo "nameserver 8.8.8.8" >/etc/resolv.conf'
+- Configure admin's notebook similarly?
+- Test remote access from administrator's notebook.
+- Finally, configure Gate and Core.
+  : ansible-playbook -l gate site.yml
+  : ansible-playbook -l core site.yml
+
+** Pre-provision With Ansible
+
+A refinement of the current strategy might avoid the need to maintain
+(and test!) lists of "additional" packages.  With Core and Gate and
+the admin's notebook all together on a café Wi-Fi, Ansible might be
+configured (e.g. tasks tagged) to /just/ install the necessary
+packages. 
The administrator would put Core's and Gate's localnet IP +addresses in Ansible's inventory file, then run just the Ansible tasks +tagged ~base-install~, leaving the new services in a decent (secure, +innocuous, disabled) default state. + +: ansible-playbook -l core -t base-install site.yml +: ansible-playbook -l gate -t base-install site.yml + + +* Footnotes + +[fn:1] Why not create a role named ~all~ and put these tasks that are +the same on all machines in that role? If there were more than a +stable handful, and no tangling mechanism to do the duplication, a +catch-all role would be a higher priority. + +[fn:2] The cipher set specified by Let's Encrypt is large enough to +turn orange many parts of an SSL Report from Qualys SSL Labs. + +[fn:3] Presumably, eventually, a former member's home directories are +archived to external storage, their other files are given new +ownerships, and their Unix accounts are deleted. This has never been +done, and is left as a manual exercise. + +[fn:4] Front is accessible via Gate but routing from the host address +on ~vboxnet0~ through Gate requires extensive interference with the +routes on Front and Gate, making the simulation less... similar. + +[fn:5] The recommended private top-level domains are listed in +"Appendix G. Private DNS Namespaces" of RFC6762 (Multicast DNS). 
[[https://www.rfc-editor.org/rfc/rfc6762#appendix-G][link]] diff --git a/Secret/CA/easyrsa b/Secret/CA/easyrsa new file mode 120000 index 0000000..7d6610a --- /dev/null +++ b/Secret/CA/easyrsa @@ -0,0 +1 @@ +/usr/share/easy-rsa/easyrsa \ No newline at end of file diff --git a/Secret/CA/openssl-easyrsa.cnf b/Secret/CA/openssl-easyrsa.cnf new file mode 100644 index 0000000..1139414 --- /dev/null +++ b/Secret/CA/openssl-easyrsa.cnf @@ -0,0 +1,140 @@ +# For use with Easy-RSA 3.1 and OpenSSL or LibreSSL + +RANDFILE = $ENV::EASYRSA_PKI/.rnd + +#################################################################### +[ ca ] +default_ca = CA_default # The default ca section + +#################################################################### +[ CA_default ] + +dir = $ENV::EASYRSA_PKI # Where everything is kept +certs = $dir # Where the issued certs are kept +crl_dir = $dir # Where the issued crl are kept +database = $dir/index.txt # database index file. +new_certs_dir = $dir/certs_by_serial # default place for new certs. + +certificate = $dir/ca.crt # The CA certificate +serial = $dir/serial # The current serial number +crl = $dir/crl.pem # The current CRL +private_key = $dir/private/ca.key # The private key +RANDFILE = $dir/.rand # private random number file + +x509_extensions = basic_exts # The extentions to add to the cert + +# This allows a V2 CRL. Ancient browsers don't like it, but anything Easy-RSA +# is designed for will. In return, we get the Issuer attached to CRLs. 
+crl_extensions = crl_ext + +default_days = $ENV::EASYRSA_CERT_EXPIRE # how long to certify for +default_crl_days= $ENV::EASYRSA_CRL_DAYS # how long before next CRL +default_md = $ENV::EASYRSA_DIGEST # use public key default MD +preserve = no # keep passed DN ordering + +# This allows to renew certificates which have not been revoked +unique_subject = no + +# A few difference way of specifying how similar the request should look +# For type CA, the listed attributes must be the same, and the optional +# and supplied fields are just that :-) +policy = policy_anything + +# For the 'anything' policy, which defines allowed DN fields +[ policy_anything ] +countryName = optional +stateOrProvinceName = optional +localityName = optional +organizationName = optional +organizationalUnitName = optional +commonName = supplied +name = optional +emailAddress = optional + +#################################################################### +# Easy-RSA request handling +# We key off $DN_MODE to determine how to format the DN +[ req ] +default_bits = $ENV::EASYRSA_KEY_SIZE +default_keyfile = privkey.pem +default_md = $ENV::EASYRSA_DIGEST +distinguished_name = $ENV::EASYRSA_DN +x509_extensions = easyrsa_ca # The extentions to add to the self signed cert + +# A placeholder to handle the $EXTRA_EXTS feature: +#%EXTRA_EXTS% # Do NOT remove or change this line as $EXTRA_EXTS support requires it + +#################################################################### +# Easy-RSA DN (Subject) handling + +# Easy-RSA DN for cn_only support: +[ cn_only ] +commonName = Common Name (eg: your user, host, or server name) +commonName_max = 64 +commonName_default = $ENV::EASYRSA_REQ_CN + +# Easy-RSA DN for org support: +[ org ] +countryName = Country Name (2 letter code) +countryName_default = $ENV::EASYRSA_REQ_COUNTRY +countryName_min = 2 +countryName_max = 2 + +stateOrProvinceName = State or Province Name (full name) +stateOrProvinceName_default = $ENV::EASYRSA_REQ_PROVINCE + +localityName = 
Locality Name (eg, city) +localityName_default = $ENV::EASYRSA_REQ_CITY + +0.organizationName = Organization Name (eg, company) +0.organizationName_default = $ENV::EASYRSA_REQ_ORG + +organizationalUnitName = Organizational Unit Name (eg, section) +organizationalUnitName_default = $ENV::EASYRSA_REQ_OU + +commonName = Common Name (eg: your user, host, or server name) +commonName_max = 64 +commonName_default = $ENV::EASYRSA_REQ_CN + +emailAddress = Email Address +emailAddress_default = $ENV::EASYRSA_REQ_EMAIL +emailAddress_max = 64 + +#################################################################### +# Easy-RSA cert extension handling + +# This section is effectively unused as the main script sets extensions +# dynamically. This core section is left to support the odd usecase where +# a user calls openssl directly. +[ basic_exts ] +basicConstraints = CA:FALSE +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid,issuer:always + +# The Easy-RSA CA extensions +[ easyrsa_ca ] + +# PKIX recommendations: + +subjectKeyIdentifier=hash +authorityKeyIdentifier=keyid:always,issuer:always + +# This could be marked critical, but it's nice to support reading by any +# broken clients who attempt to do so. +basicConstraints = CA:true + +# Limit key usage to CA tasks. If you really want to use the generated pair as +# a self-signed cert, comment this out. +keyUsage = cRLSign, keyCertSign + +# nsCertType omitted by default. Let's try to let the deprecated stuff die. +# nsCertType = sslCA + +# CRL extensions. +[ crl_ext ] + +# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL. 
+ +# issuerAltName=issuer:copy +authorityKeyIdentifier=keyid:always,issuer:always + diff --git a/Secret/CA/pki/.rnd b/Secret/CA/pki/.rnd new file mode 100644 index 0000000..d70df68 Binary files /dev/null and b/Secret/CA/pki/.rnd differ diff --git a/Secret/CA/pki/ca.crt b/Secret/CA/pki/ca.crt new file mode 100644 index 0000000..64112dc --- /dev/null +++ b/Secret/CA/pki/ca.crt @@ -0,0 +1,21 @@ +-----BEGIN CERTIFICATE----- +MIIDYzCCAkugAwIBAgIUdC8YacgtTTMxV6EsXOCNhlrWrWUwDQYJKoZIhvcNAQEL +BQAwHjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0 +MTFaFw0zMjAzMTkwMDE0MTFaMB4xHDAaBgNVBAMME1NtYWxsIEluc3RpdHV0ZSBM +TEMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9xV25/G1PuntuWsJm +Yy92ACqD2ksCeGD6CFCR39HJT8NW/rp23F95iqjWkd/9FZoegrYA9MiM1E7bfaQ+ +IdtKeHAhKozODTv4sJSwkmwtUtVaxp91C32HXMpXS9WUoybkkJz2qFJ/hP94JvbI +uNovGkW0MVfknDc0+gp1ozW757MHPR/W0sr4ne4V2UhRUZa8+xyCdv2KPV/u0FRg +eqyIV9h/r8Bwk3ojLQGV9/vlI8nPzNQctguChA+9/a31kUAMqTsDFsR0JIEoMdpj +iwM3i9ECcucW0oZpoJZgW+kh5LYPeiFyLKjop07FjwC0Ljek24X7m4nb//mBRl7J +dOClAgMBAAGjgZgwgZUwHQYDVR0OBBYEFKNL3ah13z0nwBPkmbTRw3fNDee8MFkG +A1UdIwRSMFCAFKNL3ah13z0nwBPkmbTRw3fNDee8oSKkIDAeMRwwGgYDVQQDDBNT +bWFsbCBJbnN0aXR1dGUgTExDghR0LxhpyC1NMzFXoSxc4I2GWtatZTAMBgNVHRME +BTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAqKX/gHEpZK08 +px/2A9WeV9eOf0S++OXJG39TIIbvGCzAjxjsTDMTwrvHawFgi6EwQUvY0+dtdxOf +4fP+iizBbBw8jzUvmMTubbSdYGWXwYxlEwo3+x7yD9Du1waMbi+E1+qAzqj4WTvS +PRDjUSolPFBz11d47snKQjTzCATfaDM0DzgMDUrSGL2NmTZnqoZapgpFdP+wviyK +H6QNAGmFfqgeT1un9+mwx9NBKpoSz6Y8iAq4kthy4GXzcnIYsrd7J6rK9qe9M4Mb +sHpxis6cJ0LSV8aZy5aVgXVPgU4mJvbUhyytJCDsX2A9TeNSXXwgXN5dhsshka4c +VNVlFIXaVg== +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/certs_by_serial/95F05D64CEB9D8907681D5A528461DDA.pem b/Secret/CA/pki/certs_by_serial/95F05D64CEB9D8907681D5A528461DDA.pem new file mode 100644 index 0000000..bc6e145 --- /dev/null +++ b/Secret/CA/pki/certs_by_serial/95F05D64CEB9D8907681D5A528461DDA.pem @@ -0,0 +1,88 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial 
Number: + 95:f0:5d:64:ce:b9:d8:90:76:81:d5:a5:28:46:1d:da + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=small.example.org + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:c2:b3:c6:1f:e0:e6:54:5c:1e:0d:34:2c:02:bb: + 5f:d6:84:7d:fb:63:0c:fa:0d:33:a5:92:86:af:f7: + e8:72:86:69:fb:45:fd:90:14:9d:55:dd:22:50:b0: + be:71:94:da:68:ff:3c:46:ef:22:4a:84:ae:8e:84: + 2e:f9:d6:8c:fd:44:2e:eb:fe:95:5e:45:86:3f:f7: + 86:47:00:c1:d8:64:b4:3f:55:c8:b5:fc:69:c3:1b: + aa:54:c5:f4:b6:a6:40:3f:9f:15:ff:eb:3b:1e:5e: + d7:d4:eb:ae:ad:bc:e2:cf:4a:fe:df:3d:69:36:37: + 79:67:95:bf:43:b0:e2:d6:29:60:36:18:f8:7d:32: + 67:79:bb:30:95:ec:8d:93:46:56:13:72:93:96:ac: + 70:29:53:26:c1:d8:c7:38:4a:83:2d:56:bb:90:0f: + a4:09:fd:e6:d8:72:fd:0b:48:4f:38:d4:28:31:0f: + e3:63:d0:3d:d1:e2:ab:e1:10:12:c7:27:85:03:5d: + 7d:01:40:2e:3b:96:2e:f1:a6:a2:32:a8:bd:97:2a: + 90:6e:10:b6:6f:98:7a:e9:9f:06:01:de:0b:c9:18: + 9e:83:4c:2d:a5:5b:99:0e:19:69:77:f0:5d:e2:3d: + 37:c6:4d:73:c7:b0:e8:fb:5c:16:45:29:74:e4:31: + 99:7b + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + 2C:AD:E6:55:8E:A6:4B:DF:B1:40:E4:7C:88:CB:75:5A:65:02:6F:8B + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 Extended Key Usage: + TLS Web Server Authentication + X509v3 Key Usage: + Digital Signature, Key Encipherment + X509v3 Subject Alternative Name: + DNS:small.example.org + Signature Algorithm: sha256WithRSAEncryption + 58:e3:fd:10:09:c5:cb:15:f6:0c:0d:22:b8:56:f6:89:85:58: + 66:e2:24:64:99:b3:35:d2:bb:63:9f:f8:53:89:29:f5:75:61: + c2:34:8a:50:ac:67:fd:97:40:98:d5:8b:05:91:fb:36:f3:50: + 
ad:12:53:29:44:c0:86:b1:6f:1a:21:77:6d:43:05:84:1f:ae: + 74:8f:ba:44:49:0e:61:90:17:39:2f:6c:c6:69:9f:89:82:f8: + 22:6e:63:c6:d5:88:46:e5:30:e6:80:51:4c:fc:01:98:e3:31: + 59:20:b6:3d:36:d1:0d:42:b0:9b:8e:6a:74:34:1d:a9:fb:13: + 28:49:ae:d5:b3:83:19:38:77:f6:81:74:81:7f:d0:00:f7:22: + 01:04:70:7d:ba:d0:44:1a:e9:00:b4:20:e9:3c:87:b1:84:c1: + 79:92:f0:96:b5:69:77:d1:50:c4:26:da:8d:13:45:c0:ec:70: + 5d:59:59:8f:13:59:dc:e0:84:da:73:af:7e:99:c1:30:d2:b2: + f1:b1:ed:79:b7:2e:c7:12:88:04:55:ce:d1:71:de:8c:bd:e8: + 1f:0c:c1:14:24:2b:cc:74:b7:fa:e8:ce:d2:7b:48:fb:2b:fb: + bd:d0:98:29:bb:1c:8e:e6:1c:d3:8d:78:70:b1:c3:40:00:a3: + 48:8c:a2:f4 +-----BEGIN CERTIFICATE----- +MIIDjjCCAnagAwIBAgIRAJXwXWTOudiQdoHVpShGHdowDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMBwxGjAYBgNVBAMMEXNtYWxsLmV4YW1wbGUub3JnMIIB +IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwrPGH+DmVFweDTQsArtf1oR9 ++2MM+g0zpZKGr/focoZp+0X9kBSdVd0iULC+cZTaaP88Ru8iSoSujoQu+daM/UQu +6/6VXkWGP/eGRwDB2GS0P1XItfxpwxuqVMX0tqZAP58V/+s7Hl7X1Ouurbziz0r+ +3z1pNjd5Z5W/Q7Di1ilgNhj4fTJnebswleyNk0ZWE3KTlqxwKVMmwdjHOEqDLVa7 +kA+kCf3m2HL9C0hPONQoMQ/jY9A90eKr4RASxyeFA119AUAuO5Yu8aaiMqi9lyqQ +bhC2b5h66Z8GAd4LyRieg0wtpVuZDhlpd/Bd4j03xk1zx7Do+1wWRSl05DGZewID +AQABo4HIMIHFMAkGA1UdEwQCMAAwHQYDVR0OBBYEFCyt5lWOpkvfsUDkfIjLdVpl +Am+LMFkGA1UdIwRSMFCAFKNL3ah13z0nwBPkmbTRw3fNDee8oSKkIDAeMRwwGgYD +VQQDDBNTbWFsbCBJbnN0aXR1dGUgTExDghR0LxhpyC1NMzFXoSxc4I2GWtatZTAT +BgNVHSUEDDAKBggrBgEFBQcDATALBgNVHQ8EBAMCBaAwHAYDVR0RBBUwE4IRc21h +bGwuZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQELBQADggEBAFjj/RAJxcsV9gwNIrhW +9omFWGbiJGSZszXSu2Of+FOJKfV1YcI0ilCsZ/2XQJjViwWR+zbzUK0SUylEwIax +bxohd21DBYQfrnSPukRJDmGQFzkvbMZpn4mC+CJuY8bViEblMOaAUUz8AZjjMVkg +tj020Q1CsJuOanQ0Han7EyhJrtWzgxk4d/aBdIF/0AD3IgEEcH260EQa6QC0IOk8 +h7GEwXmS8Ja1aXfRUMQm2o0TRcDscF1ZWY8TWdzghNpzr36ZwTDSsvGx7Xm3LscS +iARVztFx3oy96B8MwRQkK8x0t/roztJ7SPsr+73QmCm7HI7mHNONeHCxw0AAo0iM +ovQ= +-----END CERTIFICATE----- diff --git 
a/Secret/CA/pki/certs_by_serial/99AACABEAF22703B05EDC426849DF177.pem b/Secret/CA/pki/certs_by_serial/99AACABEAF22703B05EDC426849DF177.pem new file mode 100644 index 0000000..a0dae9e --- /dev/null +++ b/Secret/CA/pki/certs_by_serial/99AACABEAF22703B05EDC426849DF177.pem @@ -0,0 +1,88 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + 99:aa:ca:be:af:22:70:3b:05:ed:c4:26:84:9d:f1:77 + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=gate.small.example.org + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:c1:84:ad:a4:1d:8c:86:1d:eb:87:e5:dc:33:c6: + 00:97:00:b7:ce:03:92:3c:47:ea:d1:2b:a6:ef:2a: + de:bc:58:06:5b:00:36:80:96:2f:e2:c2:7c:a6:7c: + 71:40:f9:67:a1:6c:f7:0b:d2:d4:41:81:98:99:66: + 08:93:e5:bf:b4:dc:cf:95:36:28:14:df:4d:71:f6: + d8:5d:2a:17:25:ac:4a:dc:e8:bd:d9:17:d5:36:51: + bf:a5:00:9f:66:eb:c0:ce:fa:e3:1f:ad:1f:45:40: + d7:88:bf:93:62:cf:98:09:ba:1c:7f:74:c8:90:2f: + a5:2d:78:88:64:b9:fb:3a:c5:44:29:a1:92:99:87: + 82:35:d8:96:18:27:23:89:a6:89:1e:3f:d2:1e:08: + da:55:bf:53:aa:1d:d5:8a:17:64:6f:60:1d:07:c7: + 85:87:73:33:b4:ed:a5:c4:0b:79:e4:92:45:1c:0e: + cc:00:6a:a1:de:44:4d:67:1a:fe:fc:b5:e8:c0:f8: + 44:60:a6:fb:0a:d2:f4:d9:8a:ea:d3:dc:d4:c2:18: + 1f:1c:57:c3:72:92:2a:6f:e7:81:9a:08:e7:8a:92: + ce:45:d6:17:e1:85:a9:a5:70:99:26:aa:9a:b0:c7: + fc:55:58:b8:54:9b:89:aa:b3:5a:50:db:3d:fd:21: + 27:37 + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + 16:BC:27:A4:D7:CC:6F:29:65:3A:BA:F4:5A:8D:38:84:C0:FA:FF:C7 + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 Extended Key Usage: + TLS Web Server Authentication + X509v3 Key Usage: 
+ Digital Signature, Key Encipherment + X509v3 Subject Alternative Name: + DNS:gate.small.example.org + Signature Algorithm: sha256WithRSAEncryption + 4d:42:0b:e4:65:35:a9:0a:26:03:96:eb:3e:56:52:6e:82:c1: + cd:bd:f3:45:50:a2:66:d2:65:f6:65:8e:9d:60:4e:72:53:75: + 04:02:cc:09:bb:41:b7:bd:b4:9f:d5:d0:26:75:f8:83:c1:b5: + 88:9f:b5:d5:05:07:20:6b:4b:41:ca:bf:22:49:5e:42:c3:6c: + c5:01:b2:06:af:e8:f0:b4:a5:5e:8e:14:4c:f1:1b:85:dc:33: + 19:63:ef:70:a3:02:2b:ec:19:72:58:95:04:81:78:8b:1d:05: + ef:3f:f3:2a:6b:3c:fd:ff:0b:90:81:2b:80:c0:99:bd:91:b0: + 2f:08:10:7a:1f:bb:63:3c:03:91:e8:5b:0e:69:f4:2d:75:7c: + 45:5b:c4:8d:0d:f3:4b:c9:a0:bc:9d:94:64:70:df:4f:53:a3: + 28:69:cf:fe:f3:46:e9:7a:e7:34:1e:15:f3:bb:98:b9:31:d5: + 8f:6e:e2:65:fb:0b:aa:de:a4:6d:f0:56:2a:0d:c0:51:a5:5c: + 91:ab:a8:bc:6f:65:0a:74:3c:2d:96:5c:da:0f:f1:f7:01:f3: + cc:0f:51:fe:54:d0:82:86:c2:40:60:c9:a4:81:db:9e:43:db: + 3c:66:8d:c5:2a:63:55:92:ce:9e:18:2b:2e:6b:86:7d:91:f7: + 88:c4:5c:a8 +-----BEGIN CERTIFICATE----- +MIIDmDCCAoCgAwIBAgIRAJmqyr6vInA7Be3EJoSd8XcwDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMCExHzAdBgNVBAMMFmdhdGUuc21hbGwuZXhhbXBsZS5v +cmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDBhK2kHYyGHeuH5dwz +xgCXALfOA5I8R+rRK6bvKt68WAZbADaAli/iwnymfHFA+WehbPcL0tRBgZiZZgiT +5b+03M+VNigU301x9thdKhclrErc6L3ZF9U2Ub+lAJ9m68DO+uMfrR9FQNeIv5Ni +z5gJuhx/dMiQL6UteIhkufs6xUQpoZKZh4I12JYYJyOJpokeP9IeCNpVv1OqHdWK +F2RvYB0Hx4WHczO07aXEC3nkkkUcDswAaqHeRE1nGv78tejA+ERgpvsK0vTZiurT +3NTCGB8cV8Nykipv54GaCOeKks5F1hfhhamlcJkmqpqwx/xVWLhUm4mqs1pQ2z39 +ISc3AgMBAAGjgc0wgcowCQYDVR0TBAIwADAdBgNVHQ4EFgQUFrwnpNfMbyllOrr0 +Wo04hMD6/8cwWQYDVR0jBFIwUIAUo0vdqHXfPSfAE+SZtNHDd80N57yhIqQgMB4x +HDAaBgNVBAMME1NtYWxsIEluc3RpdHV0ZSBMTEOCFHQvGGnILU0zMVehLFzgjYZa +1q1lMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAsGA1UdDwQEAwIFoDAhBgNVHREEGjAY +ghZnYXRlLnNtYWxsLmV4YW1wbGUub3JnMA0GCSqGSIb3DQEBCwUAA4IBAQBNQgvk +ZTWpCiYDlus+VlJugsHNvfNFUKJm0mX2ZY6dYE5yU3UEAswJu0G3vbSf1dAmdfiD 
+wbWIn7XVBQcga0tByr8iSV5Cw2zFAbIGr+jwtKVejhRM8RuF3DMZY+9wowIr7Bly +WJUEgXiLHQXvP/Mqazz9/wuQgSuAwJm9kbAvCBB6H7tjPAOR6FsOafQtdXxFW8SN +DfNLyaC8nZRkcN9PU6Moac/+80bpeuc0HhXzu5i5MdWPbuJl+wuq3qRt8FYqDcBR +pVyRq6i8b2UKdDwtllzaD/H3AfPMD1H+VNCChsJAYMmkgdueQ9s8Zo3FKmNVks6e +GCsua4Z9kfeIxFyo +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/certs_by_serial/DCCAF785FE1F49DD878444FEE564818A.pem b/Secret/CA/pki/certs_by_serial/DCCAF785FE1F49DD878444FEE564818A.pem new file mode 100644 index 0000000..9a74670 --- /dev/null +++ b/Secret/CA/pki/certs_by_serial/DCCAF785FE1F49DD878444FEE564818A.pem @@ -0,0 +1,85 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + dc:ca:f7:85:fe:1f:49:dd:87:84:44:fe:e5:64:81:8a + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=core + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:d2:73:dd:06:e8:d8:fd:6c:62:83:fb:39:cf:9e: + 72:75:eb:25:0f:3e:46:cb:12:9b:9f:d0:a0:de:71: + b9:3e:68:54:b7:31:eb:44:c9:80:db:13:76:cf:71: + f4:55:01:e4:77:cf:8f:19:d2:1d:5f:1e:a4:6f:ea: + 42:ca:05:26:eb:7f:48:8c:cc:bd:4d:4c:91:14:c5: + 74:7f:38:cf:22:75:48:4d:cb:96:65:e0:b1:12:0e: + c4:38:9e:ce:f0:ff:98:05:5e:c8:c4:36:9b:31:95: + 0a:4e:df:03:5d:dc:2a:58:49:83:cf:ef:e0:25:57: + 6f:71:b2:37:1f:1f:f0:ee:da:6e:23:e4:37:58:34: + 55:81:0b:4e:d4:c1:f6:51:9b:4c:7d:e4:e3:36:4e: + be:f9:82:5f:24:f4:48:b6:c2:36:18:df:3a:45:58: + 49:34:b2:44:57:9b:1c:50:ea:06:8e:f8:af:0d:6d: + e4:85:18:83:94:24:8e:e1:20:f6:ee:7a:2a:b0:93: + b7:7e:3e:fc:a3:4d:13:89:97:c4:5e:c0:80:36:e7: + ea:9f:0c:8a:c1:a0:5d:74:61:55:9d:fd:6e:b4:85: + 53:00:85:68:5c:3f:9a:aa:60:b8:ec:1f:35:f3:76: + 97:04:1b:86:52:21:8f:51:0b:c1:78:46:5d:59:76: + 1e:99 + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + 
DA:E2:A2:DA:C0:46:A1:A8:FD:77:29:AD:10:17:3E:67:2E:C4:AA:36 + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 Extended Key Usage: + TLS Web Client Authentication + X509v3 Key Usage: + Digital Signature + Signature Algorithm: sha256WithRSAEncryption + 91:8c:50:62:c7:71:a2:06:8f:a5:ff:d8:04:e8:c8:e9:f9:d6: + 14:02:80:8f:ac:94:0a:7c:cc:75:c7:5a:d7:1f:ea:49:8a:ca: + f1:45:69:ac:5e:5c:24:b4:7e:63:97:a3:e2:ab:de:0c:63:b6: + 2c:e0:ac:85:8a:08:66:91:e6:f5:a3:eb:8d:14:3f:a2:b2:9c: + 4d:9f:e5:36:ae:7b:99:39:7d:39:a6:22:a6:9c:e2:82:7d:7e: + d5:ab:0e:f9:72:c7:41:3e:b6:56:b5:b8:53:f1:54:22:09:90: + 18:dc:98:b0:a0:a0:60:8e:d1:43:86:7f:46:dd:89:7a:21:03: + 7e:68:0e:14:a4:1e:40:3c:b8:74:26:66:a3:18:c7:84:2f:9f: + 80:d5:cb:53:f2:39:65:5a:61:20:0d:bb:5d:6b:da:5b:e5:59: + 7e:33:ec:56:3d:f8:b3:69:e9:1c:87:44:e5:c0:db:35:17:b7: + d4:d0:fe:cf:40:32:b7:bd:6c:ce:62:4a:c0:c0:1e:08:ee:45: + c8:ef:66:98:4a:e6:11:53:b4:78:53:3e:d9:c5:f8:94:b8:c8: + 77:d8:a1:04:0c:1d:d4:fe:9c:9b:8e:cb:69:5c:34:5a:5e:11: + a9:dd:06:a1:8d:0d:67:c6:b0:cc:c1:d8:35:f4:ff:dd:2e:3b: + e6:46:5b:43 +-----BEGIN CERTIFICATE----- +MIIDYzCCAkugAwIBAgIRANzK94X+H0ndh4RE/uVkgYowDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMA8xDTALBgNVBAMMBGNvcmUwggEiMA0GCSqGSIb3DQEB +AQUAA4IBDwAwggEKAoIBAQDSc90G6Nj9bGKD+znPnnJ16yUPPkbLEpuf0KDecbk+ +aFS3MetEyYDbE3bPcfRVAeR3z48Z0h1fHqRv6kLKBSbrf0iMzL1NTJEUxXR/OM8i +dUhNy5Zl4LESDsQ4ns7w/5gFXsjENpsxlQpO3wNd3CpYSYPP7+AlV29xsjcfH/Du +2m4j5DdYNFWBC07UwfZRm0x95OM2Tr75gl8k9Ei2wjYY3zpFWEk0skRXmxxQ6gaO ++K8NbeSFGIOUJI7hIPbueiqwk7d+PvyjTROJl8RewIA25+qfDIrBoF10YVWd/W60 +hVMAhWhcP5qqYLjsHzXzdpcEG4ZSIY9RC8F4Rl1Zdh6ZAgMBAAGjgaowgacwCQYD +VR0TBAIwADAdBgNVHQ4EFgQU2uKi2sBGoaj9dymtEBc+Zy7EqjYwWQYDVR0jBFIw +UIAUo0vdqHXfPSfAE+SZtNHDd80N57yhIqQgMB4xHDAaBgNVBAMME1NtYWxsIElu 
+c3RpdHV0ZSBMTEOCFHQvGGnILU0zMVehLFzgjYZa1q1lMBMGA1UdJQQMMAoGCCsG +AQUFBwMCMAsGA1UdDwQEAwIHgDANBgkqhkiG9w0BAQsFAAOCAQEAkYxQYsdxogaP +pf/YBOjI6fnWFAKAj6yUCnzMdcda1x/qSYrK8UVprF5cJLR+Y5ej4qveDGO2LOCs +hYoIZpHm9aPrjRQ/orKcTZ/lNq57mTl9OaYippzign1+1asO+XLHQT62VrW4U/FU +IgmQGNyYsKCgYI7RQ4Z/Rt2JeiEDfmgOFKQeQDy4dCZmoxjHhC+fgNXLU/I5ZVph +IA27XWvaW+VZfjPsVj34s2npHIdE5cDbNRe31ND+z0Ayt71szmJKwMAeCO5FyO9m +mErmEVO0eFM+2cX4lLjId9ihBAwd1P6cm47LaVw0Wl4Rqd0GoY0NZ8awzMHYNfT/ +3S475kZbQw== +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/certs_by_serial/EE0A8C45387C14368F23883D172135C8.pem b/Secret/CA/pki/certs_by_serial/EE0A8C45387C14368F23883D172135C8.pem new file mode 100644 index 0000000..0aded0d --- /dev/null +++ b/Secret/CA/pki/certs_by_serial/EE0A8C45387C14368F23883D172135C8.pem @@ -0,0 +1,88 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + ee:0a:8c:45:38:7c:14:36:8f:23:88:3d:17:21:35:c8 + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=core.small.example.org + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:a5:a0:85:99:10:99:2f:21:8b:a4:dd:de:36:5c: + 1c:5d:7a:43:78:09:74:28:76:35:db:52:00:d2:74: + 83:53:e3:a2:3d:77:ec:4d:56:90:7c:f3:26:94:47: + 6b:2d:a2:d4:bb:22:4c:1d:73:a3:6c:c3:70:8c:a0: + fd:89:3f:8b:eb:59:b8:22:62:42:a7:7c:d7:c9:ee: + 74:bb:8e:38:20:f7:13:48:3a:f1:a3:e3:6e:18:d0: + 8d:dc:ef:ae:54:33:db:30:50:09:f2:5f:25:7a:a4: + 09:9a:65:5c:ca:fc:44:35:76:74:5e:4b:fe:cd:55: + a9:3e:bd:36:4e:8d:a5:bc:53:f4:3d:9f:59:c7:a9: + ab:08:9c:08:e8:0a:13:97:97:07:a6:a0:86:15:44: + 6e:22:13:85:96:ae:64:8a:80:c5:09:83:c1:4d:88: + 3b:ee:0c:b7:70:eb:c7:26:15:c6:b6:63:b4:ff:50: + 71:f1:35:ed:30:6f:b2:44:06:86:5c:bd:90:7f:80: + dd:c9:d2:cc:07:55:f3:c1:29:f5:36:bd:bf:af:7c: + 18:6c:47:41:55:5b:6f:ec:d3:ef:d8:2d:5d:83:02: + 71:40:4f:95:24:14:39:14:2a:1e:a4:36:65:f5:38: + 
b6:6e:42:f3:bb:c1:b9:aa:5a:e0:87:28:6a:5c:e5: + 81:c3 + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + 9E:B9:DA:54:5F:16:1B:9F:EF:60:EB:5E:68:3E:10:35:18:BC:D6:10 + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 Extended Key Usage: + TLS Web Server Authentication + X509v3 Key Usage: + Digital Signature, Key Encipherment + X509v3 Subject Alternative Name: + DNS:core.small.example.org + Signature Algorithm: sha256WithRSAEncryption + 2a:e0:b2:65:09:a0:7b:42:a7:98:fc:09:df:28:88:f8:17:fe: + ae:46:6c:1c:c3:c7:18:7a:6e:d5:91:a4:dc:33:43:fe:26:23: + 12:f5:79:dd:9b:10:d2:d1:b9:db:dc:93:f6:f2:b7:23:9a:9e: + 49:ba:af:51:d1:39:7d:f9:99:ae:96:1f:84:96:6d:0c:90:8e: + 55:40:2e:15:76:24:72:0e:e3:5f:0c:40:ed:bf:57:a3:86:0b: + 5a:6c:5c:09:9b:fd:72:c7:20:56:a4:1e:dc:07:4a:b2:da:a8: + dc:7b:21:2e:1b:62:50:0f:22:0a:15:98:a1:4f:27:b0:15:49: + c1:b6:a2:87:f9:36:64:8b:5d:4d:36:60:f8:b3:4f:73:2b:64: + e7:7f:e4:c9:f3:d1:50:4b:1f:51:9c:27:eb:22:68:95:e2:49: + b4:88:98:ae:4c:47:67:0a:7a:32:ae:33:06:e8:8a:0d:28:12: + 83:85:df:f4:7c:13:0a:68:df:6c:2d:43:a8:57:ea:a2:63:e7: + 66:b0:07:7d:c8:18:52:c5:d7:69:5f:cf:4d:a3:ec:b2:3b:e6: + 51:ac:5d:e0:8b:e9:d7:67:8c:33:f8:9b:6f:13:20:69:73:e1: + 1f:f2:80:46:cb:e0:6a:0b:a8:50:65:93:13:49:51:97:6b:69: + 11:9a:2b:27 +-----BEGIN CERTIFICATE----- +MIIDmDCCAoCgAwIBAgIRAO4KjEU4fBQ2jyOIPRchNcgwDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMCExHzAdBgNVBAMMFmNvcmUuc21hbGwuZXhhbXBsZS5v +cmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCloIWZEJkvIYuk3d42 +XBxdekN4CXQodjXbUgDSdINT46I9d+xNVpB88yaUR2stotS7Ikwdc6Nsw3CMoP2J +P4vrWbgiYkKnfNfJ7nS7jjgg9xNIOvGj424Y0I3c765UM9swUAnyXyV6pAmaZVzK +/EQ1dnReS/7NVak+vTZOjaW8U/Q9n1nHqasInAjoChOXlwemoIYVRG4iE4WWrmSK 
+gMUJg8FNiDvuDLdw68cmFca2Y7T/UHHxNe0wb7JEBoZcvZB/gN3J0swHVfPBKfU2 +vb+vfBhsR0FVW2/s0+/YLV2DAnFAT5UkFDkUKh6kNmX1OLZuQvO7wbmqWuCHKGpc +5YHDAgMBAAGjgc0wgcowCQYDVR0TBAIwADAdBgNVHQ4EFgQUnrnaVF8WG5/vYOte +aD4QNRi81hAwWQYDVR0jBFIwUIAUo0vdqHXfPSfAE+SZtNHDd80N57yhIqQgMB4x +HDAaBgNVBAMME1NtYWxsIEluc3RpdHV0ZSBMTEOCFHQvGGnILU0zMVehLFzgjYZa +1q1lMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAsGA1UdDwQEAwIFoDAhBgNVHREEGjAY +ghZjb3JlLnNtYWxsLmV4YW1wbGUub3JnMA0GCSqGSIb3DQEBCwUAA4IBAQAq4LJl +CaB7QqeY/AnfKIj4F/6uRmwcw8cYem7VkaTcM0P+JiMS9XndmxDS0bnb3JP28rcj +mp5Juq9R0Tl9+Zmulh+Elm0MkI5VQC4VdiRyDuNfDEDtv1ejhgtabFwJm/1yxyBW +pB7cB0qy2qjceyEuG2JQDyIKFZihTyewFUnBtqKH+TZki11NNmD4s09zK2Tnf+TJ +89FQSx9RnCfrImiV4km0iJiuTEdnCnoyrjMG6IoNKBKDhd/0fBMKaN9sLUOoV+qi +Y+dmsAd9yBhSxddpX89No+yyO+ZRrF3gi+nXZ4wz+JtvEyBpc+Ef8oBGy+BqC6hQ +ZZMTSVGXa2kRmisn +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/extensions.temp b/Secret/CA/pki/extensions.temp new file mode 100644 index 0000000..5680ec9 --- /dev/null +++ b/Secret/CA/pki/extensions.temp @@ -0,0 +1,15 @@ +# X509 extensions added to every signed cert + +# This file is included for every cert signed, and by default does nothing. 
+# It could be used to add values every cert should have, such as a CDP as +# demonstrated in the following example: + +#crlDistributionPoints = URI:http://example.net/pki/my_ca.crl +# X509 extensions for a client + +basicConstraints = CA:FALSE +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid,issuer:always +extendedKeyUsage = clientAuth +keyUsage = digitalSignature + diff --git a/Secret/CA/pki/index.txt b/Secret/CA/pki/index.txt new file mode 100644 index 0000000..46f5d7f --- /dev/null +++ b/Secret/CA/pki/index.txt @@ -0,0 +1,4 @@ +V 250306001411Z 95F05D64CEB9D8907681D5A528461DDA unknown /CN=small.example.org +V 250306001411Z 99AACABEAF22703B05EDC426849DF177 unknown /CN=gate.small.example.org +V 250306001411Z EE0A8C45387C14368F23883D172135C8 unknown /CN=core.small.example.org +V 250306001411Z DCCAF785FE1F49DD878444FEE564818A unknown /CN=core diff --git a/Secret/CA/pki/index.txt.attr b/Secret/CA/pki/index.txt.attr new file mode 100644 index 0000000..3a7e39e --- /dev/null +++ b/Secret/CA/pki/index.txt.attr @@ -0,0 +1 @@ +unique_subject = no diff --git a/Secret/CA/pki/index.txt.attr.old b/Secret/CA/pki/index.txt.attr.old new file mode 100644 index 0000000..3a7e39e --- /dev/null +++ b/Secret/CA/pki/index.txt.attr.old @@ -0,0 +1 @@ +unique_subject = no diff --git a/Secret/CA/pki/index.txt.old b/Secret/CA/pki/index.txt.old new file mode 100644 index 0000000..f37651f --- /dev/null +++ b/Secret/CA/pki/index.txt.old @@ -0,0 +1,3 @@ +V 250306001411Z 95F05D64CEB9D8907681D5A528461DDA unknown /CN=small.example.org +V 250306001411Z 99AACABEAF22703B05EDC426849DF177 unknown /CN=gate.small.example.org +V 250306001411Z EE0A8C45387C14368F23883D172135C8 unknown /CN=core.small.example.org diff --git a/Secret/CA/pki/issued/core.crt b/Secret/CA/pki/issued/core.crt new file mode 100644 index 0000000..9a74670 --- /dev/null +++ b/Secret/CA/pki/issued/core.crt @@ -0,0 +1,85 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + 
dc:ca:f7:85:fe:1f:49:dd:87:84:44:fe:e5:64:81:8a + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=core + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:d2:73:dd:06:e8:d8:fd:6c:62:83:fb:39:cf:9e: + 72:75:eb:25:0f:3e:46:cb:12:9b:9f:d0:a0:de:71: + b9:3e:68:54:b7:31:eb:44:c9:80:db:13:76:cf:71: + f4:55:01:e4:77:cf:8f:19:d2:1d:5f:1e:a4:6f:ea: + 42:ca:05:26:eb:7f:48:8c:cc:bd:4d:4c:91:14:c5: + 74:7f:38:cf:22:75:48:4d:cb:96:65:e0:b1:12:0e: + c4:38:9e:ce:f0:ff:98:05:5e:c8:c4:36:9b:31:95: + 0a:4e:df:03:5d:dc:2a:58:49:83:cf:ef:e0:25:57: + 6f:71:b2:37:1f:1f:f0:ee:da:6e:23:e4:37:58:34: + 55:81:0b:4e:d4:c1:f6:51:9b:4c:7d:e4:e3:36:4e: + be:f9:82:5f:24:f4:48:b6:c2:36:18:df:3a:45:58: + 49:34:b2:44:57:9b:1c:50:ea:06:8e:f8:af:0d:6d: + e4:85:18:83:94:24:8e:e1:20:f6:ee:7a:2a:b0:93: + b7:7e:3e:fc:a3:4d:13:89:97:c4:5e:c0:80:36:e7: + ea:9f:0c:8a:c1:a0:5d:74:61:55:9d:fd:6e:b4:85: + 53:00:85:68:5c:3f:9a:aa:60:b8:ec:1f:35:f3:76: + 97:04:1b:86:52:21:8f:51:0b:c1:78:46:5d:59:76: + 1e:99 + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + DA:E2:A2:DA:C0:46:A1:A8:FD:77:29:AD:10:17:3E:67:2E:C4:AA:36 + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 Extended Key Usage: + TLS Web Client Authentication + X509v3 Key Usage: + Digital Signature + Signature Algorithm: sha256WithRSAEncryption + 91:8c:50:62:c7:71:a2:06:8f:a5:ff:d8:04:e8:c8:e9:f9:d6: + 14:02:80:8f:ac:94:0a:7c:cc:75:c7:5a:d7:1f:ea:49:8a:ca: + f1:45:69:ac:5e:5c:24:b4:7e:63:97:a3:e2:ab:de:0c:63:b6: + 2c:e0:ac:85:8a:08:66:91:e6:f5:a3:eb:8d:14:3f:a2:b2:9c: + 4d:9f:e5:36:ae:7b:99:39:7d:39:a6:22:a6:9c:e2:82:7d:7e: + 
d5:ab:0e:f9:72:c7:41:3e:b6:56:b5:b8:53:f1:54:22:09:90: + 18:dc:98:b0:a0:a0:60:8e:d1:43:86:7f:46:dd:89:7a:21:03: + 7e:68:0e:14:a4:1e:40:3c:b8:74:26:66:a3:18:c7:84:2f:9f: + 80:d5:cb:53:f2:39:65:5a:61:20:0d:bb:5d:6b:da:5b:e5:59: + 7e:33:ec:56:3d:f8:b3:69:e9:1c:87:44:e5:c0:db:35:17:b7: + d4:d0:fe:cf:40:32:b7:bd:6c:ce:62:4a:c0:c0:1e:08:ee:45: + c8:ef:66:98:4a:e6:11:53:b4:78:53:3e:d9:c5:f8:94:b8:c8: + 77:d8:a1:04:0c:1d:d4:fe:9c:9b:8e:cb:69:5c:34:5a:5e:11: + a9:dd:06:a1:8d:0d:67:c6:b0:cc:c1:d8:35:f4:ff:dd:2e:3b: + e6:46:5b:43 +-----BEGIN CERTIFICATE----- +MIIDYzCCAkugAwIBAgIRANzK94X+H0ndh4RE/uVkgYowDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMA8xDTALBgNVBAMMBGNvcmUwggEiMA0GCSqGSIb3DQEB +AQUAA4IBDwAwggEKAoIBAQDSc90G6Nj9bGKD+znPnnJ16yUPPkbLEpuf0KDecbk+ +aFS3MetEyYDbE3bPcfRVAeR3z48Z0h1fHqRv6kLKBSbrf0iMzL1NTJEUxXR/OM8i +dUhNy5Zl4LESDsQ4ns7w/5gFXsjENpsxlQpO3wNd3CpYSYPP7+AlV29xsjcfH/Du +2m4j5DdYNFWBC07UwfZRm0x95OM2Tr75gl8k9Ei2wjYY3zpFWEk0skRXmxxQ6gaO ++K8NbeSFGIOUJI7hIPbueiqwk7d+PvyjTROJl8RewIA25+qfDIrBoF10YVWd/W60 +hVMAhWhcP5qqYLjsHzXzdpcEG4ZSIY9RC8F4Rl1Zdh6ZAgMBAAGjgaowgacwCQYD +VR0TBAIwADAdBgNVHQ4EFgQU2uKi2sBGoaj9dymtEBc+Zy7EqjYwWQYDVR0jBFIw +UIAUo0vdqHXfPSfAE+SZtNHDd80N57yhIqQgMB4xHDAaBgNVBAMME1NtYWxsIElu +c3RpdHV0ZSBMTEOCFHQvGGnILU0zMVehLFzgjYZa1q1lMBMGA1UdJQQMMAoGCCsG +AQUFBwMCMAsGA1UdDwQEAwIHgDANBgkqhkiG9w0BAQsFAAOCAQEAkYxQYsdxogaP +pf/YBOjI6fnWFAKAj6yUCnzMdcda1x/qSYrK8UVprF5cJLR+Y5ej4qveDGO2LOCs +hYoIZpHm9aPrjRQ/orKcTZ/lNq57mTl9OaYippzign1+1asO+XLHQT62VrW4U/FU +IgmQGNyYsKCgYI7RQ4Z/Rt2JeiEDfmgOFKQeQDy4dCZmoxjHhC+fgNXLU/I5ZVph +IA27XWvaW+VZfjPsVj34s2npHIdE5cDbNRe31ND+z0Ayt71szmJKwMAeCO5FyO9m +mErmEVO0eFM+2cX4lLjId9ihBAwd1P6cm47LaVw0Wl4Rqd0GoY0NZ8awzMHYNfT/ +3S475kZbQw== +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/issued/core.small.example.org.crt b/Secret/CA/pki/issued/core.small.example.org.crt new file mode 100644 index 0000000..0aded0d --- /dev/null +++ 
b/Secret/CA/pki/issued/core.small.example.org.crt @@ -0,0 +1,88 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + ee:0a:8c:45:38:7c:14:36:8f:23:88:3d:17:21:35:c8 + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=core.small.example.org + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:a5:a0:85:99:10:99:2f:21:8b:a4:dd:de:36:5c: + 1c:5d:7a:43:78:09:74:28:76:35:db:52:00:d2:74: + 83:53:e3:a2:3d:77:ec:4d:56:90:7c:f3:26:94:47: + 6b:2d:a2:d4:bb:22:4c:1d:73:a3:6c:c3:70:8c:a0: + fd:89:3f:8b:eb:59:b8:22:62:42:a7:7c:d7:c9:ee: + 74:bb:8e:38:20:f7:13:48:3a:f1:a3:e3:6e:18:d0: + 8d:dc:ef:ae:54:33:db:30:50:09:f2:5f:25:7a:a4: + 09:9a:65:5c:ca:fc:44:35:76:74:5e:4b:fe:cd:55: + a9:3e:bd:36:4e:8d:a5:bc:53:f4:3d:9f:59:c7:a9: + ab:08:9c:08:e8:0a:13:97:97:07:a6:a0:86:15:44: + 6e:22:13:85:96:ae:64:8a:80:c5:09:83:c1:4d:88: + 3b:ee:0c:b7:70:eb:c7:26:15:c6:b6:63:b4:ff:50: + 71:f1:35:ed:30:6f:b2:44:06:86:5c:bd:90:7f:80: + dd:c9:d2:cc:07:55:f3:c1:29:f5:36:bd:bf:af:7c: + 18:6c:47:41:55:5b:6f:ec:d3:ef:d8:2d:5d:83:02: + 71:40:4f:95:24:14:39:14:2a:1e:a4:36:65:f5:38: + b6:6e:42:f3:bb:c1:b9:aa:5a:e0:87:28:6a:5c:e5: + 81:c3 + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + 9E:B9:DA:54:5F:16:1B:9F:EF:60:EB:5E:68:3E:10:35:18:BC:D6:10 + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 Extended Key Usage: + TLS Web Server Authentication + X509v3 Key Usage: + Digital Signature, Key Encipherment + X509v3 Subject Alternative Name: + DNS:core.small.example.org + Signature Algorithm: sha256WithRSAEncryption + 2a:e0:b2:65:09:a0:7b:42:a7:98:fc:09:df:28:88:f8:17:fe: + 
ae:46:6c:1c:c3:c7:18:7a:6e:d5:91:a4:dc:33:43:fe:26:23: + 12:f5:79:dd:9b:10:d2:d1:b9:db:dc:93:f6:f2:b7:23:9a:9e: + 49:ba:af:51:d1:39:7d:f9:99:ae:96:1f:84:96:6d:0c:90:8e: + 55:40:2e:15:76:24:72:0e:e3:5f:0c:40:ed:bf:57:a3:86:0b: + 5a:6c:5c:09:9b:fd:72:c7:20:56:a4:1e:dc:07:4a:b2:da:a8: + dc:7b:21:2e:1b:62:50:0f:22:0a:15:98:a1:4f:27:b0:15:49: + c1:b6:a2:87:f9:36:64:8b:5d:4d:36:60:f8:b3:4f:73:2b:64: + e7:7f:e4:c9:f3:d1:50:4b:1f:51:9c:27:eb:22:68:95:e2:49: + b4:88:98:ae:4c:47:67:0a:7a:32:ae:33:06:e8:8a:0d:28:12: + 83:85:df:f4:7c:13:0a:68:df:6c:2d:43:a8:57:ea:a2:63:e7: + 66:b0:07:7d:c8:18:52:c5:d7:69:5f:cf:4d:a3:ec:b2:3b:e6: + 51:ac:5d:e0:8b:e9:d7:67:8c:33:f8:9b:6f:13:20:69:73:e1: + 1f:f2:80:46:cb:e0:6a:0b:a8:50:65:93:13:49:51:97:6b:69: + 11:9a:2b:27 +-----BEGIN CERTIFICATE----- +MIIDmDCCAoCgAwIBAgIRAO4KjEU4fBQ2jyOIPRchNcgwDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMCExHzAdBgNVBAMMFmNvcmUuc21hbGwuZXhhbXBsZS5v +cmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCloIWZEJkvIYuk3d42 +XBxdekN4CXQodjXbUgDSdINT46I9d+xNVpB88yaUR2stotS7Ikwdc6Nsw3CMoP2J +P4vrWbgiYkKnfNfJ7nS7jjgg9xNIOvGj424Y0I3c765UM9swUAnyXyV6pAmaZVzK +/EQ1dnReS/7NVak+vTZOjaW8U/Q9n1nHqasInAjoChOXlwemoIYVRG4iE4WWrmSK +gMUJg8FNiDvuDLdw68cmFca2Y7T/UHHxNe0wb7JEBoZcvZB/gN3J0swHVfPBKfU2 +vb+vfBhsR0FVW2/s0+/YLV2DAnFAT5UkFDkUKh6kNmX1OLZuQvO7wbmqWuCHKGpc +5YHDAgMBAAGjgc0wgcowCQYDVR0TBAIwADAdBgNVHQ4EFgQUnrnaVF8WG5/vYOte +aD4QNRi81hAwWQYDVR0jBFIwUIAUo0vdqHXfPSfAE+SZtNHDd80N57yhIqQgMB4x +HDAaBgNVBAMME1NtYWxsIEluc3RpdHV0ZSBMTEOCFHQvGGnILU0zMVehLFzgjYZa +1q1lMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAsGA1UdDwQEAwIFoDAhBgNVHREEGjAY +ghZjb3JlLnNtYWxsLmV4YW1wbGUub3JnMA0GCSqGSIb3DQEBCwUAA4IBAQAq4LJl +CaB7QqeY/AnfKIj4F/6uRmwcw8cYem7VkaTcM0P+JiMS9XndmxDS0bnb3JP28rcj +mp5Juq9R0Tl9+Zmulh+Elm0MkI5VQC4VdiRyDuNfDEDtv1ejhgtabFwJm/1yxyBW +pB7cB0qy2qjceyEuG2JQDyIKFZihTyewFUnBtqKH+TZki11NNmD4s09zK2Tnf+TJ +89FQSx9RnCfrImiV4km0iJiuTEdnCnoyrjMG6IoNKBKDhd/0fBMKaN9sLUOoV+qi 
+Y+dmsAd9yBhSxddpX89No+yyO+ZRrF3gi+nXZ4wz+JtvEyBpc+Ef8oBGy+BqC6hQ +ZZMTSVGXa2kRmisn +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/issued/gate.small.example.org.crt b/Secret/CA/pki/issued/gate.small.example.org.crt new file mode 100644 index 0000000..a0dae9e --- /dev/null +++ b/Secret/CA/pki/issued/gate.small.example.org.crt @@ -0,0 +1,88 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + 99:aa:ca:be:af:22:70:3b:05:ed:c4:26:84:9d:f1:77 + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=gate.small.example.org + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:c1:84:ad:a4:1d:8c:86:1d:eb:87:e5:dc:33:c6: + 00:97:00:b7:ce:03:92:3c:47:ea:d1:2b:a6:ef:2a: + de:bc:58:06:5b:00:36:80:96:2f:e2:c2:7c:a6:7c: + 71:40:f9:67:a1:6c:f7:0b:d2:d4:41:81:98:99:66: + 08:93:e5:bf:b4:dc:cf:95:36:28:14:df:4d:71:f6: + d8:5d:2a:17:25:ac:4a:dc:e8:bd:d9:17:d5:36:51: + bf:a5:00:9f:66:eb:c0:ce:fa:e3:1f:ad:1f:45:40: + d7:88:bf:93:62:cf:98:09:ba:1c:7f:74:c8:90:2f: + a5:2d:78:88:64:b9:fb:3a:c5:44:29:a1:92:99:87: + 82:35:d8:96:18:27:23:89:a6:89:1e:3f:d2:1e:08: + da:55:bf:53:aa:1d:d5:8a:17:64:6f:60:1d:07:c7: + 85:87:73:33:b4:ed:a5:c4:0b:79:e4:92:45:1c:0e: + cc:00:6a:a1:de:44:4d:67:1a:fe:fc:b5:e8:c0:f8: + 44:60:a6:fb:0a:d2:f4:d9:8a:ea:d3:dc:d4:c2:18: + 1f:1c:57:c3:72:92:2a:6f:e7:81:9a:08:e7:8a:92: + ce:45:d6:17:e1:85:a9:a5:70:99:26:aa:9a:b0:c7: + fc:55:58:b8:54:9b:89:aa:b3:5a:50:db:3d:fd:21: + 27:37 + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + 16:BC:27:A4:D7:CC:6F:29:65:3A:BA:F4:5A:8D:38:84:C0:FA:FF:C7 + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 
Extended Key Usage: + TLS Web Server Authentication + X509v3 Key Usage: + Digital Signature, Key Encipherment + X509v3 Subject Alternative Name: + DNS:gate.small.example.org + Signature Algorithm: sha256WithRSAEncryption + 4d:42:0b:e4:65:35:a9:0a:26:03:96:eb:3e:56:52:6e:82:c1: + cd:bd:f3:45:50:a2:66:d2:65:f6:65:8e:9d:60:4e:72:53:75: + 04:02:cc:09:bb:41:b7:bd:b4:9f:d5:d0:26:75:f8:83:c1:b5: + 88:9f:b5:d5:05:07:20:6b:4b:41:ca:bf:22:49:5e:42:c3:6c: + c5:01:b2:06:af:e8:f0:b4:a5:5e:8e:14:4c:f1:1b:85:dc:33: + 19:63:ef:70:a3:02:2b:ec:19:72:58:95:04:81:78:8b:1d:05: + ef:3f:f3:2a:6b:3c:fd:ff:0b:90:81:2b:80:c0:99:bd:91:b0: + 2f:08:10:7a:1f:bb:63:3c:03:91:e8:5b:0e:69:f4:2d:75:7c: + 45:5b:c4:8d:0d:f3:4b:c9:a0:bc:9d:94:64:70:df:4f:53:a3: + 28:69:cf:fe:f3:46:e9:7a:e7:34:1e:15:f3:bb:98:b9:31:d5: + 8f:6e:e2:65:fb:0b:aa:de:a4:6d:f0:56:2a:0d:c0:51:a5:5c: + 91:ab:a8:bc:6f:65:0a:74:3c:2d:96:5c:da:0f:f1:f7:01:f3: + cc:0f:51:fe:54:d0:82:86:c2:40:60:c9:a4:81:db:9e:43:db: + 3c:66:8d:c5:2a:63:55:92:ce:9e:18:2b:2e:6b:86:7d:91:f7: + 88:c4:5c:a8 +-----BEGIN CERTIFICATE----- +MIIDmDCCAoCgAwIBAgIRAJmqyr6vInA7Be3EJoSd8XcwDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMCExHzAdBgNVBAMMFmdhdGUuc21hbGwuZXhhbXBsZS5v +cmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDBhK2kHYyGHeuH5dwz +xgCXALfOA5I8R+rRK6bvKt68WAZbADaAli/iwnymfHFA+WehbPcL0tRBgZiZZgiT +5b+03M+VNigU301x9thdKhclrErc6L3ZF9U2Ub+lAJ9m68DO+uMfrR9FQNeIv5Ni +z5gJuhx/dMiQL6UteIhkufs6xUQpoZKZh4I12JYYJyOJpokeP9IeCNpVv1OqHdWK +F2RvYB0Hx4WHczO07aXEC3nkkkUcDswAaqHeRE1nGv78tejA+ERgpvsK0vTZiurT +3NTCGB8cV8Nykipv54GaCOeKks5F1hfhhamlcJkmqpqwx/xVWLhUm4mqs1pQ2z39 +ISc3AgMBAAGjgc0wgcowCQYDVR0TBAIwADAdBgNVHQ4EFgQUFrwnpNfMbyllOrr0 +Wo04hMD6/8cwWQYDVR0jBFIwUIAUo0vdqHXfPSfAE+SZtNHDd80N57yhIqQgMB4x +HDAaBgNVBAMME1NtYWxsIEluc3RpdHV0ZSBMTEOCFHQvGGnILU0zMVehLFzgjYZa +1q1lMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAsGA1UdDwQEAwIFoDAhBgNVHREEGjAY +ghZnYXRlLnNtYWxsLmV4YW1wbGUub3JnMA0GCSqGSIb3DQEBCwUAA4IBAQBNQgvk 
+ZTWpCiYDlus+VlJugsHNvfNFUKJm0mX2ZY6dYE5yU3UEAswJu0G3vbSf1dAmdfiD +wbWIn7XVBQcga0tByr8iSV5Cw2zFAbIGr+jwtKVejhRM8RuF3DMZY+9wowIr7Bly +WJUEgXiLHQXvP/Mqazz9/wuQgSuAwJm9kbAvCBB6H7tjPAOR6FsOafQtdXxFW8SN +DfNLyaC8nZRkcN9PU6Moac/+80bpeuc0HhXzu5i5MdWPbuJl+wuq3qRt8FYqDcBR +pVyRq6i8b2UKdDwtllzaD/H3AfPMD1H+VNCChsJAYMmkgdueQ9s8Zo3FKmNVks6e +GCsua4Z9kfeIxFyo +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/issued/small.example.org.crt b/Secret/CA/pki/issued/small.example.org.crt new file mode 100644 index 0000000..bc6e145 --- /dev/null +++ b/Secret/CA/pki/issued/small.example.org.crt @@ -0,0 +1,88 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + 95:f0:5d:64:ce:b9:d8:90:76:81:d5:a5:28:46:1d:da + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=Small Institute LLC + Validity + Not Before: Mar 22 00:14:11 2022 GMT + Not After : Mar 6 00:14:11 2025 GMT + Subject: CN=small.example.org + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:c2:b3:c6:1f:e0:e6:54:5c:1e:0d:34:2c:02:bb: + 5f:d6:84:7d:fb:63:0c:fa:0d:33:a5:92:86:af:f7: + e8:72:86:69:fb:45:fd:90:14:9d:55:dd:22:50:b0: + be:71:94:da:68:ff:3c:46:ef:22:4a:84:ae:8e:84: + 2e:f9:d6:8c:fd:44:2e:eb:fe:95:5e:45:86:3f:f7: + 86:47:00:c1:d8:64:b4:3f:55:c8:b5:fc:69:c3:1b: + aa:54:c5:f4:b6:a6:40:3f:9f:15:ff:eb:3b:1e:5e: + d7:d4:eb:ae:ad:bc:e2:cf:4a:fe:df:3d:69:36:37: + 79:67:95:bf:43:b0:e2:d6:29:60:36:18:f8:7d:32: + 67:79:bb:30:95:ec:8d:93:46:56:13:72:93:96:ac: + 70:29:53:26:c1:d8:c7:38:4a:83:2d:56:bb:90:0f: + a4:09:fd:e6:d8:72:fd:0b:48:4f:38:d4:28:31:0f: + e3:63:d0:3d:d1:e2:ab:e1:10:12:c7:27:85:03:5d: + 7d:01:40:2e:3b:96:2e:f1:a6:a2:32:a8:bd:97:2a: + 90:6e:10:b6:6f:98:7a:e9:9f:06:01:de:0b:c9:18: + 9e:83:4c:2d:a5:5b:99:0e:19:69:77:f0:5d:e2:3d: + 37:c6:4d:73:c7:b0:e8:fb:5c:16:45:29:74:e4:31: + 99:7b + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Basic Constraints: + CA:FALSE + X509v3 Subject Key Identifier: + 
2C:AD:E6:55:8E:A6:4B:DF:B1:40:E4:7C:88:CB:75:5A:65:02:6F:8B + X509v3 Authority Key Identifier: + keyid:A3:4B:DD:A8:75:DF:3D:27:C0:13:E4:99:B4:D1:C3:77:CD:0D:E7:BC + DirName:/CN=Small Institute LLC + serial:74:2F:18:69:C8:2D:4D:33:31:57:A1:2C:5C:E0:8D:86:5A:D6:AD:65 + + X509v3 Extended Key Usage: + TLS Web Server Authentication + X509v3 Key Usage: + Digital Signature, Key Encipherment + X509v3 Subject Alternative Name: + DNS:small.example.org + Signature Algorithm: sha256WithRSAEncryption + 58:e3:fd:10:09:c5:cb:15:f6:0c:0d:22:b8:56:f6:89:85:58: + 66:e2:24:64:99:b3:35:d2:bb:63:9f:f8:53:89:29:f5:75:61: + c2:34:8a:50:ac:67:fd:97:40:98:d5:8b:05:91:fb:36:f3:50: + ad:12:53:29:44:c0:86:b1:6f:1a:21:77:6d:43:05:84:1f:ae: + 74:8f:ba:44:49:0e:61:90:17:39:2f:6c:c6:69:9f:89:82:f8: + 22:6e:63:c6:d5:88:46:e5:30:e6:80:51:4c:fc:01:98:e3:31: + 59:20:b6:3d:36:d1:0d:42:b0:9b:8e:6a:74:34:1d:a9:fb:13: + 28:49:ae:d5:b3:83:19:38:77:f6:81:74:81:7f:d0:00:f7:22: + 01:04:70:7d:ba:d0:44:1a:e9:00:b4:20:e9:3c:87:b1:84:c1: + 79:92:f0:96:b5:69:77:d1:50:c4:26:da:8d:13:45:c0:ec:70: + 5d:59:59:8f:13:59:dc:e0:84:da:73:af:7e:99:c1:30:d2:b2: + f1:b1:ed:79:b7:2e:c7:12:88:04:55:ce:d1:71:de:8c:bd:e8: + 1f:0c:c1:14:24:2b:cc:74:b7:fa:e8:ce:d2:7b:48:fb:2b:fb: + bd:d0:98:29:bb:1c:8e:e6:1c:d3:8d:78:70:b1:c3:40:00:a3: + 48:8c:a2:f4 +-----BEGIN CERTIFICATE----- +MIIDjjCCAnagAwIBAgIRAJXwXWTOudiQdoHVpShGHdowDQYJKoZIhvcNAQELBQAw +HjEcMBoGA1UEAwwTU21hbGwgSW5zdGl0dXRlIExMQzAeFw0yMjAzMjIwMDE0MTFa +Fw0yNTAzMDYwMDE0MTFaMBwxGjAYBgNVBAMMEXNtYWxsLmV4YW1wbGUub3JnMIIB +IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwrPGH+DmVFweDTQsArtf1oR9 ++2MM+g0zpZKGr/focoZp+0X9kBSdVd0iULC+cZTaaP88Ru8iSoSujoQu+daM/UQu +6/6VXkWGP/eGRwDB2GS0P1XItfxpwxuqVMX0tqZAP58V/+s7Hl7X1Ouurbziz0r+ +3z1pNjd5Z5W/Q7Di1ilgNhj4fTJnebswleyNk0ZWE3KTlqxwKVMmwdjHOEqDLVa7 +kA+kCf3m2HL9C0hPONQoMQ/jY9A90eKr4RASxyeFA119AUAuO5Yu8aaiMqi9lyqQ +bhC2b5h66Z8GAd4LyRieg0wtpVuZDhlpd/Bd4j03xk1zx7Do+1wWRSl05DGZewID +AQABo4HIMIHFMAkGA1UdEwQCMAAwHQYDVR0OBBYEFCyt5lWOpkvfsUDkfIjLdVpl 
+Am+LMFkGA1UdIwRSMFCAFKNL3ah13z0nwBPkmbTRw3fNDee8oSKkIDAeMRwwGgYD +VQQDDBNTbWFsbCBJbnN0aXR1dGUgTExDghR0LxhpyC1NMzFXoSxc4I2GWtatZTAT +BgNVHSUEDDAKBggrBgEFBQcDATALBgNVHQ8EBAMCBaAwHAYDVR0RBBUwE4IRc21h +bGwuZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQELBQADggEBAFjj/RAJxcsV9gwNIrhW +9omFWGbiJGSZszXSu2Of+FOJKfV1YcI0ilCsZ/2XQJjViwWR+zbzUK0SUylEwIax +bxohd21DBYQfrnSPukRJDmGQFzkvbMZpn4mC+CJuY8bViEblMOaAUUz8AZjjMVkg +tj020Q1CsJuOanQ0Han7EyhJrtWzgxk4d/aBdIF/0AD3IgEEcH260EQa6QC0IOk8 +h7GEwXmS8Ja1aXfRUMQm2o0TRcDscF1ZWY8TWdzghNpzr36ZwTDSsvGx7Xm3LscS +iARVztFx3oy96B8MwRQkK8x0t/roztJ7SPsr+73QmCm7HI7mHNONeHCxw0AAo0iM +ovQ= +-----END CERTIFICATE----- diff --git a/Secret/CA/pki/openssl-easyrsa.cnf b/Secret/CA/pki/openssl-easyrsa.cnf new file mode 100644 index 0000000..1139414 --- /dev/null +++ b/Secret/CA/pki/openssl-easyrsa.cnf @@ -0,0 +1,140 @@ +# For use with Easy-RSA 3.1 and OpenSSL or LibreSSL + +RANDFILE = $ENV::EASYRSA_PKI/.rnd + +#################################################################### +[ ca ] +default_ca = CA_default # The default ca section + +#################################################################### +[ CA_default ] + +dir = $ENV::EASYRSA_PKI # Where everything is kept +certs = $dir # Where the issued certs are kept +crl_dir = $dir # Where the issued crl are kept +database = $dir/index.txt # database index file. +new_certs_dir = $dir/certs_by_serial # default place for new certs. + +certificate = $dir/ca.crt # The CA certificate +serial = $dir/serial # The current serial number +crl = $dir/crl.pem # The current CRL +private_key = $dir/private/ca.key # The private key +RANDFILE = $dir/.rand # private random number file + +x509_extensions = basic_exts # The extentions to add to the cert + +# This allows a V2 CRL. Ancient browsers don't like it, but anything Easy-RSA +# is designed for will. In return, we get the Issuer attached to CRLs. 
+crl_extensions = crl_ext + +default_days = $ENV::EASYRSA_CERT_EXPIRE # how long to certify for +default_crl_days= $ENV::EASYRSA_CRL_DAYS # how long before next CRL +default_md = $ENV::EASYRSA_DIGEST # use public key default MD +preserve = no # keep passed DN ordering + +# This allows to renew certificates which have not been revoked +unique_subject = no + +# A few difference way of specifying how similar the request should look +# For type CA, the listed attributes must be the same, and the optional +# and supplied fields are just that :-) +policy = policy_anything + +# For the 'anything' policy, which defines allowed DN fields +[ policy_anything ] +countryName = optional +stateOrProvinceName = optional +localityName = optional +organizationName = optional +organizationalUnitName = optional +commonName = supplied +name = optional +emailAddress = optional + +#################################################################### +# Easy-RSA request handling +# We key off $DN_MODE to determine how to format the DN +[ req ] +default_bits = $ENV::EASYRSA_KEY_SIZE +default_keyfile = privkey.pem +default_md = $ENV::EASYRSA_DIGEST +distinguished_name = $ENV::EASYRSA_DN +x509_extensions = easyrsa_ca # The extentions to add to the self signed cert + +# A placeholder to handle the $EXTRA_EXTS feature: +#%EXTRA_EXTS% # Do NOT remove or change this line as $EXTRA_EXTS support requires it + +#################################################################### +# Easy-RSA DN (Subject) handling + +# Easy-RSA DN for cn_only support: +[ cn_only ] +commonName = Common Name (eg: your user, host, or server name) +commonName_max = 64 +commonName_default = $ENV::EASYRSA_REQ_CN + +# Easy-RSA DN for org support: +[ org ] +countryName = Country Name (2 letter code) +countryName_default = $ENV::EASYRSA_REQ_COUNTRY +countryName_min = 2 +countryName_max = 2 + +stateOrProvinceName = State or Province Name (full name) +stateOrProvinceName_default = $ENV::EASYRSA_REQ_PROVINCE + +localityName = 
Locality Name (eg, city) +localityName_default = $ENV::EASYRSA_REQ_CITY + +0.organizationName = Organization Name (eg, company) +0.organizationName_default = $ENV::EASYRSA_REQ_ORG + +organizationalUnitName = Organizational Unit Name (eg, section) +organizationalUnitName_default = $ENV::EASYRSA_REQ_OU + +commonName = Common Name (eg: your user, host, or server name) +commonName_max = 64 +commonName_default = $ENV::EASYRSA_REQ_CN + +emailAddress = Email Address +emailAddress_default = $ENV::EASYRSA_REQ_EMAIL +emailAddress_max = 64 + +#################################################################### +# Easy-RSA cert extension handling + +# This section is effectively unused as the main script sets extensions +# dynamically. This core section is left to support the odd usecase where +# a user calls openssl directly. +[ basic_exts ] +basicConstraints = CA:FALSE +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid,issuer:always + +# The Easy-RSA CA extensions +[ easyrsa_ca ] + +# PKIX recommendations: + +subjectKeyIdentifier=hash +authorityKeyIdentifier=keyid:always,issuer:always + +# This could be marked critical, but it's nice to support reading by any +# broken clients who attempt to do so. +basicConstraints = CA:true + +# Limit key usage to CA tasks. If you really want to use the generated pair as +# a self-signed cert, comment this out. +keyUsage = cRLSign, keyCertSign + +# nsCertType omitted by default. Let's try to let the deprecated stuff die. +# nsCertType = sslCA + +# CRL extensions. +[ crl_ext ] + +# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL. 
+ +# issuerAltName=issuer:copy +authorityKeyIdentifier=keyid:always,issuer:always + diff --git a/Secret/CA/pki/private/ca.key b/Secret/CA/pki/private/ca.key new file mode 100644 index 0000000..88923ed --- /dev/null +++ b/Secret/CA/pki/private/ca.key @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEAvcVdufxtT7p7blrCZmMvdgAqg9pLAnhg+ghQkd/RyU/DVv66 +dtxfeYqo1pHf/RWaHoK2APTIjNRO232kPiHbSnhwISqMzg07+LCUsJJsLVLVWsaf +dQt9h1zKV0vVlKMm5JCc9qhSf4T/eCb2yLjaLxpFtDFX5Jw3NPoKdaM1u+ezBz0f +1tLK+J3uFdlIUVGWvPscgnb9ij1f7tBUYHqsiFfYf6/AcJN6Iy0Blff75SPJz8zU +HLYLgoQPvf2t9ZFADKk7AxbEdCSBKDHaY4sDN4vRAnLnFtKGaaCWYFvpIeS2D3oh +ciyo6KdOxY8AtC43pNuF+5uJ2//5gUZeyXTgpQIDAQABAoIBAB31TGiSCwetHtM7 +DLlxKwrr18pc6b6IFnciXOXKeanYJ7RSHkmpXIEpfKHzAXNIt73dULIx8n8Y/SH1 +YbpVSfMltD3oI7ZbrH4EElUVqHI3Q2tDM+UcXULDSUYiuKLwZrFqivz9cZij/FiR +fiAN3pPVB+/8Yi2645Q/bOtJSrBRC4CLjCDckmHG2IHIZLKPPd9OkeLTWNQ+k2d+ +2Ovm/W8Ep/A9Rj/A3VZRXxj1jZL1D5r9WT/R7qmeZypL+UYwgxSfnHtZIhZPRG8P +Momulsvzkr1oUqmtCVzqSxHGMSYewufFP7P0wUxVV+rD+ENXr05whI8K8uvDxjTx +0+O4j4ECgYEA+Hyw14FpteIvdQLxTbmlJrPcFamxJy7flx/LQDaOIsfHdXoRzty0 +6ee81qAtspqPYkG7OFdfaRWuVZpCB13ZZcNg9Za3DWFTuI/9ZqnAy3mimIn22blF +4pOd3rg9qOFcbcwFi4E3GzzbR5NTuCTknXD7VCk+tsaelsW+7KWpx/ECgYEAw4I1 +RDdN+1hq88mxCR5IHYapHhJL6HrBnB6XAfu0Ys4fFfKIwDaCCzq6UJyyONlJXgCo +o5xIqsAL/ukDK88/qkFMM+4wkTqrTY9bD/x4sxny89s8XBm1imaF9ZBui/XoNq6k +Wrlfhms/xhFTNcV6VOqwEV5gJCByzlm9kJX/9/UCgYEAnolHdqd9n2rI5nnTJMje +ApxcPYH/ocU5KD1DuxtTggM+UchpFjcgQd/1TmXx4fLUqlbPsTmliPEpQjpiCDsr +Wc7WzLm03peLB8TuYpLJi8h2IaZcVTrsyItv/MpFpLrr8q1pmED/vKQOL1Ni5ai8 +J2sPHvoVph2AzycpEej8MrECgYEAta58nYXfW+FQkngtolGXpoiLBDzweXwKC3CJ +1/f2K5NsY9LcrfJ5asIKffr/y8BwY4CtNk13YeXRv/L9VWrkuOyxSdjhHTSuGAdO +Ek8GQzmsAl0LfHMPtyuK9SZg9INyZc5pQT3evWVRAFj9QIzhH6RwNdPD+A6HYacX +eBNMqTkCgYATjxrJyXnlaimk5YFgd1Ptknjth1NMC7NwctCRUCurmkYUYHSgbzuq +eMKHrnhWYtyGu695T8jh7TDgYM+MuOJnaAmqmCyGn+l/1DIMjblTlk/o3UTrn7St +LKBaEGJ4OHpO+HCv2C4xNjI324zS3Yy5b4/LewWzU8sVvwvx79Gouw== +-----END RSA PRIVATE KEY----- diff --git 
a/Secret/CA/pki/private/core.key b/Secret/CA/pki/private/core.key new file mode 100644 index 0000000..96d523f --- /dev/null +++ b/Secret/CA/pki/private/core.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDSc90G6Nj9bGKD ++znPnnJ16yUPPkbLEpuf0KDecbk+aFS3MetEyYDbE3bPcfRVAeR3z48Z0h1fHqRv +6kLKBSbrf0iMzL1NTJEUxXR/OM8idUhNy5Zl4LESDsQ4ns7w/5gFXsjENpsxlQpO +3wNd3CpYSYPP7+AlV29xsjcfH/Du2m4j5DdYNFWBC07UwfZRm0x95OM2Tr75gl8k +9Ei2wjYY3zpFWEk0skRXmxxQ6gaO+K8NbeSFGIOUJI7hIPbueiqwk7d+PvyjTROJ +l8RewIA25+qfDIrBoF10YVWd/W60hVMAhWhcP5qqYLjsHzXzdpcEG4ZSIY9RC8F4 +Rl1Zdh6ZAgMBAAECggEAIUwFn289zbLVT25zMh8umuuOXIAM8VpLVxjKKwexOGeH +Z8i1IZgEFCVbOe0crEp1XGNxj7NHxGHzwGU/FfmEs+PalbRbCxzfI3suOGbDlv8Z +Zn2cmRfYzDOb5h1yPn0iD0900l6VZV3gWKQ+Qx5vcLKI8WBRhXb1Afchc4I5O4D3 +/fjCwlCmms7g/MXNlHmxZi/svzIpwXPgTc7E3ygBpk9MnHEAcGmcwsHjaEh52qcq +zv2dFzPr9ZJpg3gwUBx0gzpy2KqU4rtaKZtao3a7l25nZzcSlhspbES5AEZ9l/Nc +GJ0CVw00BEgykUyWwLvMwJisOy+1PMWVkJ/V23nUZQKBgQD/TKwvapwNW/1Xil4n +d14IRQiyGYCUBg1n6Jy8i3M6c7o4sDCD9soLHTqyedquOrzIjWL77VuRCK2H/FLz +pB1p81kKeT3D+WmSR1jnu6Vl9nRDxj5UzIcl5YDfFY86fZIYLhekQxFrrsyd63Kd +saAh8nFWT3wWnCfZqDzmm3P3owKBgQDTB7B+wRhexfDQN28VCVCAlvEa47Ozz720 +m6O+4dZO3SPyTnr5q8WUpGZVsIxK5SIQd6/zzlmLraZnKLTvKVvflWHUqs4s9Axq +yvXZunPVVz1js8j0+LvngX3l4VkZHrTp5GZV9ZcV1l3xCzZoR04WjMDn2RJW5UKn +S5Ia/YQkkwKBgQDACFw8DmTzZ45YmqvX4+HHNqYj0Sr2LNdIoZ/D8uDpxsL8gQr9 +OFUhpwrP1Pi4tVXrRO5/sTp/DZf6AcIjof6+A12mkyvyjVjrvt8Q8ASpfYhWsneQ +MYg26TrWktD5nhqWNZVy6T/hT8p5vvCnzUQ2RLcbxQ4Bs9QF1JZ6n9PLIQKBgA7d +5tA3OElM9pckoJ3BxzsX5yp2yi0rwHid0l5bOKbbq3Ghl8ZJFKVRI6h7xJZuKAUy ++WFaszJE7Ikt8/k5V7CbrIW39shx9QH9BG7vVMO93qRMgSbI8yvvEniEdKtxX1tu +7Mq3f4pZTMrzeETGaTjrd5ed0k3u3tA8YbGnFI0jAoGBAOkDspNIobHbzv4vAZXq +Qc4Q2b4KZ7Cz6scmwfA9ave1hdHrr9DL2OPXERQFX3HU36UISBvUa0+U70L88leP +JCblbMxpn8WpZyA7TxFiSBO1VlngrA0i/zGC4yAg0tuiojeV/z4ZhrkMnQbHNh0F +LDfKKUuZ5+ZpnSqbYTfaePDw +-----END PRIVATE KEY----- diff --git a/Secret/CA/pki/private/core.small.example.org.key 
b/Secret/CA/pki/private/core.small.example.org.key new file mode 100644 index 0000000..06d38cc --- /dev/null +++ b/Secret/CA/pki/private/core.small.example.org.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCloIWZEJkvIYuk +3d42XBxdekN4CXQodjXbUgDSdINT46I9d+xNVpB88yaUR2stotS7Ikwdc6Nsw3CM +oP2JP4vrWbgiYkKnfNfJ7nS7jjgg9xNIOvGj424Y0I3c765UM9swUAnyXyV6pAma +ZVzK/EQ1dnReS/7NVak+vTZOjaW8U/Q9n1nHqasInAjoChOXlwemoIYVRG4iE4WW +rmSKgMUJg8FNiDvuDLdw68cmFca2Y7T/UHHxNe0wb7JEBoZcvZB/gN3J0swHVfPB +KfU2vb+vfBhsR0FVW2/s0+/YLV2DAnFAT5UkFDkUKh6kNmX1OLZuQvO7wbmqWuCH +KGpc5YHDAgMBAAECggEBAKQAhQmBtA1FTD9eKnDtWHD/ZdtwkQKXutCHLKU4Fep1 +VutC2kviUYRISIU/CtPPjpIWbgQjw0kpZUL7DtJeiC/tUTVK0vGB3zLm2dP2CYIq +5X76TteXlicgK7j/5EEgcAQw3QiQSk5cK94kTHP6w5ekyamt2op8LfAf76xs+hW1 +/A9Swt+FpHLnFfKiuPQPJp1OR6kdWd++O9XXQ9Jn3JzuJcuMUO/S3OI6lRurhSkO +GFclA0P5nMMTPgX2rSPwWQJYqLPKJFw3i06YbcFQwtyi3JAmdaounZKowz466nI2 +eXPKDmpctRVTyJaYf8AzAX1d4d/FF4Hx5MHhIjso74ECgYEA00qG2KeYPGlXtH4Z +OduDzJUxyiYOEFE7dhPpmqmAdjEV4AS7a5ycadQ24DLP/M37Mq+yBC6NKXtBa1q8 +jfPTDLJOzHMx/OtiCW5iIkM3gDvKEQuRUCZScct6SZcgl+2byMomMw7/1ya+cc1i +YsyHNj+Lh/lMmhIWG9OGQuicBiECgYEAyKxoaVIWWv+CJplTAk8Ls4MGthkum14c +ON5pg1Bd6I8fR3FMQ3QJRILKcaLRZS884YEbLjRI1mdhKWwhZY4EEO5tvfEJMLpn +YFqjzED7/Ip/fW/ErlS9RHh5zwA+FpVnv4e/+42JV8v68jhxulVcw/5m4oy1XWxJ +EMaj7ctkw2MCgYA3ra33LcLqOIBKKeiP3I7QvIgQUxLlreJTbU/j18LoYmr3S4fw +BacaJDgJwJoablVBuBbbD0FXqwlENvb1GUmGUP5+1eRYV9bP0Wy+xqO7gQXwk/HJ +AzA6mHozJkYKgyzILqz+S3eTxLvu1UaV7nu7CefE/yb2esmkr4rz2sQywQKBgBzx +6VmPspPLmQ1SPkvt9OUeuCAZ/8P/ThjR0+xR8kmyIzPd3r84BIIyT1sWvhdXOfPY ++H+woPT0Emq0IxkP4/xBN+kW1FmH+ZNHX6r9kJs7qun/7iGrLWWr7v3xrgL55+4T +eZiiMLZOQNMhWx4iY/ANSO/SlfJ0xRE7ZbfOB6m7AoGAeOHEffVVPHBnalX/YQO4 +l81DFfv9BHhsQBQ6yfUs1m5VSAeQGeZ6StSxsc0GZoyKmOHSP7glDHx1vnjmHFLo +Pvjx1hmWw5VAmZhef9cRp2lYx+A34DyRHAsjDHTic1IpIfvc1fWPsmub6rdp9v0L +I/hWsrrCY3SEk/zWyFM8cQc= +-----END PRIVATE KEY----- diff --git a/Secret/CA/pki/private/gate.small.example.org.key 
b/Secret/CA/pki/private/gate.small.example.org.key new file mode 100644 index 0000000..43197bb --- /dev/null +++ b/Secret/CA/pki/private/gate.small.example.org.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDBhK2kHYyGHeuH +5dwzxgCXALfOA5I8R+rRK6bvKt68WAZbADaAli/iwnymfHFA+WehbPcL0tRBgZiZ +ZgiT5b+03M+VNigU301x9thdKhclrErc6L3ZF9U2Ub+lAJ9m68DO+uMfrR9FQNeI +v5Niz5gJuhx/dMiQL6UteIhkufs6xUQpoZKZh4I12JYYJyOJpokeP9IeCNpVv1Oq +HdWKF2RvYB0Hx4WHczO07aXEC3nkkkUcDswAaqHeRE1nGv78tejA+ERgpvsK0vTZ +iurT3NTCGB8cV8Nykipv54GaCOeKks5F1hfhhamlcJkmqpqwx/xVWLhUm4mqs1pQ +2z39ISc3AgMBAAECggEAFpC8FrkDW9g9ULly9e6OvwzsYe90q+bO8NkgPB9Jnbi9 +9PqPYGsi5lQ7aMZ2BleOx/oGzLAm5ASSoMCPG3/c3OAqrIGGJvjq9PENxb9Ut6Xh +jOTuzlPDHvRlXn42GDBBaWFD/ruXO+IVv/Jm40zFs8yp6graIEYOAsFdVjGBpBaM +r0KiG+UDlldp5sxxjrYfX4Xk39ZyY+4/OBXPm36UJtcV2wiUiv/XCbujmOfXerpi +VIHH0OFtnqDCxBVngk1dsWmjEyWQ9teh45bn7M4Z2kHW152oyeDb9ptarP71oTMD +1fttNeh7c0rAS/3OwbDqzuScgA0KQV0+6T1q/ULJEQKBgQDllhU/B0HvtWnfcXd3 +i7ZtQmaNWL44qkBjsi1RxrH0AJo/9pT8vR+4PZ5oYDMI35YKwh7UGZHj9u208Kq8 +A6lmRIqB9U4XIi2jzBO7DzHoFQRTrVsDUwG+ibrfV7LP824EO4fAsq+9YsfUeH5n +bHqLlvbacnGyDp7wNlBEnSD4SwKBgQDXyExua4JbYj/wzGUGIf5gaOeGo4wUwpbA +Db8Ukc+1y5dGyUs0L4wpVzHZnItym7xX3h79gefd3CMG+zkiwa3XaeSH51bgaUMj +ybQr0zSVbrZcxVTFxEHnqaGArmqjvsj3kMvJxainU6uHa04ThsaObiNPjqHTYBDr +3OS+dfvRRQKBgQCIHXYVOzFNdAoEDpqcxrluh6qTbKTCpbWtJesWi63fkyfgekoU +mfAfZHDxQu+e+ChV0odCirJjLHf8CZ+//o/FcSeJKy2UK5BRh2G/Sp/1D9jT33iR +PPpQxAmF9tGt1o5Idh7jEU1+A/2jq5iNqtPwxJ0wIB/mSCLVGe52742nhwKBgQCB +7gPHwUifCgwCTLDP/owTNVekBLqGjZ0ES8Kw+hOeHdcbMn3sEG1PP0evBsoY2pmQ +NxlmAGDDgJg+zerbeM/ak9Kd2ri/K+LXm863TNeu2xlHxzKCWuhsPAIZX+yqaGjO +WQu8lR42kvUH957ttwu8G6l7cCEVDBVkUIAUByr4GQKBgQCy8aSZVh5m7g6NKOUw +kla5NrZ5a5ffN7dXHEI0zdzR5Ee4s1OETwWypECcvIRt9gVg8+w4GDd3Fb2dDauB +tsBL1RKqGTXqeukUAuMXvgSa+PXQBCyrhQXs/L65ZWVFX0yN6kBqEbF5+s4UeG7T +ZIu5SWq1PzvpbvjfVotIvbJnow== +-----END PRIVATE KEY----- diff --git a/Secret/CA/pki/private/small.example.org.key 
b/Secret/CA/pki/private/small.example.org.key new file mode 100644 index 0000000..28a6716 --- /dev/null +++ b/Secret/CA/pki/private/small.example.org.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDCs8Yf4OZUXB4N +NCwCu1/WhH37Ywz6DTOlkoav9+hyhmn7Rf2QFJ1V3SJQsL5xlNpo/zxG7yJKhK6O +hC751oz9RC7r/pVeRYY/94ZHAMHYZLQ/Vci1/GnDG6pUxfS2pkA/nxX/6zseXtfU +666tvOLPSv7fPWk2N3lnlb9DsOLWKWA2GPh9Mmd5uzCV7I2TRlYTcpOWrHApUybB +2Mc4SoMtVruQD6QJ/ebYcv0LSE841CgxD+Nj0D3R4qvhEBLHJ4UDXX0BQC47li7x +pqIyqL2XKpBuELZvmHrpnwYB3gvJGJ6DTC2lW5kOGWl38F3iPTfGTXPHsOj7XBZF +KXTkMZl7AgMBAAECggEAdJtaYyk8iPWKgfnnCdPSeBVtpisSUIerkNQKmkTtD/n0 +ayrly26tNAl2TcEsrbWqgQurvAfoD50bNftwbuzSD7TQLUKRjp4w4wqJfuizL7hQ +Q0ZLKML9TH67Kn5MKz+yZugOMvFcvLmspbZpLWBcri2KK4UKCBB9Q05p+E5t7Dha +kKJ/9yTtOqQY//3utpABYMpue9lsDPaTJ1/vjI7kBU0mB4ocEc2WiqyWklztIUg5 +CtGFttxALQVMyyKxzyYrHqsOq3TUzRtm/5Lw6NbZeu7x3b12uF6RshqT0PWfXGFX +tVkKzNkYIQhQqyUia6LSYeZotBEIH6gFFqRxPUtugQKBgQD4nHkUYPJ1VtoBUsGp +WCCi5D9aB025Vzzslgm2r+dDh0LctKWTt91xWHVuWAN5dWELmd7lzdhZnEG5tPAL +fpCJYOU+j0H9EGvWuX9YoDo3AArppUX1MpqE1CzWSPXtytlBqAR52Eges86sqNmD +4nNw2zvmpMALrbI7kMmYskP1IQKBgQDIfSQsGxHouItn4BMlUf8nUKx6n+XorK8V +OqwtZa+sTxNdvL7egQznJIcKCy379OfpKXuTdzJFsklC8QytWu48hqhRSpanzz0n +enj4LNrpP3lrS+upz67bxLlvvC8/SG1vhNQnBk2p2OMCSbWsFxqTN8P7+SXTAbeL +2ILSBZ/fGwKBgEX7RdoGsDl3iUZ2FS2mMQmpVmvxQl+5vtyaH4HdYiwQFzIpZ7J9 +P0h4rhWxkMjP0dGCLsxhdVVENvwfgrK5ndYOAHnruZeS18hJzx8Te0+gI3JBo7+x +zu01DKoFP7UANMfWk+v4hdSeqL7RiOknBXfvPp1eIvEmo9VAnH7vL1IBAoGBALCC +DDCQfInov0LqcbCvqfWQ/ujOkXjxXwtPpnopRiprS9+A5oG6GAP/kqvy/78M9IfA +L726eRYHSpyW39RXc9rxqoo3IsAGog55srq7stcbPOiL5KSR5Z4yahfHE8mhGEfQ +J39b+1AHVISVJE6n4Iuv0umphfVpU5DZQwNoVEH1AoGBAJLdjxNJjP+Eh07ZV0o+ +Y1W6/GSXoTuJdrmSKalQppdgr2l/0C3VSe7MjIxIlfVuULJWfoebj89epblQ0O4O +uMvIhpPy8Fq+LFDl2jjZ3HoMz0VrqaYe9hNQ7AGVYqy22D+xFTi3hXugRIVY+ut0 +aYBCoHEDILw+LVVlOIUXWrNi +-----END PRIVATE KEY----- diff --git a/Secret/CA/pki/reqs/core.req b/Secret/CA/pki/reqs/core.req new file mode 100644 
index 0000000..4670da8 --- /dev/null +++ b/Secret/CA/pki/reqs/core.req @@ -0,0 +1,15 @@ +-----BEGIN CERTIFICATE REQUEST----- +MIICVDCCATwCAQAwDzENMAsGA1UEAwwEY29yZTCCASIwDQYJKoZIhvcNAQEBBQAD +ggEPADCCAQoCggEBANJz3Qbo2P1sYoP7Oc+ecnXrJQ8+RssSm5/QoN5xuT5oVLcx +60TJgNsTds9x9FUB5HfPjxnSHV8epG/qQsoFJut/SIzMvU1MkRTFdH84zyJ1SE3L +lmXgsRIOxDiezvD/mAVeyMQ2mzGVCk7fA13cKlhJg8/v4CVXb3GyNx8f8O7abiPk +N1g0VYELTtTB9lGbTH3k4zZOvvmCXyT0SLbCNhjfOkVYSTSyRFebHFDqBo74rw1t +5IUYg5QkjuEg9u56KrCTt34+/KNNE4mXxF7AgDbn6p8MisGgXXRhVZ39brSFUwCF +aFw/mqpguOwfNfN2lwQbhlIhj1ELwXhGXVl2HpkCAwEAAaAAMA0GCSqGSIb3DQEB +CwUAA4IBAQC6bMENrO6PK3lEVJK61oOCLiw51O7uK3Cpp1d+jTArH+L3oR55jIfH +FJ7Ex/yhYBGR12F9iafkyfWwmT3oEgfngsUaSF4VJZWeNFMGXrCpvlDxwFP14RHi +7hL7PnaritX6tJm+5Y8lYnalIzLlbUgY4HQD3QWYPOL1aYKeatEL9jY5UENpsw/L +X1NV2XWZ3ePFIQIvEHecRpj8/03Rvv9rsKlRnoCU12FIVDE5YFqJ6xq7HDV8IdII +U/0k4n9qzOyub17X139dvDJjl6ViCRnLwo4d5Bksic+Av4ILRlW+iH85F+S3eKqF +PGP78oFqUYoNE64NmsFle+C66D0hb9r/ +-----END CERTIFICATE REQUEST----- diff --git a/Secret/CA/pki/reqs/core.small.example.org.req b/Secret/CA/pki/reqs/core.small.example.org.req new file mode 100644 index 0000000..ad8b221 --- /dev/null +++ b/Secret/CA/pki/reqs/core.small.example.org.req @@ -0,0 +1,15 @@ +-----BEGIN CERTIFICATE REQUEST----- +MIICZjCCAU4CAQAwITEfMB0GA1UEAwwWY29yZS5zbWFsbC5leGFtcGxlLm9yZzCC +ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKWghZkQmS8hi6Td3jZcHF16 +Q3gJdCh2NdtSANJ0g1Pjoj137E1WkHzzJpRHay2i1LsiTB1zo2zDcIyg/Yk/i+tZ +uCJiQqd818nudLuOOCD3E0g68aPjbhjQjdzvrlQz2zBQCfJfJXqkCZplXMr8RDV2 +dF5L/s1VqT69Nk6NpbxT9D2fWcepqwicCOgKE5eXB6aghhVEbiIThZauZIqAxQmD +wU2IO+4Mt3DrxyYVxrZjtP9QcfE17TBvskQGhly9kH+A3cnSzAdV88Ep9Ta9v698 +GGxHQVVbb+zT79gtXYMCcUBPlSQUORQqHqQ2ZfU4tm5C87vBuapa4IcoalzlgcMC +AwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQAN5bB3nLJEky5WyxE4JQ9luHmS3heY +r3OtS66sNUlGxvUkwZ3Vl5TRMppn1g9S6OnuwImtXXYIw2U7kh/n3M3maMLKjqAb +XlA6hAz1+MTHcx1TN5d3VLLe/qUcMzViyx4Pijia3gFnS+AUeXYyFNgcWjjFjuDo +lUhGDG/WHD0OMhDoY6qaoNerwU63JdCoh4eh8tWvRSKS2C+OSIihssF2PhkVj7yC 
+JW3SLgwcT9XHvRHKXxcNHT7aToEqzaYaTZGpUUMNoomsfuvsKgblyPXNZr546ffG +AnHzQUX+Nygtp5OugfO65m0Yq1v7sz138QgRLw0CRxK1IW/8e/312vVJ +-----END CERTIFICATE REQUEST----- diff --git a/Secret/CA/pki/reqs/gate.small.example.org.req b/Secret/CA/pki/reqs/gate.small.example.org.req new file mode 100644 index 0000000..c648ce0 --- /dev/null +++ b/Secret/CA/pki/reqs/gate.small.example.org.req @@ -0,0 +1,15 @@ +-----BEGIN CERTIFICATE REQUEST----- +MIICZjCCAU4CAQAwITEfMB0GA1UEAwwWZ2F0ZS5zbWFsbC5leGFtcGxlLm9yZzCC +ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMGEraQdjIYd64fl3DPGAJcA +t84DkjxH6tErpu8q3rxYBlsANoCWL+LCfKZ8cUD5Z6Fs9wvS1EGBmJlmCJPlv7Tc +z5U2KBTfTXH22F0qFyWsStzovdkX1TZRv6UAn2brwM764x+tH0VA14i/k2LPmAm6 +HH90yJAvpS14iGS5+zrFRCmhkpmHgjXYlhgnI4mmiR4/0h4I2lW/U6od1YoXZG9g +HQfHhYdzM7TtpcQLeeSSRRwOzABqod5ETWca/vy16MD4RGCm+wrS9NmK6tPc1MIY +HxxXw3KSKm/ngZoI54qSzkXWF+GFqaVwmSaqmrDH/FVYuFSbiaqzWlDbPf0hJzcC +AwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQA9SuOBX0MduLi6Tuf9NK1tNXCq669U +KnHf1Okt+lGaknYBgfwdWzNUyoWrdIqfT5Ryk8bAV4+pKH4WRjIRoNJ9uwJ8vRl/ +I7IVVG94wvT/agfPZaui7bbATGTeL5zCKloIHecbfse7XoLD4zUm1HTa98eTOakI +wwUpXBFPdDt5/WFDYFA2yLwaE94dv1A90z4GwqRFE1Qd080niGPMgwImVTNYqkIc +Pdm0txM0hSBtv120HDzaSwRAiYUPfUUUuoDsdGMc2KfFcZn1Tjnxn/pgsbXc1jK2 +wrQ4h+Pkloz8urEvohMCiWlCz87PnwUUaKYWGgnJcNqtkVg7q6VIYR+Q +-----END CERTIFICATE REQUEST----- diff --git a/Secret/CA/pki/reqs/small.example.org.req b/Secret/CA/pki/reqs/small.example.org.req new file mode 100644 index 0000000..0aa2f95 --- /dev/null +++ b/Secret/CA/pki/reqs/small.example.org.req @@ -0,0 +1,15 @@ +-----BEGIN CERTIFICATE REQUEST----- +MIICYTCCAUkCAQAwHDEaMBgGA1UEAwwRc21hbGwuZXhhbXBsZS5vcmcwggEiMA0G +CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDCs8Yf4OZUXB4NNCwCu1/WhH37Ywz6 +DTOlkoav9+hyhmn7Rf2QFJ1V3SJQsL5xlNpo/zxG7yJKhK6OhC751oz9RC7r/pVe +RYY/94ZHAMHYZLQ/Vci1/GnDG6pUxfS2pkA/nxX/6zseXtfU666tvOLPSv7fPWk2 +N3lnlb9DsOLWKWA2GPh9Mmd5uzCV7I2TRlYTcpOWrHApUybB2Mc4SoMtVruQD6QJ +/ebYcv0LSE841CgxD+Nj0D3R4qvhEBLHJ4UDXX0BQC47li7xpqIyqL2XKpBuELZv 
+mHrpnwYB3gvJGJ6DTC2lW5kOGWl38F3iPTfGTXPHsOj7XBZFKXTkMZl7AgMBAAGg +ADANBgkqhkiG9w0BAQsFAAOCAQEAenoC7hzNcGxnfQ314qpsIX6s+8A/Yrhc8y0Q +rojHMzS2T8HAsm+S1RR6lVmbYHwufdEgZB0DpDMCwJhVG9FYn4Givef5ByW7+ohm +ejc+WpYw26tpjj/DZzYAaxFe/Np0JK5gPcXuRIXtetFaQTDEfbiD5X8K0sit4aMT +4jlmaiULsVv4eOsFHbXJImWVQ0azyXdCWRJgIbsVUsFZxaN6rnzCbGsNR/y5ynHQ +q1b+EQ/nAEY93QwJiX+kRBs4B8GR/2qEqUxeVcZhh/LPImtgihI3uThf/bNKxDAv +ZxW4LgucVfVrfVZtA2DB5T1cD5CC26tgI7+/SoYFx3hOhhuiBA== +-----END CERTIFICATE REQUEST----- diff --git a/Secret/CA/pki/safessl-easyrsa.cnf b/Secret/CA/pki/safessl-easyrsa.cnf new file mode 100644 index 0000000..8d7993c --- /dev/null +++ b/Secret/CA/pki/safessl-easyrsa.cnf @@ -0,0 +1,140 @@ +# For use with Easy-RSA 3.1 and OpenSSL or LibreSSL + +RANDFILE = Secret/CA/pki/.rnd + +#################################################################### +[ ca ] +default_ca = CA_default # The default ca section + +#################################################################### +[ CA_default ] + +dir = Secret/CA/pki # Where everything is kept +certs = Secret/CA/pki # Where the issued certs are kept +crl_dir = Secret/CA/pki # Where the issued crl are kept +database = Secret/CA/pki/index.txt # database index file. +new_certs_dir = Secret/CA/pki/certs_by_serial # default place for new certs. + +certificate = Secret/CA/pki/ca.crt # The CA certificate +serial = Secret/CA/pki/serial # The current serial number +crl = Secret/CA/pki/crl.pem # The current CRL +private_key = Secret/CA/pki/private/ca.key # The private key +RANDFILE = Secret/CA/pki/.rand # private random number file + +x509_extensions = basic_exts # The extentions to add to the cert + +# This allows a V2 CRL. Ancient browsers don't like it, but anything Easy-RSA +# is designed for will. In return, we get the Issuer attached to CRLs. 
+crl_extensions = crl_ext
+
+default_days = 1080 # how long to certify for
+default_crl_days= 180 # how long before next CRL
+default_md = sha256 # use public key default MD
+preserve = no # keep passed DN ordering
+
+# This allows renewing certificates which have not been revoked
+unique_subject = no
+
+# A few different ways of specifying how similar the request should look
+# For type CA, the listed attributes must be the same, and the optional
+# and supplied fields are just that :-)
+policy = policy_anything
+
+# For the 'anything' policy, which defines allowed DN fields
+[ policy_anything ]
+countryName = optional
+stateOrProvinceName = optional
+localityName = optional
+organizationName = optional
+organizationalUnitName = optional
+commonName = supplied
+name = optional
+emailAddress = optional
+
+####################################################################
+# Easy-RSA request handling
+# We key off $DN_MODE to determine how to format the DN
+[ req ]
+default_bits = 2048
+default_keyfile = privkey.pem
+default_md = sha256
+distinguished_name = cn_only
+x509_extensions = easyrsa_ca # The extensions to add to the self-signed cert
+
+# A placeholder to handle the $EXTRA_EXTS feature:
+#%EXTRA_EXTS% # Do NOT remove or change this line as $EXTRA_EXTS support requires it
+
+####################################################################
+# Easy-RSA DN (Subject) handling
+
+# Easy-RSA DN for cn_only support:
+[ cn_only ]
+commonName = Common Name (eg: your user, host, or server name)
+commonName_max = 64
+commonName_default = core
+
+# Easy-RSA DN for org support:
+[ org ]
+countryName = Country Name (2 letter code)
+countryName_default = US
+countryName_min = 2
+countryName_max = 2
+
+stateOrProvinceName = State or Province Name (full name)
+stateOrProvinceName_default = California
+
+localityName = Locality Name (eg, city)
+localityName_default = San Francisco
+
+0.organizationName = Organization Name (eg, company)
+0.organizationName_default = 
Copyleft Certificate Co + +organizationalUnitName = Organizational Unit Name (eg, section) +organizationalUnitName_default = My Organizational Unit + +commonName = Common Name (eg: your user, host, or server name) +commonName_max = 64 +commonName_default = core + +emailAddress = Email Address +emailAddress_default = me@example.net +emailAddress_max = 64 + +#################################################################### +# Easy-RSA cert extension handling + +# This section is effectively unused as the main script sets extensions +# dynamically. This core section is left to support the odd usecase where +# a user calls openssl directly. +[ basic_exts ] +basicConstraints = CA:FALSE +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid,issuer:always + +# The Easy-RSA CA extensions +[ easyrsa_ca ] + +# PKIX recommendations: + +subjectKeyIdentifier=hash +authorityKeyIdentifier=keyid:always,issuer:always + +# This could be marked critical, but it's nice to support reading by any +# broken clients who attempt to do so. +basicConstraints = CA:true + +# Limit key usage to CA tasks. If you really want to use the generated pair as +# a self-signed cert, comment this out. +keyUsage = cRLSign, keyCertSign + +# nsCertType omitted by default. Let's try to let the deprecated stuff die. +# nsCertType = sslCA + +# CRL extensions. +[ crl_ext ] + +# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL. 
+
+# issuerAltName=issuer:copy
+authorityKeyIdentifier=keyid:always,issuer:always
+
diff --git a/Secret/CA/pki/serial b/Secret/CA/pki/serial
new file mode 100644
index 0000000..b7cb96f
--- /dev/null
+++ b/Secret/CA/pki/serial
@@ -0,0 +1 @@
+DCCAF785FE1F49DD878444FEE564818B
diff --git a/Secret/CA/pki/serial.old b/Secret/CA/pki/serial.old
new file mode 100644
index 0000000..56f8f2c
--- /dev/null
+++ b/Secret/CA/pki/serial.old
@@ -0,0 +1 @@
+dccaf785fe1f49dd878444fee564818a
diff --git a/Secret/CA/vars b/Secret/CA/vars
new file mode 100644
index 0000000..be4fd0c
--- /dev/null
+++ b/Secret/CA/vars
@@ -0,0 +1,210 @@
+# Easy-RSA 3 parameter settings
+
+# NOTE: If you installed Easy-RSA from your distro's package manager, don't edit
+# this file in place -- instead, you should copy the entire easy-rsa directory
+# to another location so future upgrades don't wipe out your changes.
+
+# HOW TO USE THIS FILE
+#
+# vars.example contains built-in examples of Easy-RSA settings. You MUST name
+# this file 'vars' if you want it to be used as a configuration file. If you do
+# not, it WILL NOT be automatically read when you call easyrsa commands.
+#
+# It is not necessary to use this config file unless you wish to change
+# operational defaults. These defaults should be fine for many uses without the
+# need to copy and edit the 'vars' file.
+#
+# All of the editable settings are shown commented and start with the command
+# 'set_var' -- this means any set_var command that is uncommented has been
+# modified by the user. If you're happy with a default, there is no need to
+# define the value to its default.
+
+# NOTES FOR WINDOWS USERS
+#
+# Paths for Windows *MUST* use forward slashes, or optionally double-escaped
+# backslashes (single forward slashes are recommended.) 
This means your path to +# the openssl binary might look like this: +# "C:/Program Files/OpenSSL-Win32/bin/openssl.exe" + +# A little housekeeping: DON'T EDIT THIS SECTION +# +# Easy-RSA 3.x doesn't source into the environment directly. +# Complain if a user tries to do this: +if [ -z "$EASYRSA_CALLER" ]; then + echo "You appear to be sourcing an Easy-RSA 'vars' file." >&2 + echo "This is no longer necessary and is disallowed. See the section called" >&2 + echo "'How to use this file' near the top comments for more details." >&2 + return 1 +fi + +# DO YOUR EDITS BELOW THIS POINT + +# This variable is used as the base location of configuration files needed by +# easyrsa. More specific variables for specific files (e.g., EASYRSA_SSL_CONF) +# may override this default. +# +# The default value of this variable is the location of the easyrsa script +# itself, which is also where the configuration files are located in the +# easy-rsa tree. + +#set_var EASYRSA "${0%/*}" + +# If your OpenSSL command is not in the system PATH, you will need to define the +# path to it here. Normally this means a full path to the executable, otherwise +# you could have left it undefined here and the shown default would be used. +# +# Windows users, remember to use paths with forward-slashes (or escaped +# back-slashes.) Windows users should declare the full path to the openssl +# binary here if it is not in their system PATH. + +#set_var EASYRSA_OPENSSL "openssl" +# +# This sample is in Windows syntax -- edit it for your path if not using PATH: +#set_var EASYRSA_OPENSSL "C:/Program Files/OpenSSL-Win32/bin/openssl.exe" + +# Edit this variable to point to your soon-to-be-created key directory. By +# default, this will be "$PWD/pki" (i.e. the "pki" subdirectory of the +# directory you are currently in). +# +# WARNING: init-pki will do a rm -rf on this directory so make sure you define +# it correctly! (Interactive mode will prompt before acting.) 
+ +#set_var EASYRSA_PKI "$PWD/pki" + +# Define X509 DN mode. +# This is used to adjust what elements are included in the Subject field as the DN +# (this is the "Distinguished Name.") +# Note that in cn_only mode the Organizational fields further below aren't used. +# +# Choices are: +# cn_only - use just a CN value +# org - use the "traditional" Country/Province/City/Org/OU/email/CN format + +set_var EASYRSA_DN "cn_only" + +# Organizational fields (used with 'org' mode and ignored in 'cn_only' mode.) +# These are the default values for fields which will be placed in the +# certificate. Don't leave any of these fields blank, although interactively +# you may omit any specific field by typing the "." symbol (not valid for +# email.) + +#set_var EASYRSA_REQ_COUNTRY "US" +#set_var EASYRSA_REQ_PROVINCE "California" +#set_var EASYRSA_REQ_CITY "San Francisco" +#set_var EASYRSA_REQ_ORG "Copyleft Certificate Co" +#set_var EASYRSA_REQ_EMAIL "me@example.net" +#set_var EASYRSA_REQ_OU "My Organizational Unit" + +# Choose a size in bits for your keypairs. The recommended value is 2048. Using +# 2048-bit keys is considered more than sufficient for many years into the +# future. Larger keysizes will slow down TLS negotiation and make key/DH param +# generation take much longer. Values up to 4096 should be accepted by most +# software. Only used when the crypto alg is rsa (see below.) + +#set_var EASYRSA_KEY_SIZE 2048 + +# The default crypto mode is rsa; ec can enable elliptic curve support. +# Note that not all software supports ECC, so use care when enabling it. +# Choices for crypto alg are: (each in lower-case) +# * rsa +# * ec + +#set_var EASYRSA_ALGO rsa + +# Define the named curve, used in ec mode only: + +#set_var EASYRSA_CURVE secp384r1 + +# In how many days should the root CA key expire? + +#set_var EASYRSA_CA_EXPIRE 3650 + +# In how many days should certificates expire? + +#set_var EASYRSA_CERT_EXPIRE 1080 + +# How many days until the next CRL publish date? 
Note that the CRL can still be +# parsed after this timeframe passes. It is only used for an expected next +# publication date. + +# How many days before its expiration date a certificate is allowed to be +# renewed? +#set_var EASYRSA_CERT_RENEW 30 + +#set_var EASYRSA_CRL_DAYS 180 + +# Support deprecated "Netscape" extensions? (choices "yes" or "no".) The default +# is "no" to discourage use of deprecated extensions. If you require this +# feature to use with --ns-cert-type, set this to "yes" here. This support +# should be replaced with the more modern --remote-cert-tls feature. If you do +# not use --ns-cert-type in your configs, it is safe (and recommended) to leave +# this defined to "no". When set to "yes", server-signed certs get the +# nsCertType=server attribute, and also get any NS_COMMENT defined below in the +# nsComment field. + +#set_var EASYRSA_NS_SUPPORT "no" + +# When NS_SUPPORT is set to "yes", this field is added as the nsComment field. +# Set this blank to omit it. With NS_SUPPORT set to "no" this field is ignored. + +#set_var EASYRSA_NS_COMMENT "Easy-RSA Generated Certificate" + +# A temp file used to stage cert extensions during signing. The default should +# be fine for most users; however, some users might want an alternative under a +# RAM-based FS, such as /dev/shm or /tmp on some systems. + +#set_var EASYRSA_TEMP_FILE "$EASYRSA_PKI/extensions.temp" + +# !! +# NOTE: ADVANCED OPTIONS BELOW THIS POINT +# PLAY WITH THEM AT YOUR OWN RISK +# !! + +# Broken shell command aliases: If you have a largely broken shell that is +# missing any of these POSIX-required commands used by Easy-RSA, you will need +# to define an alias to the proper path for the command. The symptom will be +# some form of a 'command not found' error from your shell. This means your +# shell is BROKEN, but you can hack around it here if you really need. These +# shown values are not defaults: it is up to you to know what you're doing if +# you touch these. 
+# +#alias awk="/alt/bin/awk" +#alias cat="/alt/bin/cat" + +# X509 extensions directory: +# If you want to customize the X509 extensions used, set the directory to look +# for extensions here. Each cert type you sign must have a matching filename, +# and an optional file named 'COMMON' is included first when present. Note that +# when undefined here, default behaviour is to look in $EASYRSA_PKI first, then +# fallback to $EASYRSA for the 'x509-types' dir. You may override this +# detection with an explicit dir here. +# +#set_var EASYRSA_EXT_DIR "$EASYRSA/x509-types" + +# OpenSSL config file: +# If you need to use a specific openssl config file, you can reference it here. +# Normally this file is auto-detected from a file named openssl-easyrsa.cnf from the +# EASYRSA_PKI or EASYRSA dir (in that order.) NOTE that this file is Easy-RSA +# specific and you cannot just use a standard config file, so this is an +# advanced feature. + +#set_var EASYRSA_SSL_CONF "$EASYRSA/openssl-easyrsa.cnf" + +# Default CN: +# This is best left alone. Interactively you will set this manually, and BATCH +# callers are expected to set this themselves. + +#set_var EASYRSA_REQ_CN "ChangeMe" + +# Cryptographic digest to use. +# Do not change this default unless you understand the security implications. +# Valid choices include: md5, sha1, sha256, sha224, sha384, sha512 + +#set_var EASYRSA_DIGEST "sha256" + +# Batch mode. Leave this disabled unless you intend to call Easy-RSA explicitly +# in batch mode without any user input, confirmation on dangerous operations, +# or most output. Setting this to any non-blank string enables batch mode. 
+ +#set_var EASYRSA_BATCH "" + diff --git a/Secret/CA/x509-types b/Secret/CA/x509-types new file mode 120000 index 0000000..d2e7e3d --- /dev/null +++ b/Secret/CA/x509-types @@ -0,0 +1 @@ +/usr/share/easy-rsa/x509-types \ No newline at end of file diff --git a/Secret/become.yml b/Secret/become.yml new file mode 100644 index 0000000..9598ef2 --- /dev/null +++ b/Secret/become.yml @@ -0,0 +1,24 @@ +become_front: !vault | + $ANSIBLE_VAULT;1.1;AES256 + 3563626131333733666466393166323135383838666338666131336335326 + 3656437663032653333623461633866653462636664623938356563306264 + 3438660a35396630353065383430643039383239623730623861363961373 + 3376663366566326137386566623164313635303532393335363063333632 + 363163316436380a336562323739306231653561613837313435383230313 + 1653565653431356362 +become_core: !vault | + $ANSIBLE_VAULT;1.1;AES256 + 3464643665363937393937633432323039653530326465346238656530303 + 8633066663935316365376438353439333034666366363739616130643261 + 3232380a66356462303034636332356330373465623337393938616161386 + 4653864653934373766656265613636343334356361396537343135393663 + 313562613133380a373334393963623635653264663538656163613433383 + 5353439633234666134 +become_gate: !vault | + $ANSIBLE_VAULT;1.1;AES256 + 3138306434313739626461303736666236336666316535356561343566643 + 6613733353434333962393034613863353330623761623664333632303839 + 3838350a37396462343738303331356134373634306238633030303831623 + 0636537633139366333373933396637633034383132373064393939363231 + 636264323132370a393135666335303361326330623438613630333638393 + 1303632663738306634 diff --git a/Secret/front-dh2048.pem b/Secret/front-dh2048.pem new file mode 100644 index 0000000..4c70ed0 --- /dev/null +++ b/Secret/front-dh2048.pem @@ -0,0 +1,8 @@ +-----BEGIN DH PARAMETERS----- +MIIBCAKCAQEAjBSxtr3Eq9dSD4S7cCewQ0ojCDq0+ZfodrCNacwlAFJWGJPCatjf +6DgFmEf8M2MDYIHq2VhNxhmArWfd6D6Y44NZnYZa537pSD7gYO/Al0g2Wn2O57on +Sn9Dt4vmob2N2L3HxUYITXcc3Cq8q93kMTnEINLChwqwsFmRdeGFWCVBDMMJMX4E 
+dcRtsv9pWvrps34CZ8jIWwJ9x3n6uhNAfW4Argt7LCI+/wguWnH4b54Ya3QrgveQ ++H0Qi+Zr36HyNuHREzbqYHO1PNbzpAo1BfcvuZyoW6AuF17FaHSOFBpgT0ojmYOw +7NTWRHNuiuhFNOmNPLq1eRt7CWHCfERbiwIBAg== +-----END DH PARAMETERS----- diff --git a/Secret/front-ta.key b/Secret/front-ta.key new file mode 100644 index 0000000..4267587 --- /dev/null +++ b/Secret/front-ta.key @@ -0,0 +1,21 @@ +# +# 2048 bit OpenVPN static key +# +-----BEGIN OpenVPN Static key V1----- +fdb61812ceb4d5ba83f0016642320cfd +f1e6632d8a6b08e5a20e009a81ed3e31 +3f4340500a8b3ad21fbb7a42aacb9f36 +dd86d96bae740065e2edea03add75272 +e806c05694fdfb666a8e84ea650e35d5 +c39f20053a525ff16fbba2c28b836a60 +98e3e482205de399c0e965e82b61a83c +25ff589e395681e8a08ec22115ea4e95 +23b026fa239594cda3b80df28e48a9f9 +023b8b0c0a79ec031cde847781557475 +9eb2702fe2b766c06c6a15d83c3070c3 +f8b7e33dae75ac3814b4e17c07148934 +4e055c8451f663ec555a67a9a86a8616 +9e2c736ee6330ecbafd8c9144bc93350 +8fac74ec0fe2ec823fba7423c54be1d8 +5d8c79c0cec56b4cc7cc7e6dcee71991 +-----END OpenVPN Static key V1----- diff --git a/Secret/gate-dh2048.pem b/Secret/gate-dh2048.pem new file mode 100644 index 0000000..c5e982d --- /dev/null +++ b/Secret/gate-dh2048.pem @@ -0,0 +1,8 @@ +-----BEGIN DH PARAMETERS----- +MIIBCAKCAQEAlgb0GzS0P+nJEuJ1y4WawxY6/eeO8pUvestFoq+8VbDvm+6xd2WF +mq8X6MpMqwnmqrEbftqRUvAZ+tO/J14AhPGVL9JLXkpOIXkCGR63jpI34UOD9Np7 +XUpNJyHVLcj/pnlOiPEuhiXFFBEez1kXQx8JxEqx1HofO699/8NmlSBxqFPJefCC +6dNYSYfIiF0odZVB+7N0FyHsw5ukCWh+lJAQU3nz9q2WP2+KdtLUCsEyz+w0kI1B +CWAmsekbF05D4vcOTMQ2W1UVthBMJObU2IHg/xfA/9ZUBRFguQzo0H/0AUxM9Fo7 +8AYeoFtNJnw/ZhHXKhGKJQGctcbncCpa4wIBAg== +-----END DH PARAMETERS----- diff --git a/Secret/gate-ta.key b/Secret/gate-ta.key new file mode 100644 index 0000000..87806ad --- /dev/null +++ b/Secret/gate-ta.key @@ -0,0 +1,21 @@ +# +# 2048 bit OpenVPN static key +# +-----BEGIN OpenVPN Static key V1----- +1c3632d86e265c77f3ff112183cd715c +f64febfc4ebd48b6b34847a5718a4c68 +2d86a5fffbd46b157586c59148a62582 +f13c511edf584938f9a985528b141e03 
+e1ef39dfdde9ac2b72f3738fd2eb759c +74e774ccdd4376720c6f598233748dee +56013726afb984218ed858f099c231b0 +70b18d01d37d81eb42044b2a2752bacf +3a51f3e3da1fb5fd0826b4940934b4b8 +800a216c252af314144746945c6a78b6 +9e3f4c8b4871c992a10cf413a778402c +bbaa65c0a82fac9557257abbb3e7bc56 +4e3da795966c7fa86662ea6b9b97cb19 +4cd73356e4b9310ea1f1d5e4c7c17f5c +2f0e6595af00060a0d4e101fa18236d5 +8820a9e4b6535f72080ff5207e1eeceb +-----END OpenVPN Static key V1----- diff --git a/Secret/root-pub.pem b/Secret/root-pub.pem new file mode 100644 index 0000000..6bf4d3b --- /dev/null +++ b/Secret/root-pub.pem @@ -0,0 +1,41 @@ +-----BEGIN PGP PUBLIC KEY BLOCK----- + +mQGNBGJR9PkBDAD7JbJZwNjgLSd17hpFpt/iZ6Pu40ySbLMmgcN1SDI4LX+U7+iU +A6RwyB8nAfl8U9QEKvtReIm/ivhvuqDmDQ+CyqMm0z275Q92KC9rTYTyicdmJIDx +1LYKFBilPbiezfxJHgkZXbFoTDpFQmUb2f5JNO2VtfW/fzFS63KGpohMUtoUXd2T +NRNfp6ddekqhZmocoluiR0wXuf1r4SgDLT2wuKHx/VLeXCbqu38DOCkTPlpRWQ97 +6wfpD1VDjQKe/QKVyAnmOBNER0QdUjw4nt/RqfjQzVJe9W2r2irEsmVkVwKCzSKT +FV2VvErZJmSdaoZVGwPFxdwO4mYv32UIVhWPKaHLsgeCHqBgsI71t3EOdD48Tv2K +UT9AZW/19vGz0zrChp/kSkUzSRXvslp3zyBIkB4M2GU3Jefs3H3YxmEZdOyvwL+5 +/ElaygF22FfA3+i918o4oQx0EeA/fct7lluWIgw6qm0wpzOzup76D+55w6v3bejw +5bfXZeAuqLWqGi0AEQEAAbQbcm9vdEBjb3JlLnNtYWxsLmV4YW1wbGUub3JniQHU +BBMBCgA+FiEEvVkpqWf/D5qXiXDgYVWiyheIXX0FAmJR9PkCGwMFCQPCZwAFCwkI +BwIGFQoJCAsCBBYCAwECHgECF4AACgkQYVWiyheIXX252AwAol7986+y8tl9bFyK +EBAhPD/BizOVz4ZYerPhfKqf1wMPu1v1Kwg0isgEuKqNYGVuGgtQsdPa3FybYb+h +B0zWhCbnIPagPBu3xrdJKbWheZfXX54RlIG0t10GdbrnkjGqrVpsnCAb9/ZO7NTR +tGTcaw0+f7NUEH8n3StGw0ko3MOmqKMPGSgsM1tE+lQ24G5bjH2Kh2CLdKsaDMPb +jL49METTH7DiSPrY9Bqd+ouVmUHJeXMBmhNE9U6BgJtGPf2qMywLBdcG/MYD4dv1 +W2Imc6tYAQ2e7TXdUC+TBW0hjM6XCFeU+zQz+YSxOoAy5AkjashrizPMe/eVvxcI +EDs5G3prrUt2tXilYYL4eA5scGDChTDbB0Sy/AZeTgZkMOlQVS/pS4Lzy5Fv3S6r +zgc5PwpvYaM/iu0WtXkwnzBzsnObwDqnj4tbi9Y5dRa8Bc9AU9p4YKcoCHGv9ZdS +6a6ppw5/mEnkp4ueo+/BKV/gA5hhG6J7+gg6iWDRSnx1YerbuQGNBGJR9PkBDADu +hO/UghmK/AsqW9CtpxAox8WP6vyIQre4VnlUKeJ2Ghtj3JXDdQYmPOTqUCGWNHyv 
+lp16ehD07Z0InVIqfbnXeCK6B2TS4fP9PMD5vwjJ8SkDhXzhfsCXwx3oN+9A2X1O +BjeKMgsfn+54CwLUWaEjLr8fiCQEI4FGzaxOZ0EohY6gEv3jX8npaD+Kyb/UELvP +SopvVdEVA+V1VpDxtQLCzXloyQA4v6q23QgC7u4WP29Wa4xQr0Vg+z26104j37XU +bwDiVXNwCrF6QWvU8SwhC28uq4rmLDTc4seIoUNZf6hToGxGH2EAJlWFz6R0Zie9 +ZI53D+mV85bCFA88flOHdo9pepK8s2AZEuj/GRias6LVjesA8qA7dekNc31NvOYq +t51JuEs8n5f8PF3xXnEzlBum/zdk19zuuCMjKwrwBgdB6qj2IMatQR3/uRp6FJkv +8I54QMlooDPS5NczqHMWAMGniMNsaIobdK1xeZYLQuLUZkjNdFTUVCD0EAGk80kA +EQEAAYkBtgQYAQoAIBYhBL1ZKaln/w+al4lw4GFVosoXiF19BQJiUfT5AhsMAAoJ +EGFVosoXiF1906EMAKJxQ8S15CegmlxwTtXiL58gOdutByBqb3r+JNUMTuuW4Y9u +id9rjSvcX1oonXCtTiBZJS4jsALfw52TGqtnxaTqQjXBxQxJ+MTSA4I2EHqw87Yi +FT9IruFbOqbqWM7GJzrIz9vzVgNEZItYZEmpUSlxdu8zBZv/jKqZWOQrDH3g4abH +kxchiHQVlozFhNt0jOUnr6SJhUo8qgKt6hc0i7H/7OmPy83slhmXkH+KhVPv9Cd9 +uVvbPUvHLIfaKQqfr4tWqDImF0FTEp9S2dycBRIu9CtiMjbuywAgPWiNNGlhqIT9 +N7O9DxcqHB4NTvMu74CFx7ZJK6M+6OmSTN6t+r54jvfYOTL/ER4kPuQqInZ/ueGq +3ee+hB5BwucZEp2zoUs+U0HtGnDYthAZtQWY5iUTi25tCY5BTvArAvra0ra/lyLW +ocWIB2Xbr0ZoN9c1r/4QBZ/Huhys1HsInksRYaObc+BvxkMPp/QTO0ce0YduMi7X +CeKf45rHtaHmTMG+Kw== +=JHND +-----END PGP PUBLIC KEY BLOCK----- diff --git a/Secret/root-sec.pem b/Secret/root-sec.pem new file mode 100644 index 0000000..8f4e143 --- /dev/null +++ b/Secret/root-sec.pem @@ -0,0 +1,81 @@ +-----BEGIN PGP PRIVATE KEY BLOCK----- + +lQVYBGJR9PkBDAD7JbJZwNjgLSd17hpFpt/iZ6Pu40ySbLMmgcN1SDI4LX+U7+iU +A6RwyB8nAfl8U9QEKvtReIm/ivhvuqDmDQ+CyqMm0z275Q92KC9rTYTyicdmJIDx +1LYKFBilPbiezfxJHgkZXbFoTDpFQmUb2f5JNO2VtfW/fzFS63KGpohMUtoUXd2T +NRNfp6ddekqhZmocoluiR0wXuf1r4SgDLT2wuKHx/VLeXCbqu38DOCkTPlpRWQ97 +6wfpD1VDjQKe/QKVyAnmOBNER0QdUjw4nt/RqfjQzVJe9W2r2irEsmVkVwKCzSKT +FV2VvErZJmSdaoZVGwPFxdwO4mYv32UIVhWPKaHLsgeCHqBgsI71t3EOdD48Tv2K +UT9AZW/19vGz0zrChp/kSkUzSRXvslp3zyBIkB4M2GU3Jefs3H3YxmEZdOyvwL+5 +/ElaygF22FfA3+i918o4oQx0EeA/fct7lluWIgw6qm0wpzOzup76D+55w6v3bejw +5bfXZeAuqLWqGi0AEQEAAQAL/ROF2ifk+Fbw26TstekdMEEyykkLLUwinAiNxMps +qs64JrdGsP80i0djHxzSp/i0sYIDb1bldlvH60kQKmrHsCF8LBOcDyv4geuu8wS1 
+2XRbJn93rfhejyoYZtQNiLj7jTWH2rA4ms5fQpZYs3BnUT+SmDdvliNlOUnXWKdD +8ctxE1fK9ir30MI6T6XSGFQUmIISUuo5/Z3IjP0iWxBoJ2gB6kDb7uGvKy1LFvQC +nRrMWYwCTtRUsGA7G2ctYntGcpITKZ5jdrkWsbqLuJzo2rsJxeXRR/P7cW0ITRq6 +an80M7ECGnYFfM/cuZUKpAUtCpMYqhEHCO0pXJc7rHuWILnit0A0p2n9XIy+7k3T +dk1gOV7+72l22WTO6oVsnz8woVjoBWnAHCEp2nhOsdHFEJPUUm7bGmAx5lds7ttY +uRqax/WftgOHYUsDgmfl0ZmO7ZeJNjvwj6nsAyBX0PN9hSZy9j1kty92csDINVMx +7ObSyxAiUrHgGswYw0C10aATZwYA/ToaLniEFHSvHeSSGrUjXkvG6de5UlE7J/kd +73Xlg4QFSq+Vz0zGisWAe3+vGvA4LYg2J3ACVgf/XNIM2i1U0XlDmwK3CsbRR+OZ +nQKfLZdA+NvcpUptFEwdrVJl3UOZP8v14xBujbAD7D2yqVomXBnulokcwGV778US +WxM6qKJnf3JbrE4m4/hHDvdYilPA0uUIAMLujvnE3WmlGyICVN4rn4rzaXINl2QX +tPXEFFy2XmWxTNEUSskXOLHYFMdHBgD95cOe3sjFXx23ePZ9ncabSqaamFhJ5tVz +8DgFhpt8Fme2zoH61dbeQ7n4iVWqn+ev9z7MLwJjykI2W8Y3H5EHFRZ2PGk7eyh2 +KqXDJe0/btQMFWqXejPmsA5Avwtu2cX3I/j8DIs7i6wFwkkpcN0I+9aCnoQfADZ0 +rF9kS1XpJwNL6QEF8m8GmF8bD2DHuBedBRyPg6+r+dQpH3dwr9ASwL1jRaMGULNG ++jAqe2uO5ixzGU+gHvTk8wwSRtjddOsF/1rLmB0c+jXE1PJqGpA31NNmIXP/Qrwc ++splVxZhmckz7nJsAd69iQ1A6m524sLfTd9XrHyf+Pqkwo40yYNGbHJ00Mn2FBWK +pUrteo7v+ZheHv5uvesnmEQw7S/7VjFIZCuB4tvxi7saU9NgsDnZJSDHLVvSHWiE +u59c5PK7zv9p6HP1EO6yf8F2YWRUloTGJEikOt29d56E7kZH+arGVkqMH+zGG7tY +CMBOXa9APH/fuqZktpjPcKfWu3fZ1gi3O+D6tBtyb290QGNvcmUuc21hbGwuZXhh +bXBsZS5vcmeJAdQEEwEKAD4WIQS9WSmpZ/8PmpeJcOBhVaLKF4hdfQUCYlH0+QIb +AwUJA8JnAAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRBhVaLKF4hdfbnYDACi +Xv3zr7Ly2X1sXIoQECE8P8GLM5XPhlh6s+F8qp/XAw+7W/UrCDSKyAS4qo1gZW4a +C1Cx09rcXJthv6EHTNaEJucg9qA8G7fGt0kptaF5l9dfnhGUgbS3XQZ1uueSMaqt +WmycIBv39k7s1NG0ZNxrDT5/s1QQfyfdK0bDSSjcw6aoow8ZKCwzW0T6VDbgbluM +fYqHYIt0qxoMw9uMvj0wRNMfsOJI+tj0Gp36i5WZQcl5cwGaE0T1ToGAm0Y9/aoz +LAsF1wb8xgPh2/VbYiZzq1gBDZ7tNd1QL5MFbSGMzpcIV5T7NDP5hLE6gDLkCSNq +yGuLM8x795W/FwgQOzkbemutS3a1eKVhgvh4DmxwYMKFMNsHRLL8Bl5OBmQw6VBV +L+lLgvPLkW/dLqvOBzk/Cm9hoz+K7Ra1eTCfMHOyc5vAOqePi1uL1jl1FrwFz0BT +2nhgpygIca/1l1LprqmnDn+YSeSni56j78EpX+ADmGEbonv6CDqJYNFKfHVh6tud +BVgEYlH0+QEMAO6E79SCGYr8Cypb0K2nECjHxY/q/IhCt7hWeVQp4nYaG2PclcN1 
+BiY85OpQIZY0fK+WnXp6EPTtnQidUip9udd4IroHZNLh8/08wPm/CMnxKQOFfOF+ +wJfDHeg370DZfU4GN4oyCx+f7ngLAtRZoSMuvx+IJAQjgUbNrE5nQSiFjqAS/eNf +yeloP4rJv9QQu89Kim9V0RUD5XVWkPG1AsLNeWjJADi/qrbdCALu7hY/b1ZrjFCv +RWD7PbrXTiPftdRvAOJVc3AKsXpBa9TxLCELby6riuYsNNzix4ihQ1l/qFOgbEYf +YQAmVYXPpHRmJ71kjncP6ZXzlsIUDzx+U4d2j2l6kryzYBkS6P8ZGJqzotWN6wDy +oDt16Q1zfU285iq3nUm4Szyfl/w8XfFecTOUG6b/N2TX3O64IyMrCvAGB0HqqPYg +xq1BHf+5GnoUmS/wjnhAyWigM9Lk1zOocxYAwaeIw2xoiht0rXF5lgtC4tRmSM10 +VNRUIPQQAaTzSQARAQABAAv+PjFimb48q81Rmf9r18POhluky4SByYPgBMxjgYsU +VigziSPs1xTOAC1zoRc40sIn2t8Ce/uVLVBB2Iuw74xt512XbHteElDBfoAXb2ed +Ao8D2zu01tVmoYvKYhEnrPio4C3l0H6BAQOCOkHgwH8IcbSQOEgW4A9j95LMgXsR +9d9xU5LwKZgB/X7lxqoZf1HHruLoWuR9CqZD8Acc7zF8IVBTfnuet61edaUHoEAt +y5ZJ0TZk1WsGQ6XvMCdQ5DDB50sdzgN/IC5qVVIAqBdo8Mr9pulu82tWhq1t3MyF +G2EzP7aPp33QhWdrOCJLRWgEPS7eHm2q3ybCzHjPJhqx+wOs0uJcOFPI1zavwNTL +LAuF/+zG3d5uAhD2K7GPVNXLFbps91giNhVQZudG5o49wF8Qac7XZ6zj+zrBi1w+ +MnWLFOiIaWY1YAXB9Bms+3B27YXVz9bqx0J3cFJpXLpkoVz58mksUWgIOTAF12iG +s50QdsqqbTJbuCyi2vZUN11jBgDulCiL+rUnfd/Q91exHxSmHXE3AqUEOiuMoiBO +OTWWBuOacP4YD3zEJbqqBsM0f93IdPBAP+5SUKjeWgte3U7Hnu8FgA+rNDtu1TJO +DT9jMs5LjMdENRY9GDOzzGM2XGBF4EWwDnrMFqhO1Va0mC+KyNynaW3Cqxaw3mh/ +s4fcDMqAyPVgJ9a+dXR3SFW+tKS/8lEnoU1WKuZKzW/qEXuyQkwhGlQPTBbezoJS +bqVNW8JRTIBswX69Hz8A32Q+IcsGAP/vqr2/Z6lbCZa1Rj3uZ68+ZwtheK6YJQrz +rlz0OxjZtR9EfOmszuA89+VuoRdXlja3GxUZ7CLi59lKdWB3aHOBTzN12unEdP/w +Q2JC6RdnZGDa/nhiyXesfx4EoHxn6tHJEAKh7vefzBZNlhUJIFcdGsbRo+8Y2PdB +21Mx+kUfQhDboZ/0kPNqR5kPTzaKYe699Y/Fcb9mmoszwmnt9GeT1X55ijVLWIAU +5hsHpJmAvy8MfdAL/vkNZIdoU25MuwX/V2JFcetc9Gldi4+YYsPxl63hmHaPnet5 +Tm3xlU7UzI/jSsE4oW7enrNZFSBZB9ClGw4BnD+7vLU3Wd9/dNZbzQ0mUs6FyRhq +Eu3LeODltwmrN/gOzSniqbB2NudRJj1osmeoIMhqo5QYB9GmdNHckixhwsJqVTzC +8jWJkapQMQql3bPtIrIweri2RfayvO9hsacjnWbDfGt+xPJzMshGhEXAJWvH1HHu +9HXV4UqduapXc45VVvwh5xn59L2iU9Ui5BCJAbYEGAEKACAWIQS9WSmpZ/8PmpeJ +cOBhVaLKF4hdfQUCYlH0+QIbDAAKCRBhVaLKF4hdfdOhDACicUPEteQnoJpccE7V +4i+fIDnbrQcgam96/iTVDE7rluGPbonfa40r3F9aKJ1wrU4gWSUuI7AC38Odkxqr 
+Z8Wk6kI1wcUMSfjE0gOCNhB6sPO2IhU/SK7hWzqm6ljOxic6yM/b81YDRGSLWGRJ +qVEpcXbvMwWb/4yqmVjkKwx94OGmx5MXIYh0FZaMxYTbdIzlJ6+kiYVKPKoCreoX +NIux/+zpj8vN7JYZl5B/ioVT7/Qnfblb2z1LxyyH2ikKn6+LVqgyJhdBUxKfUtnc +nAUSLvQrYjI27ssAID1ojTRpYaiE/TezvQ8XKhweDU7zLu+Ahce2SSujPujpkkze +rfq+eI732Dky/xEeJD7kKiJ2f7nhqt3nvoQeQcLnGRKds6FLPlNB7Rpw2LYQGbUF +mOYlE4tubQmOQU7wKwL62tK2v5ci1qHFiAdl269GaDfXNa/+EAWfx7ocrNR7CJ5L +EWGjm3Pgb8ZDD6f0EztHHtGHbjIu1wnin+Oax7Wh5kzBvis= +=EDaP +-----END PGP PRIVATE KEY BLOCK----- diff --git a/Secret/root.gnupg/openpgp-revocs.d/BD5929A967FF0F9A978970E06155A2CA17885D7D.rev b/Secret/root.gnupg/openpgp-revocs.d/BD5929A967FF0F9A978970E06155A2CA17885D7D.rev new file mode 100644 index 0000000..5083603 --- /dev/null +++ b/Secret/root.gnupg/openpgp-revocs.d/BD5929A967FF0F9A978970E06155A2CA17885D7D.rev @@ -0,0 +1,35 @@ +This is a revocation certificate for the OpenPGP key: + +pub rsa3072 2022-04-09 [SC] [expires: 2024-04-08] + BD5929A967FF0F9A978970E06155A2CA17885D7D +uid root@core.small.example.org + +A revocation certificate is a kind of "kill switch" to publicly +declare that a key shall not anymore be used. It is not possible +to retract such a revocation certificate once it has been published. + +Use it to revoke this key in case of a compromise or loss of +the secret key. However, if the secret key is still accessible, +it is better to generate a new revocation certificate and give +a reason for the revocation. For details see the description of +of the gpg command "--generate-revocation" in the GnuPG manual. + +To avoid an accidental use of this file, a colon has been inserted +before the 5 dashes below. Remove this colon with a text editor +before importing and publishing this revocation certificate. 
+ +:-----BEGIN PGP PUBLIC KEY BLOCK----- +Comment: This is a revocation certificate + +iQG2BCABCgAgFiEEvVkpqWf/D5qXiXDgYVWiyheIXX0FAmJR9PoCHQAACgkQYVWi +yheIXX3dlQwAnd6+tZJt793clW/JxxAXvF0si88itE8XgIOfma3Nnl6Ash0hr/lm +DxE4h6bjBewcDfN0V4Z+0Lp3cpJTvKsiWGBYId2B8Mh/yofloVlWIaPEoFEsQ6kF +2zjyRIM9/XlTuskvNyyO5zDRrDhM1zs9mWEx36zZ4ahP/l9y7+dQT47JFN1ZYkJx +jQurb4wupwyXjsekiE8Gt9/HjQznzE/2G547yKEIlItti0Os6Yd47ZicJyWUlqcK +7hCE1vpjOtmADC0kY+UCkLTNKxpJm+40skBbAgQQ3rhANmI+qYygwTlgb4B9eXbM +mmtZpuWMqDoshM21GDJUsSj/i8an2z3+QJ+oUD5UFkc6gwHHtkvxP2ZE9fLODT2L +4Lh3Kv4PnYju7ZhOK13ZXIOdctW/jpJb/uqZhCCkR7Kei1W4HnN7oyn6JVZP6YmU +lopUAsDTp/pDuF5Per4E0bQkLjwam5wUxkb2R3xDwbk/i8yw7AYpVMosx2wIwXvd +WtWpqyGBPHPo +=9WFL +-----END PGP PUBLIC KEY BLOCK----- diff --git a/Secret/root.gnupg/private-keys-v1.d/25C516A431AB23545D43E3E036DB2977DB38FAF3.key b/Secret/root.gnupg/private-keys-v1.d/25C516A431AB23545D43E3E036DB2977DB38FAF3.key new file mode 100644 index 0000000..c12b5e7 Binary files /dev/null and b/Secret/root.gnupg/private-keys-v1.d/25C516A431AB23545D43E3E036DB2977DB38FAF3.key differ diff --git a/Secret/root.gnupg/private-keys-v1.d/C857414E531A51C8E3160070AF7AEB99E5419BFF.key b/Secret/root.gnupg/private-keys-v1.d/C857414E531A51C8E3160070AF7AEB99E5419BFF.key new file mode 100644 index 0000000..09e5648 Binary files /dev/null and b/Secret/root.gnupg/private-keys-v1.d/C857414E531A51C8E3160070AF7AEB99E5419BFF.key differ diff --git a/Secret/root.gnupg/pubring.kbx b/Secret/root.gnupg/pubring.kbx new file mode 100644 index 0000000..d4774d6 Binary files /dev/null and b/Secret/root.gnupg/pubring.kbx differ diff --git a/Secret/root.gnupg/trustdb.gpg b/Secret/root.gnupg/trustdb.gpg new file mode 100644 index 0000000..8ada384 Binary files /dev/null and b/Secret/root.gnupg/trustdb.gpg differ diff --git a/Secret/ssh_admin/id_rsa b/Secret/ssh_admin/id_rsa new file mode 100644 index 0000000..f4936a8 --- /dev/null +++ b/Secret/ssh_admin/id_rsa @@ -0,0 +1,38 @@ +-----BEGIN OPENSSH PRIVATE KEY----- 
+b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn +NhAAAAAwEAAQAAAYEA18V56hWlKt1gJplvz/DjJt3HwBiaC9VAvMo27Ec7et0ZrrCA9grz +0yXzv7GzQMQyhzwb2CaosAWPFodlQQ16DtpVgCvSTkr1zGWUZgYe2JOvjbD0m3meh9w4M3 +Zirm7OBOVxHZJjoor8ohgVosMwygDqlr2+tMlgwzRLh8hjc5yo8i/pwDs7pdYT+X9t7193 +lYU8VdM3QpZLLyKaRrGGNxL4TMWrJ47xjoRAs9T6v/Tz8WpGNZASjBY/Moe+VH2CckYlHB +VFWQ/3UMgzI+4LYob+ADYlACIJ/eCOBrfbfGtjoi8qyoSQME0K7OAgPrLmPt7g3KdhbkAL +7s6WtpqFLnzXrUJ/WGAqQkGoqPCNzfTeTzqjxrTU//Bb9cMFrJf09+tzSZmu5a7UFyoKud +mGJmlDZx8Txaiz//RC2gCmyq103pdHsPy8lRDukCen1O5RNy2DBeQ54JXjqbjh8kHSCSr0 +qAm+4pQ7hvHpGXd2RETobch5a+1HB67ZmTGI4ZUXAAAFkFdJqqNXSaqjAAAAB3NzaC1yc2 +EAAAGBANfFeeoVpSrdYCaZb8/w4ybdx8AYmgvVQLzKNuxHO3rdGa6wgPYK89Ml87+xs0DE +Moc8G9gmqLAFjxaHZUENeg7aVYAr0k5K9cxllGYGHtiTr42w9Jt5nofcODN2Yq5uzgTlcR +2SY6KK/KIYFaLDMMoA6pa9vrTJYMM0S4fIY3OcqPIv6cA7O6XWE/l/be9fd5WFPFXTN0KW +Sy8imkaxhjcS+EzFqyeO8Y6EQLPU+r/08/FqRjWQEowWPzKHvlR9gnJGJRwVRVkP91DIMy +PuC2KG/gA2JQAiCf3gjga323xrY6IvKsqEkDBNCuzgID6y5j7e4NynYW5AC+7OlraahS58 +161Cf1hgKkJBqKjwjc303k86o8a01P/wW/XDBayX9Pfrc0mZruWu1BcqCrnZhiZpQ2cfE8 +Wos//0QtoApsqtdN6XR7D8vJUQ7pAnp9TuUTctgwXkOeCV46m44fJB0gkq9KgJvuKUO4bx +6Rl3dkRE6G3IeWvtRweu2ZkxiOGVFwAAAAMBAAEAAAGAKECcx8CV+XMm9sx1AXPMzHlfRE +TSqBZ2Z0HKETYQsJECs4YV6NCOP/u6hy5dZF21l2jtQNulaIEA+pDzoLkk5hRxEuIZ76Uo +SaNBle7aXkje3S3/0+lSW8IHcgJJ0oS1RlCPU5b1o2MOUibwElcbiPO2z7xCEXPn60KcPI +5zjyPQmK27i7MBI6TWQRs2pQtIQcqDQPeQPYnQKNDpuvpvMWMGkzvk/BI8mfuuHl5DEQBf +adALnP5tl1inHYQZS6XGElx7PrVuRahv/h3Img7WAI8G7whRmxha3nje2Xk4hY3M2mlaUJ +odHVaYwpv1uBmeevfUJ38AGAYmGIeijuqC6tx6/4Zn1qc6DsH272nOnbYmuHHJpb8p8LbV +xiHM8VsSAsqt6LRUKoaQddrZrhL2N0LT2iZ0KIFKz3OnMXYM5R9N8K5hq5o012Kxk4mbHt +e0fF3IFBoUeySZMRnPYbHRML7CcHdJQqHa2w+HwR06WdauHw9SLHXVMUm7VB3KfuohAAAA +wG0ARc3IXG2+nYAP5MvcluSeYIyqqXb/l9H2hnioXzGn684t/O1ZCtuBKC7jXYKL7+UeSZ +Ww0j1TvVnOFqSH5wwHfuY5+fHusf1/HDuhmfoo029dWthC11PjzZYZOFl4D5CgO2SX0Pbu +Gzw7PAUubjdIGmbiYFClnTPP9g72fmNPlflTrDjIDh7oSjQCJ48c/UDNS6t95bIZmA35Yn 
+BN0u0DZPHl1vtsLjWH3p/mBJPYCqUc6QDZ2nFE9xy0VJT6HwAAAMEA7lorbF3zkG6wKoH1 +PHqzNl0hvObOfKh9XilX96ijJQUfx+jR3ScU16xEwgUDPkN06agYtT9b/BCzcOheug4Ve/ +2WWopTI0m2ZgXDIlTwt7yIktNxgIdLrDyp8F6mhbQnhpcVL8Peekl/Bp1YbVHz/t4VrWQs +IBZJ8peb+Wlv/HuCWYjrHxM2J62ThXN5CS/lmzkXopLucexb5GKTJ0We2COIxR9AQSN7+p +PL83sv32ZmqF0OD36QFAvAXFIdzRs5AAAAwQDnv0y/UophQqQbZAs8LnQzmKNkMyQFYY3S +Lx86ZtQx6XXPAVvxgIoj/lPQuC4g55QUS/LXep+pP9fUFvvWlbHgqMJZWT+okJiA+z5R86 +P3AUGfPtL4OdroZPRgnHc1IMpDSo2v671uT97AKIi8lOHNO6EJdZcjIjIWcJKAVD5nFl6Q +sQIdKLWsl3k7IcN+wT2ABD1zRQ3Yl0O5t0l8GpW39fmzjsmiwdWuvcm2x2TxTmfaqdVmkR +qOUKDCECbDIs8AAAAXSW5zdGl0dXRlIEFkbWluaXN0cmF0b3IBAgME +-----END OPENSSH PRIVATE KEY----- diff --git a/Secret/ssh_admin/id_rsa.pub b/Secret/ssh_admin/id_rsa.pub new file mode 100644 index 0000000..bddc724 --- /dev/null +++ b/Secret/ssh_admin/id_rsa.pub @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXxXnqFaUq3WAmmW/P8OMm3cfAGJoL1UC8yjbsRzt63RmusID2CvPTJfO/sbNAxDKHPBvYJqiwBY8Wh2VBDXoO2lWAK9JOSvXMZZRmBh7Yk6+NsPSbeZ6H3DgzdmKubs4E5XEdkmOiivyiGBWiwzDKAOqWvb60yWDDNEuHyGNznKjyL+nAOzul1hP5f23vX3eVhTxV0zdClksvIppGsYY3EvhMxasnjvGOhECz1Pq/9PPxakY1kBKMFj8yh75UfYJyRiUcFUVZD/dQyDMj7gtihv4ANiUAIgn94I4Gt9t8a2OiLyrKhJAwTQrs4CA+suY+3uDcp2FuQAvuzpa2moUufNetQn9YYCpCQaio8I3N9N5POqPGtNT/8Fv1wwWsl/T363NJma7lrtQXKgq52YYmaUNnHxPFqLP/9ELaAKbKrXTel0ew/LyVEO6QJ6fU7lE3LYMF5DngleOpuOHyQdIJKvSoCb7ilDuG8ekZd3ZEROhtyHlr7UcHrtmZMYjhlRc= A Small Institute Administrator diff --git a/Secret/ssh_front/etc/ssh/ssh_host_ecdsa_key b/Secret/ssh_front/etc/ssh/ssh_host_ecdsa_key new file mode 100644 index 0000000..f94382b --- /dev/null +++ b/Secret/ssh_front/etc/ssh/ssh_host_ecdsa_key @@ -0,0 +1,9 @@ +-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAaAAAABNlY2RzYS +1zaGEyLW5pc3RwMjU2AAAACG5pc3RwMjU2AAAAQQSzLt7eeDB3cROmpdOSSu6wsBWeGCSC +CZOOI1CEdWnpcF8FgOetXw+e7TzOr/duVi3ZvmHJFu6OgSgRjcTGV1BfAAAAqMDPzA7Az8 +wOAAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLMu3t54MHdxE6al 
+05JK7rCwFZ4YJIIJk44jUIR1aelwXwWA561fD57tPM6v925WLdm+YckW7o6BKBGNxMZXUF +8AAAAgOFibEyeBVJSR2TchtszdL84Fmurj7V8w5aqzx58AagoAAAAKcm9vdEBmcm9udAEC +AwQFBg== +-----END OPENSSH PRIVATE KEY----- diff --git a/Secret/ssh_front/etc/ssh/ssh_host_ecdsa_key.pub b/Secret/ssh_front/etc/ssh/ssh_host_ecdsa_key.pub new file mode 100644 index 0000000..d6cfeed --- /dev/null +++ b/Secret/ssh_front/etc/ssh/ssh_host_ecdsa_key.pub @@ -0,0 +1 @@ +ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLMu3t54MHdxE6al05JK7rCwFZ4YJIIJk44jUIR1aelwXwWA561fD57tPM6v925WLdm+YckW7o6BKBGNxMZXUF8= root@front diff --git a/Secret/ssh_front/etc/ssh/ssh_host_ed25519_key b/Secret/ssh_front/etc/ssh/ssh_host_ed25519_key new file mode 100644 index 0000000..7baa1b3 --- /dev/null +++ b/Secret/ssh_front/etc/ssh/ssh_host_ed25519_key @@ -0,0 +1,7 @@ +-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW +QyNTUxOQAAACCdCBsZuSJ3ku24tAz5egtKOJwRqJikCDCUbyxMgs+00wAAAJCOKYLDjimC +wwAAAAtzc2gtZWQyNTUxOQAAACCdCBsZuSJ3ku24tAz5egtKOJwRqJikCDCUbyxMgs+00w +AAAECe3IokbTa8Rqm1FRPlBTBk2gpdhBDgFHlf/U0WETPBvJ0IGxm5IneS7bi0DPl6C0o4 +nBGomKQIMJRvLEyCz7TTAAAACnJvb3RAZnJvbnQBAgM= +-----END OPENSSH PRIVATE KEY----- diff --git a/Secret/ssh_front/etc/ssh/ssh_host_ed25519_key.pub b/Secret/ssh_front/etc/ssh/ssh_host_ed25519_key.pub new file mode 100644 index 0000000..db7cc73 --- /dev/null +++ b/Secret/ssh_front/etc/ssh/ssh_host_ed25519_key.pub @@ -0,0 +1 @@ +ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ0IGxm5IneS7bi0DPl6C0o4nBGomKQIMJRvLEyCz7TT root@front diff --git a/Secret/ssh_front/etc/ssh/ssh_host_rsa_key b/Secret/ssh_front/etc/ssh/ssh_host_rsa_key new file mode 100644 index 0000000..eed7cac --- /dev/null +++ b/Secret/ssh_front/etc/ssh/ssh_host_rsa_key @@ -0,0 +1,38 @@ +-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn +NhAAAAAwEAAQAAAYEA7jYjJkUPmhpY/ycO8YNAs3rgakeii7dFdDkO91BQONN/nruqzAd4 
+DypOxqaASXLer2mWKwx3ozTZp4ZrGK6c6Ma2ZkkBAqmrtnMoLGZJE7kSySyUYmL8Rsug8I +bB6zmvpL0XaJ/bLTO4iYjYyrcWVxeRt2Sgzo/HJJ+0Dt8/XcykrUcwFuE1S5N3I8I84Mih +Ap456G5TGsi5mZ0kslCi8X+UnMgkWdSJHbnHJuKhlB2e+u/jeyXjvHvSH7fFv4RsfUnqdV +XSoYBTVfzAvKdNpQfpPGQ9xsvsvN3M/+ETZuH/bnbfMcvUeEoMWUx1Z83iCZPR4yhxgB3r ++z7uWY9DfdRIuiOhBrOCDORduxh4pYy7GaWOqQ4+5pJcYM3Ghboy5Ks6PlWwEcroMSlJ/o +a5sIzA31bh1YR6No23RMqFqrzPEk21Ol5GU6Sm5PKa31EBudkQhJvbcWobtjuYG/MJpqsF +3/fH6rOs4wqmjVnPBgQp6HzNxTD1RgNGF4qSsTQzAAAFgIJ7DpyCew6cAAAAB3NzaC1yc2 +EAAAGBAO42IyZFD5oaWP8nDvGDQLN64GpHoou3RXQ5DvdQUDjTf567qswHeA8qTsamgEly +3q9plisMd6M02aeGaxiunOjGtmZJAQKpq7ZzKCxmSRO5EskslGJi/EbLoPCGwes5r6S9F2 +if2y0zuImI2Mq3FlcXkbdkoM6PxySftA7fP13MpK1HMBbhNUuTdyPCPODIoQKeOehuUxrI +uZmdJLJQovF/lJzIJFnUiR25xybioZQdnvrv43sl47x70h+3xb+EbH1J6nVV0qGAU1X8wL +ynTaUH6TxkPcbL7LzdzP/hE2bh/2523zHL1HhKDFlMdWfN4gmT0eMocYAd6/s+7lmPQ33U +SLojoQazggzkXbsYeKWMuxmljqkOPuaSXGDNxoW6MuSrOj5VsBHK6DEpSf6GubCMwN9W4d +WEejaNt0TKhaq8zxJNtTpeRlOkpuTymt9RAbnZEISb23FqG7Y7mBvzCaarBd/3x+qzrOMK +po1ZzwYEKeh8zcUw9UYDRheKkrE0MwAAAAMBAAEAAAGBAKcm3+VDwp3s7RQlsSuxYR5QE9 +cf6yRE9vyF6UWLWq91YXDd2QyQFSP3GQ312cEwVKgb3B7bAbxJIo2WGeJY7Iu+nFEL2ySm +MHK3PbJiF9c6H7+Ag6LCOKnoy0bcGIjZkrFzalClE2QVjeEcYJtme8ujI0Hf36Lyatf9JJ +jm+Iz2Q3u/nzP+1anxkUFLU/KbdbfjlVjOyYva27m59f0V7jCtyHd3TWKna4urR962WpEX +c+47lJFeVf51mE1fY+hun2N99CC9Rx+NzONiKdOYMjp4j9N6HYRXzfte2cwpj9GBEZYv0+ +8qXfqlVJX/HE+e4tbuManZRJnTW+W2JDJP953/pDaFPza9JQwDtjNZuu057Hk8qezp59nG +zRGwdvHffb6t9I1JzX+Gl84aLoQ71mhUfkosLVZeuSKJ4fxVjf0rXr3HkJk4WRSvCFebx0 +P35RxPRk51CMBMSwQ9732j5/3dmg2tqAG/iRp/I+hyb2v+6IOLKxMTGAcLsDnf0g2dcQAA +AMB4JsKYa8qH+Q0hj5egc1ht9fWy71mnoPKRdsAiAL7irYIax61Ygh4CmnMCXDG6O8l7DA +KnDLDjG4qoBfGGxLhotoZ7qJiZMBckzjw0pt9GY5n1euzOh4vccbEF6jI+UmUPhkQsFNF3 +89GgVupAqDjUD7P44EyVp0kMMcuF1fGPhPFayKBbnyYlqhmEvzGcGZK10weJ0IvEFAkbV7 +NJc0yZFiJe/1tW5abLAbYm0jXutC5H8ZHYRVP0Hl4t23G89OsAAADBAPdXrsJPY8t1ug9s +y8ZtqWFZXMy0By6EYp+sPZesFJgJ4xoxfQ1ySG8bQqFvv0rfcyOiNr+9VlsQ5WuIsj6aIj 
+/wUGl9DblgbNHyV3mQFuwB55qXmtH3yNwqmDT4Sx8hxhulgH5z7hvT1RWqqTMk6un2rH+Z +b912IWb1hp/hbVZDKzyea5ug4C0syxZdQTfUZH8VKjYJZdnipYkMf5+K4a0twwM/fm2pPw +wNovth9Q/PYZcm1pN2/KJgsns3QUPj2wAAAMEA9oyixSjTSi6JFLBtQvJQuyEvEHUjdZAq +6OlGQ9IKSU2WDRvCgjzTv0gqaVr7LoKc78TbWd4XViZzYW9WLuWZUU/FC0N+WrMVW37rMJ +2bOt7s4rMEoFyPNyc0GZasiXUx4dAduOB6fhVzZdscf1Ob3GXBovqLH0vHei10fEaIJWUS +OSw1tuT6K8oXUtJkCaf+6KzZhYDYSKAN0BoX+Hou97zvQMSRtEQOq3XSep0TEa0sMc6pS/ +p+zyhmlsC6agyJAAAACnJvb3RAZnJvbnQ= +-----END OPENSSH PRIVATE KEY----- diff --git a/Secret/ssh_front/etc/ssh/ssh_host_rsa_key.pub b/Secret/ssh_front/etc/ssh/ssh_host_rsa_key.pub new file mode 100644 index 0000000..2e3afab --- /dev/null +++ b/Secret/ssh_front/etc/ssh/ssh_host_rsa_key.pub @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDuNiMmRQ+aGlj/Jw7xg0CzeuBqR6KLt0V0OQ73UFA403+eu6rMB3gPKk7GpoBJct6vaZYrDHejNNmnhmsYrpzoxrZmSQECqau2cygsZkkTuRLJLJRiYvxGy6DwhsHrOa+kvRdon9stM7iJiNjKtxZXF5G3ZKDOj8ckn7QO3z9dzKStRzAW4TVLk3cjwjzgyKECnjnoblMayLmZnSSyUKLxf5ScyCRZ1Ikduccm4qGUHZ767+N7JeO8e9Ift8W/hGx9Sep1VdKhgFNV/MC8p02lB+k8ZD3Gy+y83cz/4RNm4f9udt8xy9R4SgxZTHVnzeIJk9HjKHGAHev7Pu5Zj0N91Ei6I6EGs4IM5F27GHiljLsZpY6pDj7mklxgzcaFujLkqzo+VbARyugxKUn+hrmwjMDfVuHVhHo2jbdEyoWqvM8STbU6XkZTpKbk8prfUQG52RCEm9txahu2O5gb8wmmqwXf98fqs6zjCqaNWc8GBCnofM3FMPVGA0YXipKxNDM= root@front diff --git a/Secret/ssh_monkey/config b/Secret/ssh_monkey/config new file mode 100644 index 0000000..4110150 --- /dev/null +++ b/Secret/ssh_monkey/config @@ -0,0 +1 @@ +HashKnownHosts no diff --git a/Secret/ssh_monkey/id_rsa b/Secret/ssh_monkey/id_rsa new file mode 100644 index 0000000..a4084a1 --- /dev/null +++ b/Secret/ssh_monkey/id_rsa @@ -0,0 +1,38 @@ +-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn +NhAAAAAwEAAQAAAYEAng3cBVl5eiYHNpaS0ziOyz+JEtSP7A2EDnuVg/vaZ0yEJdo/qCJL +xHc1Dp5VSWpexic5KEJ3S87Z7SE6fkaDKW7Y2Gg/6mT88eXMmytYDM0JHufRa64mmfJ7f5 +Ggm9adhoiH8MAoicBMNa7ILwZfxtr5al5//NW7OMXCLE73ohGqGwPYS82Dy2PwWXRBcZz2 
+qcuLNTX1MyElMnKInatIwtbgNQXiU98hO7dfT1GZLk0YABJXgahf81ERbt7oPntUeWnuJE +9M4fIHILXrNEBkifGe4uh0K20LxyO7Z3L3xAhwxuBrS6r5l5hLlGDj8k36xYtRC9fXt2lY +xiMOk2cVaWj7q1Z/vLZuih0vsnB07s/Ge8tvtZh9zI6LLGH77n7rCOXxgktvHXSD9JlN4P +1ZmOVaYwHOwiz30UdEY/RYZYGE6+wZHlSF6ROaaFrX6yebg6WTK4Yv1S16YO4oRgvnJB// +r65O4yX7fsNXF7WjyV3Iw/NWs9T3IUf7AabIsVTLAAAFgF6mN1depjdXAAAAB3NzaC1yc2 +EAAAGBAJ4N3AVZeXomBzaWktM4jss/iRLUj+wNhA57lYP72mdMhCXaP6giS8R3NQ6eVUlq +XsYnOShCd0vO2e0hOn5Ggylu2NhoP+pk/PHlzJsrWAzNCR7n0WuuJpnye3+RoJvWnYaIh/ +DAKInATDWuyC8GX8ba+Wpef/zVuzjFwixO96IRqhsD2EvNg8tj8Fl0QXGc9qnLizU19TMh +JTJyiJ2rSMLW4DUF4lPfITu3X09RmS5NGAASV4GoX/NREW7e6D57VHlp7iRPTOHyByC16z +RAZInxnuLodCttC8cju2dy98QIcMbga0uq+ZeYS5Rg4/JN+sWLUQvX17dpWMYjDpNnFWlo ++6tWf7y2boodL7JwdO7PxnvLb7WYfcyOiyxh++5+6wjl8YJLbx10g/SZTeD9WZjlWmMBzs +Is99FHRGP0WGWBhOvsGR5UhekTmmha1+snm4OlkyuGL9UtemDuKEYL5yQf/6+uTuMl+37D +Vxe1o8ldyMPzVrPU9yFH+wGmyLFUywAAAAMBAAEAAAGAdDYmj3xhWFG7vgRqgom0XHcj10 +eZZuvtLCTsI3Y7+PYGuDpH0d0drqAjz9LVTLy8YKAYY6SzSHcYP0XOV2iLKhzJrhzA2hxU +65uWnIT7IbZkPWgf0DflRA5JhdvSpqLfgjrDEV6Ir/hHULVplUHvjCwXdYF0Q7f3B+BITA +HoDC9GzsQ99kZu4E5kO7HCKMJLjz8M5Rv+ZRC64+PY1W1Ke5A4nGPuLNMEAX9rwctygNvI +iMzzsG7X1fTGh6m4Q7CznSCKPn0oPr1PNoIwUiMQzxH41L+v08AFbQ45O+kzxR/JsCS8u0 +42LVATCenxHYbVofKM26KjEYUbl/fxNmKEqRrbpaRIHM4H0aX2T1pYp0MU8dOX7N4p7ue5 +OnDanKFOyPbijkQUcK4wewH6BJ+T0coJOOl66imMlTYhRKhJpHIoKTWmnOMDzwS0hO8bZ5 +NuepYzjIdrC9juq0HtG3Wg8yqKLpJlTWCsWnk0ijYuccm7YKm67L0UDPAtz4M+4cRxAAAA +wDOqhuiqzJXx3ZM9RLJLDk9+K1/fZG+KZtQB4fD3n7pTJn2kRj2SvWtCFEEeeFznyQ5F0W +6Lkmzt/lSlKGM6NpnMpGb44uAKNoheZ1xz1Rbbwav643vXne0aC60fa+7kGk+LSnTm+sKi +GxNhrb1ZYn05dz6lTT71fIExAVWQaevwZKrd7+S2t2TSEemoHEKElCx7FGl4A+OQmyNeaC +dMKAcfepXmftqW09fesIdtmiSZmfT7+SR4Q5hHuYjC/WEwsgAAAMEAyS4Rr2xaN+ndQB8r +Xi9/VqIOQATlfYbssVheDhvsdHVdB9QUhZhjqdSIeCEzRo1JntCo2e0bXsq2ifXgudwsau +Vc4nN4OoJqynns2zzqWcPopo8HTgsIx1RdC7syOljVfMuy1VqZ55kcA4BvcHGx3gKQp1jE +B34wOh1T/UFQdttznvYw1YdkHY8KA2AICOiB2dyiOUdvTpFjPxIeMTQcW7PD4LhSE489yY 
+nxvF1UDqG+AMFp0r2/sbIZWI2HYvyTAAAAwQDJH2pTN9x2ljEdNDNr5sr/bx9gr3Vk5hav +eZHbvd3cCEe7FSyudU7M55rJmad2LM8BD8LbrfoHxWIsxbWQjGW+AV8ltafI+jRcZL9d/X +QPB/y59p32y/S9u0w7vtqXCpAAiTe8h6u4T5Dinib1kMIfClyd+ZJflEVc9G16ShVlVuEn +04UFLcEpzGdqKVqwTv7QJNPsvcz6K5kNQQPEmNMXy9k+FQ0bH8ADR6DfP6LVzS4CfTvvIc +jU/0Zfsu/boekAAAALbW9ua2V5QGNvcmU= +-----END OPENSSH PRIVATE KEY----- diff --git a/Secret/ssh_monkey/id_rsa.pub b/Secret/ssh_monkey/id_rsa.pub new file mode 100644 index 0000000..2909d30 --- /dev/null +++ b/Secret/ssh_monkey/id_rsa.pub @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeDdwFWXl6Jgc2lpLTOI7LP4kS1I/sDYQOe5WD+9pnTIQl2j+oIkvEdzUOnlVJal7GJzkoQndLztntITp+RoMpbtjYaD/qZPzx5cybK1gMzQke59FrriaZ8nt/kaCb1p2GiIfwwCiJwEw1rsgvBl/G2vlqXn/81bs4xcIsTveiEaobA9hLzYPLY/BZdEFxnPapy4s1NfUzISUycoidq0jC1uA1BeJT3yE7t19PUZkuTRgAEleBqF/zURFu3ug+e1R5ae4kT0zh8gcgtes0QGSJ8Z7i6HQrbQvHI7tncvfECHDG4GtLqvmXmEuUYOPyTfrFi1EL19e3aVjGIw6TZxVpaPurVn+8tm6KHS+ycHTuz8Z7y2+1mH3MjossYfvufusI5fGCS28ddIP0mU3g/VmY5VpjAc7CLPfRR0Rj9FhlgYTr7BkeVIXpE5poWtfrJ5uDpZMrhi/VLXpg7ihGC+ckH/+vrk7jJft+w1cXtaPJXcjD81az1PchR/sBpsixVMs= monkey@core diff --git a/Secret/vault-password b/Secret/vault-password new file mode 100644 index 0000000..39798aa --- /dev/null +++ b/Secret/vault-password @@ -0,0 +1 @@ +alitysortstagess diff --git a/ansible.cfg b/ansible.cfg new file mode 100644 index 0000000..49150ac --- /dev/null +++ b/ansible.cfg @@ -0,0 +1,5 @@ +[defaults] +interpreter_python=/usr/bin/python3 +vault_password_file=Secret/vault-password +inventory=hosts +roles_path=roles_t diff --git a/hosts b/hosts new file mode 100644 index 0000000..0d8927d --- /dev/null +++ b/hosts @@ -0,0 +1,18 @@ +all: + vars: + ansible_user: sysadm + ansible_ssh_extra_args: -i Secret/ssh_admin/id_rsa + hosts: + front: + ansible_host: 192.168.57.3 + ansible_become_password: "{{ become_front }}" + core: + ansible_host: 192.168.56.1 + ansible_become_password: "{{ become_core }}" + gate: + ansible_host: 192.168.56.2 + ansible_become_password: "{{ 
become_gate }}" + children: + campus: + hosts: + gate: diff --git a/inst b/inst new file mode 100755 index 0000000..6b6a003 --- /dev/null +++ b/inst @@ -0,0 +1,473 @@ +#!/usr/bin/perl -w +# +# DO NOT EDIT. This file was tangled from an institute.org file. + +use strict; +use IO::File; + +sub note_missing_file_p ($); +sub note_missing_directory_p ($); + +{ + my $missing = 0; + if (note_missing_file_p "ansible.cfg") { $missing += 1; } + if (note_missing_file_p "hosts") { $missing += 1; } + if (note_missing_directory_p "Secret") { $missing += 1; } + if (note_missing_file_p "Secret/become.yml") { $missing += 1; } + if (note_missing_directory_p "playbooks") { $missing += 1; } + if (note_missing_file_p "playbooks/site.yml") { $missing += 1; } + if (note_missing_directory_p "roles") { $missing += 1; } + if (note_missing_directory_p "public") { $missing += 1; } + if (note_missing_directory_p "private") { $missing += 1; } + + for my $filename (glob "private/*") { + my $perm = (stat $filename)[2]; + if ($perm & 077) { + print "$filename: not private\n"; + } + } + die "$missing missing files\n" if $missing != 0; +} + +sub note_missing_file_p ($) { + my ($filename) = @_; + if (! -f $filename) { + print "$filename: missing\n"; + return 1; + } else { + return 0; + } +} + +sub note_missing_directory_p ($) { + my ($dirname) = @_; + if (! -d $dirname) { + print "$dirname: missing\n"; + return 1; + } else { + return 0; + } +} + +sub mysystem (@) { + my $line = join (" ", @_); + print "$line\n"; + my $status = system $line; + die "status: $status\nCould not run $line: $!\n" if $status != 0; +} + +mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null"; + +our ($domain_name, $domain_priv, $front_addr, $gate_wifi_addr); +do "./private/vars.pl"; + +if (defined $ARGV[0] && $ARGV[0] eq "CA") { + die "usage: $0 CA" if @ARGV != 1; + die "Secret/CA/easyrsa: not an executable\n" + if ! 
-x "Secret/CA/easyrsa"; + die "Secret/CA/pki/: already exists\n" if -e "Secret/CA/pki"; + mysystem "cd Secret/CA; ./easyrsa init-pki"; + mysystem "cd Secret/CA; ./easyrsa build-ca nopass"; + # Common Name: small.example.org + + my $dom = $domain_name; + my $pvt = $domain_priv; + mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass"; + mysystem "cd Secret/CA; ./easyrsa build-server-full gate.$pvt nopass"; + mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass"; + mysystem "cd Secret/CA; ./easyrsa build-client-full core nopass"; + umask 077; + mysystem "openvpn --genkey --secret Secret/front-ta.key"; + mysystem "openvpn --genkey --secret Secret/gate-ta.key"; + mysystem "openssl dhparam -out Secret/front-dh2048.pem 2048"; + mysystem "openssl dhparam -out Secret/gate-dh2048.pem 2048"; + + mysystem "mkdir --mode=700 Secret/root.gnupg"; + mysystem ("gpg --homedir Secret/root.gnupg", + " --batch --quick-generate-key --passphrase ''", + " root\@core.$pvt"); + mysystem ("gpg --homedir Secret/root.gnupg", + " --export --armor --output root-pub.pem", + " root\@core.$pvt"); + chmod 0440, "root-pub.pem"; + mysystem ("gpg --homedir Secret/root.gnupg", + " --export-secret-key --armor --output root-sec.pem", + " root\@core.$pvt"); + chmod 0400, "root-sec.pem"; + + mysystem "mkdir Secret/ssh_admin"; + chmod 0700, "Secret/ssh_admin"; + mysystem ("ssh-keygen -q -t rsa" + ." -C A\\ Small\\ Institute\\ Administrator", + " -N '' -f Secret/ssh_admin/id_rsa"); + + mysystem "mkdir Secret/ssh_monkey"; + chmod 0700, "Secret/ssh_monkey"; + mysystem "echo 'HashKnownHosts no' >Secret/ssh_monkey/config"; + mysystem ("ssh-keygen -q -t rsa -C monkey\@core", + " -N '' -f Secret/ssh_monkey/id_rsa"); + + mysystem "mkdir Secret/ssh_front"; + chmod 0700, "Secret/ssh_front"; + mysystem "ssh-keygen -A -f Secret/ssh_front -C $dom"; + exit; +} + +if (defined $ARGV[0] && $ARGV[0] eq "config") { + die "Secret/CA/easyrsa: not executable\n" + if ! 
-x "Secret/CA/easyrsa"; + shift; + my $cmd = "ansible-playbook -e \@Secret/become.yml"; + if (defined $ARGV[0] && $ARGV[0] eq "-n") { + shift; + $cmd .= " --check --diff" + } + if (@ARGV == 0) { + ; + } elsif (defined $ARGV[0]) { + my $hosts = lc $ARGV[0]; + die "$hosts: contains illegal characters" + if $hosts !~ /^!?[a-z][-a-z0-9,!]+$/; + $cmd .= " -l $hosts"; + } else { + die "usage: $0 config [-n] [HOSTS]\n"; + } + $cmd .= " playbooks/site.yml"; + mysystem $cmd; + exit; +} + +use YAML::XS qw(LoadFile DumpFile); + +sub read_members_yaml () { + my $path; + $path = "private/members.yml"; + if (-e $path) { return LoadFile ($path); } + $path = "private/members-empty.yml"; + if (-e $path) { return LoadFile ($path); } + die "private/members.yml: not found\n"; +} + +sub write_members_yaml ($) { + my ($yaml) = @_; + my $old_umask = umask 077; + my $path = "private/members.yml"; + print "$path: "; STDOUT->flush; + eval { #DumpFile ("$path.tmp", $yaml); + dump_members_yaml ("$path.tmp", $yaml); + rename ("$path.tmp", $path) + or die "Could not rename $path.tmp: $!\n"; }; + my $err = $@; + umask $old_umask; + if ($err) { + print "ERROR\n"; + } else { + print "updated\n"; + } + die $err if $err; +} + +sub dump_members_yaml ($$) { + my ($pathname, $yaml) = @_; + my $O = new IO::File; + open ($O, ">$pathname") or die "Could not open $pathname: $!\n"; + print $O "---\n"; + if (keys %{$yaml->{"members"}}) { + print $O "members:\n"; + for my $user (sort keys %{$yaml->{"members"}}) { + print_member ($O, $yaml->{"members"}->{$user}); + } + print $O "usernames:\n"; + for my $user (sort keys %{$yaml->{"members"}}) { + print $O "- $user\n"; + } + } else { + print $O "members:\n"; + print $O "usernames: []\n"; + } + if (@{$yaml->{"revoked"}}) { + print $O "revoked:\n"; + for my $name (@{$yaml->{"revoked"}}) { + print $O "- $name\n"; + } + } else { + print $O "revoked: []\n"; + } + close $O or die "Could not close $pathname: $!\n"; +} + +sub print_member ($$) { + my ($out, $member) = 
@_; + print $out " ", $member->{"username"}, ":\n"; + print $out " username: ", $member->{"username"}, "\n"; + print $out " status: ", $member->{"status"}, "\n"; + if (@{$member->{"clients"} || []}) { + print $out " clients:\n"; + for my $name (@{$member->{"clients"} || []}) { + print $out " - ", $name, "\n"; + } + } else { + print $out " clients: []\n"; + } + print $out " password_front: ", $member->{"password_front"}, "\n"; + print $out " password_core: ", $member->{"password_core"}, "\n"; + if (defined $member->{"password_fetchmail"}) { + print $out " password_fetchmail: !vault |\n"; + for my $line (split /\n/, $member->{"password_fetchmail"}) { + print $out " $line\n"; + } + } + my @standard_keys = ( "username", "status", "clients", + "password_front", "password_core", + "password_fetchmail" ); + my @other_keys = (sort + grep { my $k = $_; + ! grep { $_ eq $k } @standard_keys } + keys %$member); + for my $key (@other_keys) { + print $out " $key: ", $member->{$key}, "\n"; + } +} + +sub valid_username (@); +sub shell_escape ($); +sub strip_vault ($); + +if (defined $ARGV[0] && $ARGV[0] eq "new") { + my $user = valid_username (@ARGV); + my $yaml = read_members_yaml (); + my $members = $yaml->{"members"}; + die "$user: already exists\n" if defined $members->{$user}; + + my $pass = `apg -n 1 -x 12 -m 12`; chomp $pass; + print "Initial password: $pass\n"; + my $epass = shell_escape $pass; + my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front; + my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core; + my $vault = strip_vault `ansible-vault encrypt_string "$epass"`; + mysystem ("ansible-playbook -e \@Secret/become.yml", + " playbooks/nextcloud-new.yml", + " -e user=$user", " -e pass=\"$epass\""); + $members->{$user} = { "username" => $user, + "status" => "current", + "password_front" => $front, + "password_core" => $core, + "password_fetchmail" => $vault }; + write_members_yaml + { "members" => $members, + "revoked" => $yaml->{"revoked"} }; + mysystem 
("ansible-playbook -e \@Secret/become.yml", + " -t accounts -l core,front playbooks/site.yml"); + exit; +} + +sub valid_username (@) { + my $sub = $_[0]; + die "usage: $0 $sub USER\n" + if @_ != 2; + my $username = lc $_[1]; + die "$username: does not begin with an alphabetic character\n" + if $username !~ /^[a-z]/; + die "$username: contains non-alphanumeric character(s)\n" + if $username !~ /^[a-z0-9]+$/; + return $username; +} + +sub shell_escape ($) { + my ($string) = @_; + my $result = "$string"; + $result =~ s/([\$`"\\ ])/\\$1/g; + return ($result); +} + +sub strip_vault ($) { + my ($string) = @_; + die "Unexpected result from ansible-vault: $string\n" + if $string !~ /^ *!vault [|]/; + my @lines = split /^ */m, $string; + return (join "", @lines[1..$#lines]); +} + +use MIME::Base64; + +if (defined $ARGV[0] && $ARGV[0] eq "pass") { + my $I = new IO::File; + open $I, "gpg --homedir Secret/root.gnupg --quiet --decrypt |" + or die "Error running gpg: $!\n"; + my $msg_yaml = LoadFile ($I); + close $I or die "Error closing pipe from gpg: $!\n"; + + my $user = $msg_yaml->{"username"}; + die "Could not find a username in the decrypted input.\n" + if ! defined $user; + my $pass64 = $msg_yaml->{"password"}; + die "Could not find a password in the decrypted input.\n" + if ! defined $pass64; + + my $mem_yaml = read_members_yaml (); + my $members = $mem_yaml->{"members"}; + my $member = $members->{$user}; + die "No such member: $user\n" if ! 
defined $member; + + my $pass = decode_base64 $pass64; + my $epass = shell_escape $pass; + my $front = `mkpasswd -m sha-512 "$epass"`; chomp $front; + my $core = `mkpasswd -m sha-512 "$epass"`; chomp $core; + my $vault = strip_vault `ansible-vault encrypt_string "$epass"`; + $member->{"password_front"} = $front; + $member->{"password_core"} = $core; + $member->{"password_fetchmail"} = $vault; + + mysystem ("ansible-playbook -e \@Secret/become.yml", + " playbooks/nextcloud-pass.yml", + " -e user=$user", " -e \"pass=$epass\""); + write_members_yaml $mem_yaml; + mysystem ("ansible-playbook -e \@Secret/become.yml", + " -t accounts playbooks/site.yml"); + my $O = new IO::File; + open ($O, "| sendmail $user\@$domain_priv") + or die "Could not pipe to sendmail: $!\n"; + print $O "From: +To: <$user> +Subject: Password change. + +Your new password has been distributed to the servers. + +As always: please email root with any questions or concerns.\n"; + close $O or die "pipe to sendmail failed: $!\n"; + exit; +} + +if (defined $ARGV[0] && $ARGV[0] eq "old") { + my $user = valid_username (@ARGV); + my $yaml = read_members_yaml (); + my $members = $yaml->{"members"}; + my $member = $members->{$user}; + die "$user: does not exist\n" if ! defined $member; + + mysystem ("ansible-playbook -e \@Secret/become.yml", + " playbooks/nextcloud-old.yml -e user=$user"); + $member->{"status"} = "former"; + write_members_yaml { "members" => $members, + "revoked" => [ sort @{$member->{"clients"}}, + @{$yaml->{"revoked"}} ] }; + mysystem ("ansible-playbook -e \@Secret/become.yml", + " -t accounts playbooks/site.yml"); + exit; +} + +sub write_template ($$$$$$$$$); +sub read_file ($); +sub add_client ($$$); + +if (defined $ARGV[0] && $ARGV[0] eq "client") { + die "Secret/CA/easyrsa: not executable\n" if ! 
-x "Secret/CA/easyrsa"; + my $type = $ARGV[1]||""; + my $name = $ARGV[2]||""; + my $user = $ARGV[3]||""; + if ($type eq "campus") { + die "usage: $0 client campus NAME\n" if @ARGV != 3; + die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/; + } elsif ($type eq "android" || $type eq "debian") { + die "usage: $0 client $type NAME USER\n" if @ARGV != 4; + die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/; + } else { + die "usage: $0 client [debian|android|campus]\n"; + } + my $yaml; + my $member; + if ($type ne "campus") { + $yaml = read_members_yaml; + my $members = $yaml->{"members"}; + if (@ARGV == 4) { + $member = $members->{$user}; + die "$user: does not exist\n" if ! defined $member; + } + if (defined $member) { + my ($owner) = grep { grep { $_ eq $name } @{$_->{"clients"}} } + values %{$members}; + die "$name: owned by $owner->{username}\n" + if defined $owner && $owner->{username} ne $member->{username}; + } + } + + die "Secret/CA: no certificate authority found\n" + if ! -d "Secret/CA/pki/issued"; + + if (! -f "Secret/CA/pki/issued/$name.crt") { + mysystem "cd Secret/CA; ./easyrsa build-client-full $name nopass"; + } else { + print "Using existing key/cert...\n"; + } + + if ($type ne "campus") { + my $clients = $member->{"clients"}; + if (! grep { $_ eq $name } @$clients) { + $member->{"clients"} = [ $name, @$clients ]; + write_members_yaml $yaml; + } + } + + umask 077; + my $DEV = $type eq "android" ? "tun" : "ovpn"; + my $CA = read_file "Secret/CA/pki/ca.crt"; + my $CRT = read_file "Secret/CA/pki/issued/$name.crt"; + my $KEY = read_file "Secret/CA/pki/private/$name.key"; + my $UP = $type eq "android" ? 
"" : " +script-security 2 +up /etc/openvpn/update-systemd-resolved +up-restart"; + + if ($type ne "campus") { + my $TA = read_file "Secret/front-ta.key"; + write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $front_addr, + $domain_name, "public.ovpn"); + print "Wrote public VPN configuration to public.ovpn.\n"; + } + my $TA = read_file "Secret/gate-ta.key"; + write_template ($DEV,$UP,$CA,$CRT,$KEY,$TA, $gate_wifi_addr, + "gate.$domain_priv", "campus.ovpn"); + print "Wrote campus VPN configuration to campus.ovpn.\n"; + + exit; +} + +sub write_template ($$$$$$$$$) { + my ($DEV,$UP,$CA,$CRT,$KEY,$TA,$ADDR,$NAME,$FILE) = @_; + my $O = new IO::File; + open ($O, ">$FILE.tmp") or die "Could not open $FILE.tmp: $!\n"; + print $O "client +dev-type tun +dev $DEV +remote $ADDR +nobind +user nobody +group nogroup +persist-key +persist-tun +remote-cert-tls server +verify-x509-name $NAME name +cipher AES-256-GCM +auth SHA256$UP +verb 3 +key-direction 1 +\n$CA +\n$CRT +\n$KEY +\n$TA\n"; + close $O or die "Could not close $FILE.tmp: $!\n"; + rename ("$FILE.tmp", $FILE) + or die "Could not rename $FILE.tmp: $!\n"; +} + +sub read_file ($) { + my ($path) = @_; + my $I = new IO::File; + open ($I, "<$path") or die "$path: could not read: $!\n"; + local $/; + my $c = <$I>; + close $I or die "$path: could not close: $!\n"; + return $c; +} + +die "usage: $0 [CA|config|new|pass|old|client] ...\n"; diff --git a/jquery.js b/jquery.js new file mode 100644 index 0000000..7556941 --- /dev/null +++ b/jquery.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector | (c) OpenJS Foundation and other contributors | jquery.org/license */ +[minified jQuery 3.6.0 body not recoverable from this extraction; the following diff header and the opening of the next file, a Nextcloud backup script, were lost with it] /dev/null ) + + echo "Dumping nextcloud database." + ( cd /Nextcloud/ + umask 07 + BAK=`date +"%Y%m%d"`-dbbackup.bak.gz + CNF=/Nextcloud/dbbackup.cnf + mysqldump --defaults-file=$CNF nextcloud | gzip > $BAK + chmod 440 $BAK ) + fi + +} + +function finish () { + + if [ ! $presync ] + then + echo "Putting nextcloud back into service." + ( cd /Nextcloud/nextcloud/ + sudo -u www-data php occ maintenance:mode --off &>/dev/null ) + fi + + if [ $mounted ] + then + echo "Unmounting /backup/." + umount /backup + cryptsetup luksClose backup + mounted= + fi + echo "Done." 
+ echo "The backup device can be safely disconnected." + +} + +start + +for D in /home /[A-Z]*; do + echo "Updating /backup$D/." + ionice --class Idle --ignore \ + rsync -av --delete --exclude=.NoBackups $D/ /backup$D/ +done + +finish diff --git a/private/core-dhcpd.conf b/private/core-dhcpd.conf new file mode 100644 index 0000000..6ff58eb --- /dev/null +++ b/private/core-dhcpd.conf @@ -0,0 +1,28 @@ +option domain-name "small.private"; +option domain-name-servers 192.168.56.1; + +default-lease-time 3600; +max-lease-time 7200; + +ddns-update-style none; + +authoritative; + +log-facility daemon; + +option rfc3442-routes code 121 = array of integer 8; + +subnet 192.168.56.0 netmask 255.255.255.0 { + option subnet-mask 255.255.255.0; + option broadcast-address 192.168.56.255; + option routers 192.168.56.2; + option ntp-servers 192.168.56.1; + option rfc3442-routes 24, 10,177,86, 192,168,56,1, 0, 192,168,56,2; +} + +host core { + hardware ethernet 08:00:27:45:3b:a2; fixed-address 192.168.56.1; } +host gate { + hardware ethernet 08:00:27:e0:79:ab; fixed-address 192.168.56.2; } +host server { + hardware ethernet 08:00:27:f3:41:66; fixed-address 192.168.56.3; } diff --git a/private/db.campus_vpn b/private/db.campus_vpn new file mode 100644 index 0000000..edc0ab0 --- /dev/null +++ b/private/db.campus_vpn @@ -0,0 +1,14 @@ +; +; BIND reverse data file for a small institute's campus VPN. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +1 IN PTR gate-c.small.private. diff --git a/private/db.domain b/private/db.domain new file mode 100644 index 0000000..d830023 --- /dev/null +++ b/private/db.domain @@ -0,0 +1,24 @@ +; +; BIND data file for a small institute's PRIVATE domain names. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. 
( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +mail IN CNAME core.small.private. +smtp IN CNAME core.small.private. +ns IN CNAME core.small.private. +www IN CNAME core.small.private. +test IN CNAME core.small.private. +live IN CNAME core.small.private. +ntp IN CNAME core.small.private. +sip IN A 10.177.86.1 +; +core IN A 192.168.56.1 +gate IN A 192.168.56.2 diff --git a/private/db.private b/private/db.private new file mode 100644 index 0000000..f8758d1 --- /dev/null +++ b/private/db.private @@ -0,0 +1,15 @@ +; +; BIND reverse data file for a small institute's private Ethernet. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +1 IN PTR core.small.private. +2 IN PTR gate.small.private. diff --git a/private/db.public_vpn b/private/db.public_vpn new file mode 100644 index 0000000..3a6aedf --- /dev/null +++ b/private/db.public_vpn @@ -0,0 +1,15 @@ +; +; BIND reverse data file for a small institute's public VPN. +; +$TTL 604800 +@ IN SOA small.private. root.small.private. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL +; +@ IN NS core.small.private. +$TTL 7200 +1 IN PTR front-p.small.private. +2 IN PTR core-p.small.private. 
diff --git a/private/members-empty.yml b/private/members-empty.yml new file mode 100644 index 0000000..60e422a --- /dev/null +++ b/private/members-empty.yml @@ -0,0 +1,4 @@ +--- +members: +usernames: [] +revoked: [] diff --git a/private/vars.yml b/private/vars.yml new file mode 100644 index 0000000..f67b971 --- /dev/null +++ b/private/vars.yml @@ -0,0 +1,50 @@ +--- +private_net_cidr: 192.168.56.0/24 +public_vpn_net_cidr: 10.177.86.0/24 +campus_vpn_net_cidr: 10.84.138.0/24 +gate_wifi_net_cidr: 192.168.57.0/24 + +private_net: "{{ private_net_cidr | ipaddr('network') }}" +private_net_mask: "{{ private_net_cidr | ipaddr('netmask') }}" +private_net_and_mask: "{{ private_net }} {{ private_net_mask }}" +public_vpn_net: "{{ public_vpn_net_cidr | ipaddr('network') }}" +public_vpn_net_mask: "{{ public_vpn_net_cidr | ipaddr('netmask') }}" +public_vpn_net_and_mask: + "{{ public_vpn_net }} {{ public_vpn_net_mask }}" +campus_vpn_net: "{{ campus_vpn_net_cidr | ipaddr('network') }}" +campus_vpn_net_mask: "{{ campus_vpn_net_cidr | ipaddr('netmask') }}" +campus_vpn_net_and_mask: + "{{ campus_vpn_net }} {{ campus_vpn_net_mask }}" +gate_wifi_net: "{{ gate_wifi_net_cidr | ipaddr('network') }}" +gate_wifi_net_mask: "{{ gate_wifi_net_cidr | ipaddr('netmask') }}" +gate_wifi_net_and_mask: + "{{ gate_wifi_net }} {{ gate_wifi_net_mask }}" +gate_wifi_broadcast: "{{ gate_wifi_net_cidr | ipaddr('broadcast') }}" + +core_addr_cidr: "{{ private_net_cidr | ipaddr('1') }}" +gate_addr_cidr: "{{ private_net_cidr | ipaddr('2') }}" +gate_wifi_addr_cidr: "{{ gate_wifi_net_cidr | ipaddr('1') }}" +wifi_wan_addr_cidr: "{{ gate_wifi_net_cidr | ipaddr('2') }}" +front_private_addr_cidr: "{{ public_vpn_net_cidr | ipaddr('1') }}" + +core_addr: "{{ core_addr_cidr | ipaddr('address') }}" +gate_addr: "{{ gate_addr_cidr | ipaddr('address') }}" +gate_wifi_addr: "{{ gate_wifi_addr_cidr | ipaddr('address') }}" +wifi_wan_addr: "{{ wifi_wan_addr_cidr | ipaddr('address') }}" +front_private_addr: + "{{ 
front_private_addr_cidr | ipaddr('address') }}" + +nextcloud_dbpass: ippAgmaygyob + +nextcloud_region: US + +gate_lan_mac: ff:ff:ff:ff:ff:ff +gate_wifi_mac: ff:ff:ff:ff:ff:ff +gate_isp_mac: ff:ff:ff:ff:ff:ff + +wifi_wan_mac: 94:83:c4:19:7d:57 +wifi_wan_name: campus-wifi-ap + +membership_rolls: +- "../private/members.yml" +- "../private/members-empty.yml" diff --git a/private/webupdate b/private/webupdate new file mode 100644 index 0000000..55fc456 --- /dev/null +++ b/private/webupdate @@ -0,0 +1,10 @@ +#!/bin/bash -e +# +# DO NOT EDIT. This file was tangled from institute.org. + +cd /WWW/live/ + +rsync -avz --delete --chmod=g-w \ + --filter='exclude *~' \ + --filter='exclude .git*' \ + ./ {{ domain_name }}:/home/www/ diff --git a/public/vars.yml b/public/vars.yml new file mode 100644 index 0000000..3700461 --- /dev/null +++ b/public/vars.yml @@ -0,0 +1,7 @@ +--- +domain_name: small.example.org +domain_priv: small.private + +front_addr: 192.168.15.5 + +full_name: Small Institute LLC diff --git a/roles_t/campus/files/nrpe.cfg b/roles_t/campus/files/nrpe.cfg new file mode 100644 index 0000000..192e571 --- /dev/null +++ b/roles_t/campus/files/nrpe.cfg @@ -0,0 +1,5 @@ +command[inst_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p / + +command[inst_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10% + +command[inst_sensors]=/usr/local/sbin/inst_sensors diff --git a/roles_t/campus/handlers/main.yml b/roles_t/campus/handlers/main.yml new file mode 100644 index 0000000..adb4f4e --- /dev/null +++ b/roles_t/campus/handlers/main.yml @@ -0,0 +1,36 @@ +--- +- name: Update hostname. + become: yes + command: hostname -F /etc/hostname + +- name: Reload Systemd. + become: yes + command: systemctl daemon-reload + +- name: Restart Systemd resolved. + become: yes + systemd: + service: systemd-resolved + state: restarted + +- name: Restart systemd-timesyncd. + become: yes + systemd: + service: systemd-timesyncd + state: restarted + +- name: Update CAs. 
+ become: yes + command: update-ca-certificates + +- name: Restart Postfix. + become: yes + systemd: + service: postfix + state: restarted + +- name: Reload NRPE server. + become: yes + systemd: + service: nagios-nrpe-server + state: reloaded diff --git a/roles_t/campus/tasks/main.yml b/roles_t/campus/tasks/main.yml new file mode 100644 index 0000000..9773336 --- /dev/null +++ b/roles_t/campus/tasks/main.yml @@ -0,0 +1,174 @@ +--- +- name: Include public variables. + include_vars: ../public/vars.yml +- name: Include private variables. + include_vars: ../private/vars.yml + +- name: Configure hostname. + become: yes + copy: + content: "{{ item.content }}" + dest: "{{ item.file }}" + loop: + - { file: /etc/hostname, + content: "{{ inventory_hostname }}" } + - { file: /etc/mailname, + content: "{{ inventory_hostname }}.{{ domain_priv }}" } + when: inventory_hostname != ansible_hostname + notify: Update hostname. + +- name: Install systemd-resolved. + become: yes + apt: pkg=systemd-resolved + when: + - ansible_distribution == 'Debian' + - 11 < ansible_distribution_major_version|int + +- name: Enable/Start systemd-networkd. + become: yes + systemd: + service: systemd-networkd + enabled: yes + state: started + +- name: Enable/Start systemd-resolved. + become: yes + systemd: + service: systemd-resolved + enabled: yes + state: started + +- name: Link /etc/resolv.conf. + become: yes + file: + path: /etc/resolv.conf + src: /run/systemd/resolve/resolv.conf + state: link + force: yes + when: + - ansible_distribution == 'Debian' + - 12 > ansible_distribution_major_version|int + +- name: Configure resolved. + become: yes + lineinfile: + path: /etc/systemd/resolved.conf + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + loop: + - { regexp: '^ *DNS *=', line: "DNS={{ core_addr }}" } + - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" } + - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" } + notify: + - Reload Systemd. + - Restart Systemd resolved. 
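Many tasks in these playbooks edit key=value settings (resolved.conf, timesyncd.conf, Postfix's main.cf) with `lineinfile`'s replace-or-append semantics. A minimal Python sketch of that idiom, for illustration only (it is not the real Ansible module; the sample values come from `public/vars.yml` and `private/vars.yml`):

```python
import re

def line_in_file(lines, regexp, line):
    """Replace the first line matching regexp, else append --
    a rough model of Ansible lineinfile's idempotent edit."""
    pat = re.compile(regexp)
    for i, old in enumerate(lines):
        if pat.search(old):
            lines[i] = line
            return lines
    lines.append(line)
    return lines

# Mimic the resolved.conf edits above; the commented defaults do
# not match the anchored regexps, so new settings are appended.
conf = ["#DNS=", "#FallbackDNS="]
for regexp, line in [('^ *DNS *=', 'DNS=192.168.56.1'),
                     ('^ *FallbackDNS *=', 'FallbackDNS=8.8.8.8'),
                     ('^ *Domains *=', 'Domains=small.private')]:
    line_in_file(conf, regexp, line)
print(conf)
```

Re-running the same edits replaces the now-matching lines in place, which is what makes repeated plays converge instead of appending duplicates.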
+ +- name: Configure timesyncd. + become: yes + lineinfile: + path: /etc/systemd/timesyncd.conf + line: NTP=ntp.{{ domain_priv }} + notify: Restart systemd-timesyncd. + +- name: Add {{ ansible_user }} to system groups. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: root,adm + +- name: Trust the institute CA. + become: yes + copy: + src: ../Secret/CA/pki/ca.crt + dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt + mode: u=r,g=r,o=r + owner: root + group: root + notify: Update CAs. + +- name: Install basic software. + become: yes + apt: pkg=unattended-upgrades + +- name: Install Postfix. + become: yes + apt: pkg=postfix + +- name: Configure Postfix. + become: yes + lineinfile: + path: /etc/postfix/main.cf + regexp: "^ *{{ item.p }} *=" + line: "{{ item.p }} = {{ item.v }}" + loop: + - p: smtpd_relay_restrictions + v: permit_mynetworks reject_unauth_destination + - { p: message_size_limit, v: 104857600 } + - { p: delay_warning_time, v: 1h } + - { p: maximal_queue_lifetime, v: 4h } + - { p: bounce_queue_lifetime, v: 4h } + - { p: home_mailbox, v: Maildir/ } + - { p: myhostname, + v: "{{ inventory_hostname }}.{{ domain_priv }}" } + - { p: mydestination, + v: "{{ postfix_mydestination | default('') }}" } + - { p: relayhost, v: "[smtp.{{ domain_priv }}]" } + - { p: inet_interfaces, v: loopback-only } + notify: Restart Postfix. + +- name: Enable/Start Postfix. + become: yes + systemd: + service: postfix + enabled: yes + state: started + +- name: Hard-wire important IP addresses. + become: yes + lineinfile: + path: /etc/hosts + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + insertafter: EOF + vars: + name: "{{ inventory_hostname }}" + loop: + - regexp: "^{{ front_addr }}[ ].*" + line: "{{ front_addr }} {{ domain_name }}" + - regexp: "^127.0.1.1[ ].*" + line: "127.0.1.1 {{ name }}.localdomain {{ name }}" + +- name: Install NRPE. 
+ become: yes + apt: + pkg: [ nagios-nrpe-server, lm-sensors ] + +- name: Install inst_sensors NAGIOS plugin. + become: yes + copy: + src: ../core/files/inst_sensors + dest: /usr/local/sbin/inst_sensors + mode: u=rwx,g=rx,o=rx + +- name: Configure NRPE server. + become: yes + copy: + content: | + allowed_hosts=127.0.0.1,::1,{{ core_addr }} + dest: /etc/nagios/nrpe_local.cfg + notify: Reload NRPE server. + +- name: Configure NRPE commands. + become: yes + copy: + src: nrpe.cfg + dest: /etc/nagios/nrpe.d/institute.cfg + notify: Reload NRPE server. + +- name: Enable/Start NRPE server. + become: yes + systemd: + service: nagios-nrpe-server + enabled: yes + state: started diff --git a/roles_t/core/files/inst_sensors b/roles_t/core/files/inst_sensors new file mode 100644 index 0000000..1bca115 --- /dev/null +++ b/roles_t/core/files/inst_sensors @@ -0,0 +1,76 @@ +#!/bin/sh + +PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" +export PATH +PROGNAME=`basename $0` +REVISION="2.3.1" + +. /usr/lib/nagios/plugins/utils.sh + +print_usage() { + echo "Usage: $PROGNAME" [--ignore-fault] +} + +print_help() { + print_revision $PROGNAME $REVISION + echo "" + print_usage + echo "" + echo "This plugin checks hardware status using the lm_sensors package." + echo "" + support + exit $STATE_OK +} + +brief_data() { + echo "$1" | sed -n -E -e ' + /^ *Core [0-9]+:/ { s/^ *Core [0-9]+: +([-+]?[0-9.]+).*/ \1/; H } + $ { x; s/\n//g; p }' +} + +case "$1" in + --help) + print_help + exit $STATE_OK + ;; + -h) + print_help + exit $STATE_OK + ;; + --version) + print_revision $PROGNAME $REVISION + exit $STATE_OK + ;; + -V) + print_revision $PROGNAME $REVISION + exit $STATE_OK + ;; + *) + sensordata=`sensors 2>&1` + status=$? 
+        if test ${status} -eq 127; then
+            text="SENSORS UNKNOWN - command not found"
+            text="$text (did you install lm-sensors?)"
+            exit=$STATE_UNKNOWN
+        elif test ${status} -ne 0; then
+            text="WARNING - sensors returned state $status"
+            exit=$STATE_WARNING
+        elif echo ${sensordata} | egrep ALARM > /dev/null; then
+            text="SENSOR CRITICAL -`brief_data "${sensordata}"`"
+            exit=$STATE_CRITICAL
+        elif echo ${sensordata} | egrep FAULT > /dev/null \
+             && test "$1" != "-i" -a "$1" != "--ignore-fault"; then
+            text="SENSOR UNKNOWN - Sensor reported fault"
+            exit=$STATE_UNKNOWN
+        else
+            text="SENSORS OK -`brief_data "${sensordata}"`"
+            exit=$STATE_OK
+        fi
+
+        echo "$text"
+        if test "$1" = "-v" -o "$1" = "--verbose"; then
+            echo ${sensordata}
+        fi
+        exit $exit
+        ;;
+esac
diff --git a/roles_t/core/files/nextcloud.conf b/roles_t/core/files/nextcloud.conf
new file mode 100644
index 0000000..0a58a5c
--- /dev/null
+++ b/roles_t/core/files/nextcloud.conf
@@ -0,0 +1,31 @@
+Alias /nextcloud "/var/www/nextcloud/"
+
+<Directory /var/www/nextcloud/>
+    Require all granted
+    AllowOverride All
+    Options FollowSymlinks MultiViews
+
+    <IfModule mod_dav.c>
+        Dav off
+    </IfModule>
+</Directory>
+
+<IfModule mod_rewrite.c>
+    RewriteEngine on
+    # LogLevel alert rewrite:trace3
+    RewriteRule ^\.well-known/carddav \
+        /nextcloud/remote.php/dav [R=301,L]
+    RewriteRule ^\.well-known/caldav \
+        /nextcloud/remote.php/dav [R=301,L]
+    RewriteRule ^\.well-known/webfinger \
+        /nextcloud/index.php/.well-known/webfinger [R=301,L]
+    RewriteRule ^\.well-known/nodeinfo \
+        /nextcloud/index.php/.well-known/nodeinfo [R=301,L]
+</IfModule>
+
+<IfModule mod_headers.c>
+    Header always set \
+        Strict-Transport-Security "max-age=15552000; includeSubDomains"
+</IfModule>
diff --git a/roles_t/core/handlers/main.yml b/roles_t/core/handlers/main.yml
new file mode 100644
index 0000000..418014a
--- /dev/null
+++ b/roles_t/core/handlers/main.yml
@@ -0,0 +1,79 @@
+---
+- name: Update hostname.
+  become: yes
+  command: hostname -F /etc/hostname
+
+- name: Reload Systemd.
+  become: yes
+  command: systemctl daemon-reload
+
+- name: Restart Systemd resolved.
+ become: yes + systemd: + service: systemd-resolved + state: restarted + +- name: Apply netplan. + become: yes + command: netplan apply + +- name: Restart DHCP server. + become: yes + systemd: + service: isc-dhcp-server + state: restarted + +- name: Reload BIND9. + become: yes + systemd: + service: bind9 + state: reloaded + +- name: Update CAs. + become: yes + command: update-ca-certificates + +- name: Restart Postfix. + become: yes + systemd: + service: postfix + state: restarted + +- name: Postmap transport. + become: yes + command: + chdir: /etc/postfix/ + cmd: postmap transport + notify: Restart Postfix. + +- name: New aliases. + become: yes + command: newaliases + +- name: Restart Dovecot. + become: yes + systemd: + service: dovecot + state: restarted + +- name: Restart Apache2. + become: yes + systemd: + service: apache2 + state: restarted + +- name: Restart OpenVPN. + become: yes + systemd: + service: openvpn@front + state: restarted + +- name: Reload NAGIOS4. + become: yes + systemd: + service: nagios4 + state: reloaded + +- name: Import root PGP key. + become: no + command: gpg --import ~/.gnupg-root-pub.pem diff --git a/roles_t/core/tasks/main.yml b/roles_t/core/tasks/main.yml new file mode 100644 index 0000000..bbf3053 --- /dev/null +++ b/roles_t/core/tasks/main.yml @@ -0,0 +1,977 @@ +--- +- name: Include public variables. + include_vars: ../public/vars.yml + tags: accounts +- name: Include private variables. + include_vars: ../private/vars.yml + tags: accounts +- name: Include members. + include_vars: "{{ lookup('first_found', membership_rolls) }}" + tags: accounts + +- name: Configure hostname. + become: yes + copy: + content: "{{ item.name }}\n" + dest: "{{ item.file }}" + loop: + - { name: "core.{{ domain_priv }}", file: /etc/mailname } + - { name: "{{ inventory_hostname }}", file: /etc/hostname } + notify: Update hostname. + +- name: Install systemd-resolved. 
+ become: yes + apt: pkg=systemd-resolved + when: + - ansible_distribution == 'Debian' + - 11 < ansible_distribution_major_version|int + +- name: Enable/Start systemd-networkd. + become: yes + systemd: + service: systemd-networkd + enabled: yes + state: started + +- name: Enable/Start systemd-resolved. + become: yes + systemd: + service: systemd-resolved + enabled: yes + state: started + +- name: Link /etc/resolv.conf. + become: yes + file: + path: /etc/resolv.conf + src: /run/systemd/resolve/resolv.conf + state: link + force: yes + when: + - ansible_distribution == 'Debian' + - 12 > ansible_distribution_major_version|int + +- name: Configure resolved. + become: yes + lineinfile: + path: /etc/systemd/resolved.conf + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + loop: + - { regexp: '^ *DNS *=', line: "DNS=127.0.0.1" } + - { regexp: '^ *FallbackDNS *=', line: "FallbackDNS=8.8.8.8" } + - { regexp: '^ *Domains *=', line: "Domains={{ domain_priv }}" } + - { regexp: '^ *Cache *=', line: "Cache=no" } + - { regexp: '^ *DNSStubListener *=', line: "DNSStubListener=no" } + notify: + - Reload Systemd. + - Restart Systemd resolved. + +- name: Install netplan. + become: yes + apt: pkg=netplan.io + +- name: Configure netplan. + become: yes + copy: + content: | + network: + renderer: networkd + ethernets: + {{ ansible_default_ipv4.interface }}: + dhcp4: false + addresses: [ {{ core_addr_cidr }} ] + nameservers: + search: [ {{ domain_priv }} ] + addresses: [ {{ core_addr }} ] + gateway4: {{ gate_addr }} + dest: /etc/netplan/60-core.yaml + mode: u=rw,g=r,o= + notify: Apply netplan. + +- name: Install DHCP server. + become: yes + apt: pkg=isc-dhcp-server + +- name: Configure DHCP interface. + become: yes + lineinfile: + path: /etc/default/isc-dhcp-server + line: INTERFACESv4="{{ ansible_default_ipv4.interface }}" + regexp: ^INTERFACESv4= + notify: Restart DHCP server. + +- name: Configure DHCP subnet. 
+ become: yes + copy: + src: ../private/core-dhcpd.conf + dest: /etc/dhcp/dhcpd.conf + notify: Restart DHCP server. + +- name: Enable/Start DHCP server. + become: yes + systemd: + service: isc-dhcp-server + enabled: yes + state: started + +- name: Install BIND9. + become: yes + apt: pkg=bind9 + +- name: Configure BIND9 with named.conf.options. + become: yes + copy: + content: | + acl "trusted" { + {{ private_net_cidr }}; + {{ public_vpn_net_cidr }}; + {{ campus_vpn_net_cidr }}; + {{ gate_wifi_net_cidr }}; + localhost; + }; + + options { + directory "/var/cache/bind"; + + forwarders { + 8.8.4.4; + 8.8.8.8; + }; + + allow-query { any; }; + allow-recursion { trusted; }; + allow-query-cache { trusted; }; + + //============================================================ + // If BIND logs error messages about the root key being + // expired, you will need to update your keys. + // See https://www.isc.org/bind-keys + //============================================================ + //dnssec-validation auto; + // If Secure DNS is too much of a headache... + dnssec-enable no; + dnssec-validation no; + + auth-nxdomain no; # conform to RFC1035 + //listen-on-v6 { any; }; + listen-on { {{ core_addr }}; }; + }; + dest: /etc/bind/named.conf.options + notify: Reload BIND9. + +- name: Configure BIND9 with named.conf.local. + become: yes + copy: + content: | + include "/etc/bind/zones.rfc1918"; + + zone "{{ domain_priv }}." { + type master; + file "/etc/bind/db.domain"; + }; + + zone "{{ private_net_cidr | ipaddr('revdns') + | regex_replace('^0\.','') }}" { + type master; + file "/etc/bind/db.private"; + }; + + zone "{{ public_vpn_net_cidr | ipaddr('revdns') + | regex_replace('^0\.','') }}" { + type master; + file "/etc/bind/db.public_vpn"; + }; + + zone "{{ campus_vpn_net_cidr | ipaddr('revdns') + | regex_replace('^0\.','') }}" { + type master; + file "/etc/bind/db.campus_vpn"; + }; + dest: /etc/bind/named.conf.local + notify: Reload BIND9. + +- name: Install BIND9 zonefiles. 
+ become: yes + copy: + src: ../private/db.{{ item }} + dest: /etc/bind/db.{{ item }} + loop: [ domain, private, public_vpn, campus_vpn ] + notify: Reload BIND9. + +- name: Enable/Start BIND9. + become: yes + systemd: + service: bind9 + enabled: yes + state: started + +- name: Add {{ ansible_user }} to system groups. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: root,adm + +- name: Create monkey. + become: yes + user: + name: monkey + system: yes + append: yes + groups: staff + +- name: Add {{ ansible_user }} to staff groups. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: monkey,staff + +- name: Create /home/monkey/.ssh/. + become: yes + file: + path: /home/monkey/.ssh + state: directory + mode: u=rwx,g=,o= + owner: monkey + group: monkey + +- name: Configure monkey@core. + become: yes + copy: + src: ../Secret/ssh_monkey/{{ item.name }} + dest: /home/monkey/.ssh/{{ item.name }} + mode: "{{ item.mode }}" + owner: monkey + group: monkey + loop: + - { name: config, mode: "u=rw,g=r,o=" } + - { name: id_rsa.pub, mode: "u=rw,g=r,o=r" } + - { name: id_rsa, mode: "u=rw,g=,o=" } + +- name: Configure Monkey SSH known hosts. + become: yes + vars: + pubkeypath: ../Secret/ssh_front/etc/ssh + pubkeyfile: "{{ pubkeypath }}/ssh_host_ecdsa_key.pub" + pubkey: "{{ lookup('file', pubkeyfile) }}" + lineinfile: + regexp: "^{{ domain_name }}" + line: "{{ domain_name }},{{ front_addr }} {{ pubkey }}" + path: /home/monkey/.ssh/known_hosts + create: yes + owner: monkey + group: monkey + mode: "u=rw,g=r,o=" + +- name: Install basic software. + become: yes + apt: pkg=unattended-upgrades + +- name: Install expect. + become: yes + apt: pkg=expect + +- name: Create user accounts. + become: yes + user: + name: "{{ item }}" + password: "{{ members[item].password_core }}" + update_password: always + home: /home/{{ item }} + loop: "{{ usernames }}" + when: members[item].status == 'current' + tags: accounts + +- name: Disable former users. 
+ become: yes + user: + name: "{{ item }}" + password: "!" + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts + +- name: Revoke former user authorized_keys. + become: yes + file: + path: /home/{{ item }}/.ssh/authorized_keys + state: absent + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts + +- name: Trust the institute CA. + become: yes + copy: + src: ../Secret/CA/pki/ca.crt + dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt + mode: u=r,g=r,o=r + owner: root + group: root + notify: Update CAs. + +- name: Install server certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/server.{{ item.typ }} + mode: "{{ item.mode }}" + loop: + - { path: "issued/core.{{ domain_priv }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/core.{{ domain_priv }}", typ: key, + mode: "u=r,g=,o=" } + notify: + - Restart Postfix. + - Restart Dovecot. + - Restart OpenVPN. + +- name: Install NTP. + become: yes + apt: pkg=ntp + +- name: Install Postfix. + become: yes + apt: pkg=postfix + +- name: Configure Postfix. 
+ become: yes + lineinfile: + path: /etc/postfix/main.cf + regexp: "^ *{{ item.p }} *=" + line: "{{ item.p }} = {{ item.v }}" + loop: + - p: smtpd_relay_restrictions + v: permit_mynetworks reject_unauth_destination + - { p: smtpd_tls_security_level, v: none } + - { p: smtp_tls_security_level, v: none } + - { p: message_size_limit, v: 104857600 } + - { p: delay_warning_time, v: 1h } + - { p: maximal_queue_lifetime, v: 4h } + - { p: bounce_queue_lifetime, v: 4h } + - { p: home_mailbox, v: Maildir/ } + - p: mynetworks + v: >- + {{ private_net_cidr }} + {{ public_vpn_net_cidr }} + {{ campus_vpn_net_cidr }} + 127.0.0.0/8 + [::ffff:127.0.0.0]/104 + [::1]/128 + - { p: relayhost, v: "[{{ front_private_addr }}]" } + - { p: inet_interfaces, v: "127.0.0.1 {{ core_addr }}" } + - { p: transport_maps, v: "hash:/etc/postfix/transport" } + notify: Restart Postfix. + +- name: Configure Postfix transport. + become: yes + copy: + content: | + .{{ domain_name }} local:$myhostname + .{{ domain_priv }} local:$myhostname + dest: /etc/postfix/transport + notify: Postmap transport. + +- name: Enable/Start Postfix. + become: yes + systemd: + service: postfix + enabled: yes + state: started + +- name: Install institute email aliases. + become: yes + blockinfile: + block: | + webmaster: root + admin: root + www-data: root + monkey: root + root: {{ ansible_user }} + path: /etc/aliases + marker: "# {mark} INSTITUTE MANAGED BLOCK" + notify: New aliases. + +- name: Install Dovecot IMAPd. + become: yes + apt: pkg=dovecot-imapd + +- name: Configure Dovecot IMAPd. 
+  become: yes
+  copy:
+    content: |
+      protocols = imap
+      ssl = required
+      ssl_cert = </etc/server.crt
+      ssl_key = </etc/server.key
+      mail_location = maildir:~/Maildir
+    dest: /etc/dovecot/local.conf
+  notify: Restart Dovecot.
+
+- name: Enable/Start Dovecot.
+  become: yes
+  systemd:
+    service: dovecot
+    enabled: yes
+    state: started
+
+- name: Install live web site.
+  become: yes
+  copy:
+    content: |
+      <VirtualHost *:80>
+        ServerName live
+        ServerAlias live.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/live
+        <Directory /WWW/live>
+          Require all granted
+          AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+          Require all granted
+          AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/live-error.log
+        CustomLog ${APACHE_LOG_DIR}/live-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/live-vhost.conf
+      </VirtualHost>
+    dest: /etc/apache2/sites-available/live.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Install test web site.
+  become: yes
+  copy:
+    content: |
+      <VirtualHost *:80>
+        ServerName test
+        ServerAlias test.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/test
+        <Directory /WWW/test>
+          Require all granted
+          AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+          Require all granted
+          AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/test-error.log
+        CustomLog ${APACHE_LOG_DIR}/test-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/test-vhost.conf
+      </VirtualHost>
+    dest: /etc/apache2/sites-available/test.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Install campus web site.
+  become: yes
+  copy:
+    content: |
+      <VirtualHost *:80>
+        ServerName www
+        ServerAlias www.{{ domain_priv }}
+        ServerAdmin webmaster@core.{{ domain_priv }}
+
+        DocumentRoot /WWW/campus
+        <Directory /WWW/campus>
+          Options Indexes FollowSymLinks MultiViews ExecCGI
+          AddHandler cgi-script .cgi
+          Require all granted
+          AllowOverride None
+        </Directory>
+
+        UserDir Public/HTML
+        <Directory /home/*/Public/HTML/>
+          Require all granted
+          AllowOverride None
+        </Directory>
+
+        ErrorLog ${APACHE_LOG_DIR}/campus-error.log
+        CustomLog ${APACHE_LOG_DIR}/campus-access.log combined
+
+        IncludeOptional /etc/apache2/sites-available/www-vhost.conf
+      </VirtualHost>
+    dest: /etc/apache2/sites-available/www.conf
+    mode: u=rw,g=r,o=r
+  notify: Restart Apache2.
+
+- name: Enable web sites.
+ become: yes + command: + cmd: a2ensite -q {{ item }} + creates: /etc/apache2/sites-enabled/{{ item }}.conf + loop: [ live, test, www ] + notify: Restart Apache2. + +- name: Enable/Start Apache2. + become: yes + systemd: + service: apache2 + enabled: yes + state: started + +- name: "Install Monkey's webupdate script." + become: yes + copy: + src: ../private/webupdate + dest: /usr/local/sbin/webupdate + mode: u=rx,g=rx,o= + owner: monkey + group: staff + +- name: "Create Monkey's webupdate job." + become: yes + cron: + minute: "*/15" + job: "[ -d /WWW/live ] && /usr/local/sbin/webupdate" + name: webupdate + user: monkey + +- name: Install OpenVPN. + become: yes + apt: pkg=openvpn + +- name: Enable IP forwarding. + become: yes + sysctl: + name: net.ipv4.ip_forward + value: "1" + state: present + +- name: Install OpenVPN secret. + become: yes + copy: + src: ../Secret/front-ta.key + dest: /etc/openvpn/ta.key + mode: u=r,g=,o= + notify: Restart OpenVPN. + +- name: Install OpenVPN client certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/openvpn/client.{{ item.typ }} + mode: "{{ item.mode }}" + loop: + - { path: "issued/core", typ: crt, mode: "u=r,g=r,o=r" } + - { path: "private/core", typ: key, mode: "u=r,g=,o=" } + notify: Restart OpenVPN. + +- name: Configure OpenVPN. + become: yes + copy: + content: | + client + dev-type tun + dev ovpn + remote {{ front_addr }} + nobind + user nobody + group nogroup + persist-key + persist-tun + cipher AES-256-GCM + auth SHA256 + remote-cert-tls server + verify-x509-name {{ domain_name }} name + verb 3 + ca /usr/local/share/ca-certificates/{{ domain_name }}.crt + cert client.crt + key client.key + tls-auth ta.key 1 + dest: /etc/openvpn/front.conf + mode: u=r,g=r,o= + notify: Restart OpenVPN. + +- name: Enable/Start OpenVPN. + become: yes + systemd: + service: openvpn@front + state: started + enabled: yes + +- name: Install NAGIOS4. 
+ become: yes + apt: + pkg: [ nagios4, monitoring-plugins-basic, nagios-nrpe-plugin, + lm-sensors ] + +- name: Install inst_sensors NAGIOS plugin. + become: yes + copy: + src: inst_sensors + dest: /usr/local/sbin/inst_sensors + mode: u=rwx,g=rx,o=rx + +- name: Configure NAGIOS4. + become: yes + lineinfile: + path: /etc/nagios4/nagios.cfg + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + backrefs: yes + loop: + - { regexp: "^( *cfg_file *= *localhost.cfg)", line: "# \\1" } + - { regexp: "^( *admin_email *= *)", line: "\\1{{ ansible_user }}@localhost" } + notify: Reload NAGIOS4. + +- name: Configure NAGIOS4 contacts. + become: yes + lineinfile: + path: /etc/nagios4/objects/contacts.cfg + regexp: "^( *email +)" + line: "\\1sysadm@localhost" + backrefs: yes + notify: Reload NAGIOS4. + +- name: Configure NAGIOS4 monitors. + become: yes + template: + src: nagios.cfg + dest: /etc/nagios4/conf.d/institute.cfg + notify: Reload NAGIOS4. + +- name: Enable/Start NAGIOS4. + become: yes + systemd: + service: nagios4 + enabled: yes + state: started + +- name: Install backup script. + become: yes + copy: + src: ../private/backup + dest: /usr/local/sbin/backup + mode: u=rx,g=r,o= + +- name: Install packages required by Nextcloud. + become: yes + apt: + pkg: [ apache2, mariadb-server, php, php-apcu, php-bcmath, + php-curl, php-gd, php-gmp, php-json, php-mysql, + php-mbstring, php-intl, php-imagick, php-xml, php-zip, + libapache2-mod-php ] + +- name: Enable Apache2 modules for Nextcloud. + become: yes + apache2_module: + name: "{{ item }}" + loop: [ rewrite, headers, env, dir, mime ] + +- name: Install Nextcloud web configuration. + become: yes + copy: + src: nextcloud.conf + dest: /etc/apache2/sites-available/nextcloud.conf + notify: Restart Apache2. + +- name: Enable Nextcloud web configuration. + become: yes + command: + cmd: a2ensite nextcloud + creates: /etc/apache2/sites-enabled/nextcloud.conf + notify: Restart Apache2. 
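The `nextcloud.conf` installed above redirects the `.well-known` service-discovery paths (CardDAV, CalDAV, WebFinger, NodeInfo) into Nextcloud. A rough Python model of those mod_rewrite rules, for illustration only (Apache does the real matching):

```python
import re

# The four .well-known redirect rules from nextcloud.conf,
# as (pattern, 301-target) pairs.
RULES = [
    (r'^\.well-known/carddav',   '/nextcloud/remote.php/dav'),
    (r'^\.well-known/caldav',    '/nextcloud/remote.php/dav'),
    (r'^\.well-known/webfinger', '/nextcloud/index.php/.well-known/webfinger'),
    (r'^\.well-known/nodeinfo',  '/nextcloud/index.php/.well-known/nodeinfo'),
]

def redirect(path):
    """Return the redirect target for path, or None if no rule matches."""
    for pattern, target in RULES:
        if re.match(pattern, path):
            return target
    return None

print(redirect('.well-known/carddav'))  # → /nextcloud/remote.php/dav
```

DAV and calendar clients probe these paths at the server root, so without the redirects they would never find Nextcloud under the `/nextcloud` alias.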
+ +- name: Add {{ ansible_user }} to web server group. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: www-data + +- name: Create Nextcloud cron job. + become: yes + cron: + minute: 11,26,41,56 + job: >- + [ -r /var/www/nextcloud/cron.php ] + && /usr/bin/php -f /var/www/nextcloud/cron.php + name: Nextcloud + user: www-data + +- name: Link /var/www/nextcloud. + become: yes + file: + path: /var/www/nextcloud + src: /Nextcloud/nextcloud + state: link + force: yes + follow: no + +- name: Set PHP memory_limit for Nextcloud. + become: yes + lineinfile: + path: /etc/php/7.4/apache2/php.ini + regexp: memory_limit *= + line: memory_limit = 512M + +- name: Include PHP parameters for Nextcloud. + become: yes + copy: + content: | + ; priority=20 + apc.enable_cli=1 + opcache.enable=1 + opcache.enable_cli=1 + opcache.interned_strings_buffer=8 + opcache.max_accelerated_files=10000 + opcache.memory_consumption=128 + opcache.save_comments=1 + opcache.revalidate_freq=1 + dest: /etc/php/7.4/mods-available/nextcloud.ini + notify: Restart Apache2. + +- name: Enable Nextcloud PHP modules. + become: yes + command: + cmd: phpenmod {{ item }} + creates: /etc/php/7.4/apache2/conf.d/20-{{ item }}.ini + loop: [ nextcloud, apcu ] + notify: Restart Apache2. + +- name: Test for /Nextcloud/nextcloud/. + stat: + path: /Nextcloud/nextcloud + register: nextcloud +- debug: + msg: "/Nextcloud/ does not yet exist" + when: not nextcloud.stat.exists + +- name: Configure Nextcloud trusted domains. + become: yes + replace: + path: /var/www/nextcloud/config/config.php + regexp: "^( *)'trusted_domains' *=>[^)]*[)],$" + replace: |- + \1'trusted_domains' => + \1array ( + \1 0 => 'core.{{ domain_priv }}', + \1), + when: nextcloud.stat.exists + +- name: Configure Nextcloud dbpasswd. 
+ become: yes + lineinfile: + path: /var/www/nextcloud/config/config.php + regexp: "^ *'dbpassword' *=> *'.*', *$" + line: " 'dbpassword' => '{{ nextcloud_dbpass }}'," + insertbefore: "^[)];" + firstmatch: yes + when: nextcloud.stat.exists + +- name: Configure Nextcloud memcache. + become: yes + lineinfile: + path: /var/www/nextcloud/config/config.php + regexp: "^ *'memcache.local' *=> *'.*', *$" + line: " 'memcache.local' => '\\\\OC\\\\Memcache\\\\APCu'," + insertbefore: "^[)];" + firstmatch: yes + when: nextcloud.stat.exists + +- name: Configure Nextcloud for Pretty URLs. + become: yes + lineinfile: + path: /var/www/nextcloud/config/config.php + regexp: "{{ item.regexp }}" + line: "{{ item.line }}" + insertbefore: "^[)];" + firstmatch: yes + vars: + url: http://core.{{ domain_priv }}/nextcloud + loop: + - regexp: "^ *'overwrite.cli.url' *=>" + line: " 'overwrite.cli.url' => '{{ url }}'," + - regexp: "^ *'htaccess.RewriteBase' *=>" + line: " 'htaccess.RewriteBase' => '/nextcloud'," + when: nextcloud.stat.exists + +- name: Configure Nextcloud phone region. + become: yes + lineinfile: + path: /var/www/nextcloud/config/config.php + regexp: "^ *'default_phone_region' *=> *'.*', *$" + line: " 'default_phone_region' => '{{ nextcloud_region }}'," + insertbefore: "^[)];" + firstmatch: yes + when: nextcloud.stat.exists + +- name: Create /Nextcloud/dbbackup.cnf. + no_log: yes + become: yes + copy: + content: | + [mysqldump] + no-tablespaces + single-transaction + host=localhost + user=nextclouduser + password={{ nextcloud_dbpass }} + dest: /Nextcloud/dbbackup.cnf + mode: g=,o= + force: no + when: nextcloud.stat.exists + +- name: Update /Nextcloud/dbbackup.cnf password. + become: yes + lineinfile: + path: /Nextcloud/dbbackup.cnf + regexp: password= + line: password={{ nextcloud_dbpass }} + when: nextcloud.stat.exists + +- name: Install institute passwd command. 
+ become: yes + template: + src: passwd + dest: /usr/local/bin/passwd + mode: u=rwx,g=rx,o=rx + +- name: Authorize institute passwd command as {{ ansible_user }}. + become: yes + copy: + content: | + ALL ALL=({{ ansible_user }}) NOPASSWD: /usr/local/bin/passwd + dest: /etc/sudoers.d/01passwd + mode: u=r,g=r,o= + owner: root + group: root + +- name: Authorize {{ ansible_user }} to read /etc/shadow. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: shadow + +- name: Authorize {{ ansible_user }} to run /usr/bin/php as www-data. + become: yes + copy: + content: | + {{ ansible_user }} ALL=(www-data) NOPASSWD: /usr/bin/php + dest: /etc/sudoers.d/01www-data-php + mode: u=r,g=r,o= + owner: root + group: root + +- name: Install root PGP key file. + become: no + copy: + src: ../Secret/root-pub.pem + dest: ~/.gnupg-root-pub.pem + mode: u=r,g=r,o=r + notify: Import root PGP key. diff --git a/roles_t/core/templates/nagios.cfg b/roles_t/core/templates/nagios.cfg new file mode 100644 index 0000000..b170d15 --- /dev/null +++ b/roles_t/core/templates/nagios.cfg @@ -0,0 +1,135 @@ +define host { + use linux-server + host_name core + address 127.0.0.1 +} + +define service { + use local-service + host_name core + service_description Root Partition + check_command check_local_disk!20%!10%!/ +} + +define service { + use local-service + host_name core + service_description Current Users + check_command check_local_users!20!50 +} + +define service { + use local-service + host_name core + service_description Zombie Processes + check_command check_local_procs!5!10!Z +} + +define service { + use local-service + host_name core + service_description Total Processes + check_command check_local_procs!150!200!RSZDT +} + +define service { + use local-service + host_name core + service_description Current Load + check_command check_local_load!5.0,4.0,3.0!10.0,6.0,4.0 +} + +define service { + use local-service + host_name core + service_description Swap Usage + check_command 
check_local_swap!20%!10% +} + +define service { + use local-service + host_name core + service_description SSH + check_command check_ssh +} + +define service { + use local-service + host_name core + service_description HTTP + check_command check_http +} + +define command { + command_name inst_sensors + command_line /usr/local/sbin/inst_sensors +} + +define service { + use local-service + host_name core + service_description Temperature Sensors + check_command inst_sensors +} + +define host { + use linux-server + host_name gate + address {{ gate_addr }} +} + +define service { + use local-service + host_name gate + service_description PING + check_command check_ping!100.0,20%!500.0,60% +} + +define service { + use generic-service + host_name gate + service_description Root Partition + check_command check_nrpe!inst_root +} + +define service { + use generic-service + host_name gate + service_description Current Load + check_command check_nrpe!check_load +} + +define service { + use generic-service + host_name gate + service_description Zombie Processes + check_command check_nrpe!check_zombie_procs +} + +define service { + use generic-service + host_name gate + service_description Total Processes + check_command check_nrpe!check_total_procs +} + +define service { + use generic-service + host_name gate + service_description Swap Usage + check_command check_nrpe!inst_swap +} + +define service { + use generic-service + host_name gate + service_description SSH + check_command check_ssh +} + +define service { + use generic-service + host_name gate + service_description Temperature Sensors + check_command check_nrpe!inst_sensors +} diff --git a/roles_t/core/templates/passwd b/roles_t/core/templates/passwd new file mode 100644 index 0000000..e8e511d --- /dev/null +++ b/roles_t/core/templates/passwd @@ -0,0 +1,78 @@ +#!/bin/perl -wT + +use strict; + +$ENV{PATH} = "/usr/sbin:/usr/bin:/bin"; + +my ($username) = getpwuid $<; +if ($username ne "{{ ansible_user }}") { + { exec 
("sudo", "-u", "{{ ansible_user }}", + "/usr/local/bin/passwd", $username) }; + print STDERR "Could not exec sudo: $!\n"; + exit 1; +} + +$username = $ARGV[0]; +my $passwd; +{ + my $SHADOW = new IO::File; + open $SHADOW, "; + close $SHADOW; + die "No /etc/shadow record found: $username\n" if ! defined $line; + (undef, $passwd) = split ":", $line; +} + +system "stty -echo"; +END { system "stty echo"; } + +print "Current password: "; +my $pass = ; chomp $pass; +print "\n"; +my $hash = crypt($pass, $passwd); +die "Sorry...\n" if $hash ne $passwd; + +print "New password: "; +$pass = ; chomp($pass); +die "Passwords must be at least 10 characters long.\n" + if length $pass < 10; +print "\nRetype password: "; +my $pass2 = ; chomp($pass2); +print "\n"; +die "New passwords do not match!\n" + if $pass2 ne $pass; + +use MIME::Base64; +my $epass = encode_base64 $pass; + +use File::Temp qw(tempfile); +my ($TMP, $tmp) = tempfile; +close $TMP; + +my $O = new IO::File; +open $O, ("| gpg --encrypt --armor" + ." --trust-model always --recipient root\@core" + ." > $tmp") or die "Error running gpg > $tmp: $!\n"; +print $O <flush; +copy $tmp, $O; +#print $O `cat $tmp`; +close $O or die "Error closing pipe to sendmail: $!\n"; + +print " +Your request was sent to Root. PLEASE WAIT for email confirmation +that the change was completed.\n"; +exit; diff --git a/roles_t/front/handlers/main.yml b/roles_t/front/handlers/main.yml new file mode 100644 index 0000000..1b4abd2 --- /dev/null +++ b/roles_t/front/handlers/main.yml @@ -0,0 +1,59 @@ +--- +- name: Update hostname. + become: yes + command: hostname -F /etc/hostname + +- name: Reload SSH server. + become: yes + systemd: + service: ssh + state: reloaded + +- name: Update CAs. + become: yes + command: update-ca-certificates + +- name: Restart Postfix. + become: yes + systemd: + service: postfix + state: restarted + +- name: Postmap header checks. 
+ become: yes + command: + chdir: /etc/postfix/ + cmd: postmap header_checks.cf + notify: Restart Postfix. + +- name: New aliases. + become: yes + command: newaliases + +- name: Restart Dovecot. + become: yes + systemd: + service: dovecot + state: restarted + +- name: Restart Apache2. + become: yes + systemd: + service: apache2 + state: restarted + +- name: Restart OpenVPN. + become: yes + systemd: + service: openvpn@server + state: restarted + +- name: Reload Systemd. + become: yes + command: systemctl daemon-reload + +- name: Restart Kamailio. + become: yes + systemd: + service: kamailio + state: restarted diff --git a/roles_t/front/tasks/main.yml b/roles_t/front/tasks/main.yml new file mode 100644 index 0000000..d30366e --- /dev/null +++ b/roles_t/front/tasks/main.yml @@ -0,0 +1,532 @@ +--- +- name: Include public variables. + include_vars: ../public/vars.yml + tags: accounts + +- name: Include private variables. + include_vars: ../private/vars.yml + tags: accounts + +- name: Include members. + include_vars: "{{ lookup('first_found', membership_rolls) }}" + tags: accounts + +- name: Configure hostname. + become: yes + copy: + content: "{{ domain_name }}\n" + dest: "{{ item }}" + loop: + - /etc/hostname + - /etc/mailname + notify: Update hostname. + +- name: Install systemd-resolved. + become: yes + apt: pkg=systemd-resolved + when: + - ansible_distribution == 'Debian' + - 11 < ansible_distribution_major_version|int + +- name: Enable/Start systemd-networkd. + become: yes + systemd: + service: systemd-networkd + enabled: yes + state: started + +- name: Enable/Start systemd-resolved. + become: yes + systemd: + service: systemd-resolved + enabled: yes + state: started + +- name: Link /etc/resolv.conf. + become: yes + file: + path: /etc/resolv.conf + src: /run/systemd/resolve/resolv.conf + state: link + force: yes + when: + - ansible_distribution == 'Debian' + - 12 > ansible_distribution_major_version|int + +- name: Add {{ ansible_user }} to system groups. 
+ become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: root,adm + +- name: Install SSH host keys. + become: yes + copy: + src: ../Secret/ssh_front/etc/ssh/{{ item.name }} + dest: /etc/ssh/{{ item.name }} + mode: "{{ item.mode }}" + loop: + - { name: ssh_host_ecdsa_key, mode: "u=rw,g=,o=" } + - { name: ssh_host_ecdsa_key.pub, mode: "u=rw,g=r,o=r" } + - { name: ssh_host_ed25519_key, mode: "u=rw,g=,o=" } + - { name: ssh_host_ed25519_key.pub, mode: "u=rw,g=r,o=r" } + - { name: ssh_host_rsa_key, mode: "u=rw,g=,o=" } + - { name: ssh_host_rsa_key.pub, mode: "u=rw,g=r,o=r" } + notify: Reload SSH server. + +- name: Create monkey. + become: yes + user: + name: monkey + system: yes + +- name: Authorize monkey@core. + become: yes + vars: + pubkeyfile: ../Secret/ssh_monkey/id_rsa.pub + authorized_key: + user: monkey + key: "{{ lookup('file', pubkeyfile) }}" + manage_dir: yes + +- name: Add {{ ansible_user }} to monkey group. + become: yes + user: + name: "{{ ansible_user }}" + append: yes + groups: monkey + +- name: Install rsync. + become: yes + apt: pkg=rsync + +- name: Install basic software. + become: yes + apt: pkg=unattended-upgrades + +- name: Create user accounts. + become: yes + user: + name: "{{ item }}" + password: "{{ members[item].password_front }}" + update_password: always + home: /home/{{ item }} + loop: "{{ usernames }}" + when: members[item].status == 'current' + tags: accounts + +- name: Disable former users. + become: yes + user: + name: "{{ item }}" + password: "!" + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts + +- name: Revoke former user authorized_keys. + become: yes + file: + path: /home/{{ item }}/.ssh/authorized_keys + state: absent + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts + +- name: Trust the institute CA. 
+ become: yes + copy: + src: ../Secret/CA/pki/ca.crt + dest: /usr/local/share/ca-certificates/{{ domain_name }}.crt + mode: u=r,g=r,o=r + owner: root + group: root + notify: Update CAs. + +- name: Install server certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/server.{{ item.typ }} + mode: "{{ item.mode }}" + force: no + loop: + - { path: "issued/{{ domain_name }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/{{ domain_name }}", typ: key, + mode: "u=r,g=,o=" } + notify: + - Restart Postfix. + - Restart Dovecot. + +- name: Install Postfix. + become: yes + apt: pkg=postfix + +- name: Configure Postfix. + become: yes + lineinfile: + path: /etc/postfix/main.cf + regexp: "^ *{{ item.p }} *=" + line: "{{ item.p }} = {{ item.v }}" + loop: + - { p: smtpd_tls_cert_file, v: /etc/server.crt } + - { p: smtpd_tls_key_file, v: /etc/server.key } + - p: mynetworks + v: >- + {{ public_vpn_net_cidr }} + 127.0.0.0/8 + [::ffff:127.0.0.0]/104 + [::1]/128 + - p: smtpd_recipient_restrictions + v: >- + permit_mynetworks + reject_unauth_pipelining + reject_unauth_destination + reject_unknown_sender_domain + - p: smtpd_relay_restrictions + v: permit_mynetworks reject_unauth_destination + - { p: message_size_limit, v: 104857600 } + - { p: delay_warning_time, v: 1h } + - { p: maximal_queue_lifetime, v: 4h } + - { p: bounce_queue_lifetime, v: 4h } + - { p: home_mailbox, v: Maildir/ } + - p: smtp_header_checks + v: regexp:/etc/postfix/header_checks.cf + notify: Restart Postfix. + +- name: Install Postfix header_checks. + become: yes + copy: + content: | + /^Received:/ IGNORE + /^User-Agent:/ IGNORE + dest: /etc/postfix/header_checks.cf + notify: Postmap header checks. + +- name: Enable/Start Postfix. + become: yes + systemd: + service: postfix + enabled: yes + state: started + +- name: Install institute email aliases. 
+ become: yes + blockinfile: + block: | + abuse: root + webmaster: root + admin: root + monkey: monkey@{{ front_private_addr }} + root: {{ ansible_user }} + path: /etc/aliases + marker: "# {mark} INSTITUTE MANAGED BLOCK" + notify: New aliases. + +- name: Install Dovecot IMAPd. + become: yes + apt: pkg=dovecot-imapd + +- name: Configure Dovecot IMAPd. + become: yes + copy: + content: | + protocols = imap + ssl = required + ssl_cert = </etc/server.crt + ssl_key = </etc/server.key + dest: /etc/dovecot/local.conf + notify: Restart Dovecot. + +- name: Enable/Start Dovecot. + become: yes + systemd: + service: dovecot + enabled: yes + state: started + +- name: Install Apache2. + become: yes + apt: pkg=apache2 + +- name: Enable Apache2 modules. + become: yes + apache2_module: + name: "{{ item }}" + loop: [ ssl, userdir ] + notify: Restart Apache2. + +- name: Configure web site. + become: yes + copy: + content: | + ServerName {{ domain_name }} + ServerAdmin webmaster@{{ domain_name }} + + DocumentRoot /home/www + <Directory /home/www/> + Require all granted + AllowOverride None + </Directory>
 + + UserDir /home/www-users + <Directory /home/www-users/> + Require all granted + AllowOverride None + </Directory> + + ErrorLog ${APACHE_LOG_DIR}/error.log + CustomLog ${APACHE_LOG_DIR}/access.log combined + + <VirtualHost *:80> + Redirect permanent / https://{{ domain_name }}/ + </VirtualHost> + + <VirtualHost *:443> + SSLEngine on + SSLCertificateFile /etc/server.crt + SSLCertificateKeyFile /etc/server.key + IncludeOptional \ + /etc/apache2/sites-available/{{ domain_name }}-vhost.conf + </VirtualHost> + + SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 + SSLHonorCipherOrder on + SSLCipherSuite {{ [ 'ECDHE-ECDSA-AES128-GCM-SHA256', + 'ECDHE-ECDSA-AES256-GCM-SHA384', + 'ECDHE-ECDSA-AES128-SHA', + 'ECDHE-ECDSA-AES256-SHA', + 'ECDHE-ECDSA-AES128-SHA256', + 'ECDHE-ECDSA-AES256-SHA384', + 'ECDHE-RSA-AES128-GCM-SHA256', + 'ECDHE-RSA-AES256-GCM-SHA384', + 'ECDHE-RSA-AES128-SHA', + 'ECDHE-RSA-AES256-SHA', + 'ECDHE-RSA-AES128-SHA256', + 'ECDHE-RSA-AES256-SHA384', + 'DHE-RSA-AES128-GCM-SHA256', + 'DHE-RSA-AES256-GCM-SHA384', + 'DHE-RSA-AES128-SHA', + 'DHE-RSA-AES256-SHA', + 'DHE-RSA-AES128-SHA256', + 'DHE-RSA-AES256-SHA256', + '!aNULL', + '!eNULL', + '!LOW', + '!3DES', + '!MD5', + '!EXP', + '!PSK', + '!SRP', + '!DSS', + '!RC4' ] |join(":") }} + dest: /etc/apache2/sites-available/{{ domain_name }}.conf + notify: Restart Apache2. + +- name: Enable web site. + become: yes + command: + cmd: a2ensite -q {{ domain_name }} + creates: /etc/apache2/sites-enabled/{{ domain_name }}.conf + notify: Restart Apache2. + +- name: Enable/Start Apache2. + become: yes + systemd: + service: apache2 + enabled: yes + state: started + +- name: Disable default vhosts. + become: yes + file: + path: /etc/apache2/sites-enabled/{{ item }} + state: absent + loop: [ 000-default.conf, default-ssl.conf ] + notify: Restart Apache2. + +- name: Disable other-vhosts-access-log option. + become: yes + file: + path: /etc/apache2/conf-enabled/other-vhosts-access-log.conf + state: absent + notify: Restart Apache2. + +- name: Create UserDir. 
+ become: yes + file: + path: /home/www-users/ + state: directory + +- name: Create UserDir links. + become: yes + file: + path: /home/www-users/{{ item }} + src: /home/{{ item }}/Public/HTML + state: link + force: yes + loop: "{{ usernames }}" + when: members[item].status == 'current' + tags: accounts + +- name: Disable former UserDir links. + become: yes + file: + path: /home/www-users/{{ item }} + state: absent + loop: "{{ usernames }}" + when: members[item].status != 'current' + tags: accounts + +- name: Install OpenVPN. + become: yes + apt: pkg=openvpn + +- name: Enable IP forwarding. + become: yes + sysctl: + name: net.ipv4.ip_forward + value: "1" + state: present + +- name: Create OpenVPN client configuration directory. + become: yes + file: + path: /etc/openvpn/ccd + state: directory + notify: Restart OpenVPN. + +- name: Install OpenVPN client configuration for Core. + become: yes + copy: + content: | + iroute {{ private_net_and_mask }} + iroute {{ campus_vpn_net_and_mask }} + dest: /etc/openvpn/ccd/core + notify: Restart OpenVPN. + +- name: Disable former VPN clients. + become: yes + copy: + content: "disable\n" + dest: /etc/openvpn/ccd/{{ item }} + loop: "{{ revoked }}" + tags: accounts + +- name: Install OpenVPN server certificate/key. + become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/openvpn/server.{{ item.typ }} + mode: "{{ item.mode }}" + loop: + - { path: "issued/{{ domain_name }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/{{ domain_name }}", typ: key, + mode: "u=r,g=,o=" } + notify: Restart OpenVPN. + +- name: Install OpenVPN secrets. + become: yes + copy: + src: ../Secret/{{ item.src }} + dest: /etc/openvpn/{{ item.dest }} + mode: u=r,g=,o= + loop: + - { src: front-dh2048.pem, dest: dh2048.pem } + - { src: front-ta.key, dest: ta.key } + notify: Restart OpenVPN. + +- name: Configure OpenVPN. 
+ become: yes + copy: + content: | + server {{ public_vpn_net_and_mask }} + client-config-dir /etc/openvpn/ccd + route {{ private_net_and_mask }} + route {{ campus_vpn_net_and_mask }} + push "route {{ private_net_and_mask }}" + push "route {{ campus_vpn_net_and_mask }}" + dev-type tun + dev ovpn + topology subnet + client-to-client + keepalive 10 120 + push "dhcp-option DOMAIN {{ domain_priv }}" + push "dhcp-option DNS {{ core_addr }}" + user nobody + group nogroup + persist-key + persist-tun + cipher AES-256-GCM + auth SHA256 + max-clients 20 + ifconfig-pool-persist ipp.txt + status openvpn-status.log + verb 3 + ca /usr/local/share/ca-certificates/{{ domain_name }}.crt + cert server.crt + key server.key + dh dh2048.pem + tls-auth ta.key 0 + dest: /etc/openvpn/server.conf + mode: u=r,g=r,o= + notify: Restart OpenVPN. + +- name: Enable/Start OpenVPN. + become: yes + systemd: + service: openvpn@server + enabled: yes + state: started + +- name: Install Kamailio. + become: yes + apt: pkg=kamailio + +- name: Create Kamailio/Systemd configuration drop. + become: yes + file: + path: /etc/systemd/system/kamailio.service.d + state: directory + +- name: Create Kamailio dependence on OpenVPN server. + become: yes + copy: + content: | + [Unit] + Requires=sys-devices-virtual-net-ovpn.device + After=sys-devices-virtual-net-ovpn.device + dest: /etc/systemd/system/kamailio.service.d/depend.conf + notify: Reload Systemd. + +- name: Configure Kamailio. + become: yes + copy: + content: | + listen=udp:{{ front_private_addr }}:5060 + dest: /etc/kamailio/kamailio-local.cfg + notify: Restart Kamailio. + +- name: Enable/Start Kamailio. + become: yes + systemd: + service: kamailio + enabled: yes + state: started diff --git a/roles_t/gate/handlers/main.yml b/roles_t/gate/handlers/main.yml new file mode 100644 index 0000000..9121860 --- /dev/null +++ b/roles_t/gate/handlers/main.yml @@ -0,0 +1,16 @@ +--- +- name: Apply netplan. 
+ become: yes + command: netplan apply + +- name: Restart DHCP server. + become: yes + systemd: + service: isc-dhcp-server + state: restarted + +- name: Restart OpenVPN. + become: yes + systemd: + service: openvpn@server + state: restarted diff --git a/roles_t/gate/tasks/main.yml b/roles_t/gate/tasks/main.yml new file mode 100644 index 0000000..cf65470 --- /dev/null +++ b/roles_t/gate/tasks/main.yml @@ -0,0 +1,227 @@ +--- +- name: Include public variables. + include_vars: ../public/vars.yml + tags: accounts +- name: Include private variables. + include_vars: ../private/vars.yml + tags: accounts +- name: Include members. + include_vars: "{{ lookup('first_found', membership_rolls) }}" + tags: accounts + +- name: Install netplan (gate). + become: yes + apt: pkg=netplan.io + +- name: Configure netplan (gate). + become: yes + copy: + content: | + network: + ethernets: + lan: + match: + macaddress: {{ gate_lan_mac }} + addresses: [ {{ gate_addr_cidr }} ] + set-name: lan + dhcp4: false + nameservers: + addresses: [ {{ core_addr }} ] + search: [ {{ domain_priv }} ] + routes: + - to: {{ public_vpn_net_cidr }} + via: {{ core_addr }} + wifi: + match: + macaddress: {{ gate_wifi_mac }} + addresses: [ {{ gate_wifi_addr_cidr }} ] + set-name: wifi + dhcp4: false + dest: /etc/netplan/60-gate.yaml + mode: u=rw,g=r,o= + notify: Apply netplan. + +- name: Install netplan (ISP). + become: yes + copy: + content: | + network: + ethernets: + isp: + match: + macaddress: {{ gate_isp_mac }} + set-name: isp + dhcp4: true + dhcp4-overrides: + use-dns: false + dest: /etc/netplan/60-isp.yaml + mode: u=rw,g=r,o= + force: no + notify: Apply netplan. + +- name: Install UFW. + become: yes + apt: pkg=ufw + +- name: Configure UFW policy. 
+ become: yes + lineinfile: + path: /etc/default/ufw + line: "{{ item.line }}" + regexp: "{{ item.regexp }}" + loop: + - { line: "DEFAULT_INPUT_POLICY=\"ACCEPT\"", + regexp: "^DEFAULT_INPUT_POLICY=" } + - { line: "DEFAULT_OUTPUT_POLICY=\"ACCEPT\"", + regexp: "^DEFAULT_OUTPUT_POLICY=" } + - { line: "DEFAULT_FORWARD_POLICY=\"DROP\"", + regexp: "^DEFAULT_FORWARD_POLICY=" } + +- name: Configure UFW rules. + become: yes + vars: + ACCEPT_RELATED: -m state --state ESTABLISHED,RELATED -j ACCEPT + blockinfile: + path: /etc/ufw/before.rules + block: | + *nat + :POSTROUTING ACCEPT [0:0] + -A POSTROUTING -s {{ private_net_cidr }} -o isp -j MASQUERADE + -A POSTROUTING -s {{ gate_wifi_net_cidr }} -o isp -j MASQUERADE + COMMIT + + *filter + -A FORWARD -i lan -o isp -j ACCEPT + -A FORWARD -i wifi -o isp -j ACCEPT + -A FORWARD -i isp -o lan {{ ACCEPT_RELATED }} + -A FORWARD -i isp -o wifi {{ ACCEPT_RELATED }} + -A FORWARD -i lan -o ovpn -j ACCEPT + -A FORWARD -i ovpn -o lan -j ACCEPT + COMMIT + insertafter: EOF + +- name: Install DHCP server. + become: yes + apt: pkg=isc-dhcp-server + +- name: Configure DHCP interface. + become: yes + lineinfile: + path: /etc/default/isc-dhcp-server + line: INTERFACESv4="wifi" + regexp: ^INTERFACESv4= + notify: Restart DHCP server. + +- name: Configure DHCP for WiFiAP service. + become: yes + copy: + content: | + default-lease-time 3600; + max-lease-time 7200; + ddns-update-style none; + authoritative; + log-facility daemon; + + subnet {{ gate_wifi_net }} netmask {{ gate_wifi_net_mask }} { + option subnet-mask {{ gate_wifi_net_mask }}; + option broadcast-address {{ gate_wifi_broadcast }}; + option routers {{ gate_wifi_addr }}; + } + + host {{ wifi_wan_name }} { + hardware ethernet {{ wifi_wan_mac }}; + fixed-address {{ wifi_wan_addr }}; + } + dest: /etc/dhcp/dhcpd.conf + notify: Restart DHCP server. + +- name: Enable DHCP server. + become: yes + systemd: + service: isc-dhcp-server + enabled: yes + +- name: Install server certificate/key. 
+ become: yes + copy: + src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }} + dest: /etc/server.{{ item.typ }} + mode: "{{ item.mode }}" + loop: + - { path: "issued/gate.{{ domain_priv }}", typ: crt, + mode: "u=r,g=r,o=r" } + - { path: "private/gate.{{ domain_priv }}", typ: key, + mode: "u=r,g=,o=" } + notify: Restart OpenVPN. + +- name: Install OpenVPN. + become: yes + apt: pkg=openvpn + +- name: Enable IP forwarding. + become: yes + sysctl: + name: net.ipv4.ip_forward + value: "1" + state: present + +- name: Create OpenVPN client configuration directory. + become: yes + file: + path: /etc/openvpn/ccd + state: directory + notify: Restart OpenVPN. + +- name: Disable former VPN clients. + become: yes + copy: + content: "disable\n" + dest: /etc/openvpn/ccd/{{ item }} + loop: "{{ revoked }}" + notify: Restart OpenVPN. + tags: accounts + +- name: Install OpenVPN secrets. + become: yes + copy: + src: ../Secret/{{ item.src }} + dest: /etc/openvpn/{{ item.dest }} + mode: u=r,g=,o= + loop: + - { src: gate-dh2048.pem, dest: dh2048.pem } + - { src: gate-ta.key, dest: ta.key } + notify: Restart OpenVPN. + +- name: Configure OpenVPN. + become: yes + copy: + content: | + server {{ campus_vpn_net_and_mask }} + client-config-dir /etc/openvpn/ccd + push "route {{ private_net_and_mask }}" + push "route {{ public_vpn_net_and_mask }}" + dev-type tun + dev ovpn + topology subnet + client-to-client + keepalive 10 120 + push "dhcp-option DOMAIN {{ domain_priv }}" + push "dhcp-option DNS {{ core_addr }}" + user nobody + group nogroup + persist-key + persist-tun + cipher AES-256-GCM + auth SHA256 + max-clients 20 + ifconfig-pool-persist ipp.txt + status openvpn-status.log + verb 3 + ca /usr/local/share/ca-certificates/{{ domain_name }}.crt + cert /etc/server.crt + key /etc/server.key + dh dh2048.pem + tls-auth ta.key 0 + dest: /etc/openvpn/server.conf + mode: u=r,g=r,o= + notify: Restart OpenVPN.
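Both Front's and Gate's tasks render a flat OpenVPN `server.conf` from an inline template. A quick sanity check of such a rendered file can be sketched as follows; this is only an illustration (the sample file and the list of directives checked are assumptions for the sketch, not part of the roles):

```shell
# Sketch: confirm a rendered OpenVPN server.conf contains the
# hardening directives the roles set.  The here-document below
# stands in for a real rendered file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
topology subnet
keepalive 10 120
cipher AES-256-GCM
auth SHA256
user nobody
group nogroup
EOF
status=0
for d in 'cipher AES-256-GCM' 'auth SHA256' 'user nobody' 'group nogroup'
do
    # -x matches the whole line, so partial directives do not pass.
    if grep -qx "$d" "$conf"; then
        echo "ok: $d"
    else
        echo "MISSING: $d"
        status=1
    fi
done
rm -f "$conf"
exit $status
```

Run against an actual `/etc/openvpn/server.conf`, a non-zero exit status flags any directive that a hand edit may have dropped.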