VPN on Front (via hotel Wi-Fi). When /on/ campus, members can use the
much faster and always available (despite Internet connectivity
issues) VPN on Gate (via campus Wi-Fi). A member's Android phones and
-devices can use the same Wi-Fis, VPNs (via the OpenVPN app) and
+devices can use the same Wi-Fis, VPNs (via the WireGuard™ app) and
services. On a desktop or by phone, at home or abroad, members can
access their email and the institute's private web and cloud.
domain name is private and the service is on a directly connected
private network.
-** The VPN Services
-
-The institute's public and campus VPNs have many common configuration
-options that are discussed here. These are included, with example
-certificates and network addresses, in the complete server
-configurations of [[*The Front Role][The Front Role]] and [[*The Gate Role][The Gate Role]], as well as the
-matching client configurations in [[*The Core Role][The Core Role]] and the =.ovpn= files
-generated by [[*The Client Command][The Client Command]]. The configurations are based on the
-documentation for OpenVPN v2.4: the ~openvpn(8)~ manual page and [[https://openvpn.net/community-resources/reference-manual-for-openvpn-2-4/][this
-web page]].
-
-*** The VPN Configuration Options
-
-The institute VPNs use UDP on a subnet topology (rather than
-point-to-point) with "split tunneling". The UDP support accommodates
-real-time, connection-less protocols. The split tunneling is for
-efficiency with frontier bandwidth. The subnet topology, with the
-~client-to-client~ option, allows members to "talk" to each other on
-the VPN subnets using any (experimental) protocol.
-
-#+NAME: openvpn-dev-mode
-#+CAPTION: ~openvpn-dev-mode~
-#+BEGIN_SRC conf
-dev-type tun
-dev ovpn
-topology subnet
-client-to-client
-#+END_SRC
-
-A ~keepalive~ option is included on the servers so that clients detect
-an unreachable server and reset the TLS session. The option's default
-is doubled to 2 minutes out of respect for frontier service
-interruptions.
-
-#+NAME: openvpn-keepalive
-#+CAPTION: ~openvpn-keepalive~
-#+BEGIN_SRC conf
-keepalive 10 120
-#+END_SRC
-
-As mentioned in [[*The Name Service][The Name Service]], the institute uses a campus name
-server. OpenVPN is instructed to push its address and the campus
-search domain.
-
-#+NAME: openvpn-dns
-#+CAPTION: ~openvpn-dns~
-#+BEGIN_SRC conf
-push "dhcp-option DOMAIN {{ domain_priv }}"
-push "dhcp-option DNS {{ core_addr }}"
-#+END_SRC
-
-The institute does not put the OpenVPN server in a ~chroot~ jail, but
-it does drop privileges to run as user ~nobody:nobody~. The
-~persist-~ options are needed because ~nobody~ cannot open the tunnel
-device nor the key files.
-
-#+NAME: openvpn-drop-priv
-#+CAPTION: ~openvpn-drop-priv~
-#+BEGIN_SRC conf
-user nobody
-group nogroup
-persist-key
-persist-tun
-#+END_SRC
-
-The institute does a little additional hardening, sacrificing some
-compatibility with out-of-date clients. Such clients are generally
-frowned upon at the institute. Here ~cipher~ is set to ~AES-256-GCM~,
-the default for OpenVPN v2.4, and ~auth~ is upped to ~SHA256~ from
-~SHA1~.
-
-#+NAME: openvpn-crypt
-#+CAPTION: ~openvpn-crypt~
-#+BEGIN_SRC conf
-cipher AES-256-GCM
-auth SHA256
-#+END_SRC
-
-Finally, a ~max-client~ limit was chosen to frustrate flooding while
-accommodating a few members with a handful of devices each.
-
-#+NAME: openvpn-max
-#+CAPTION: ~openvpn-max~
-#+BEGIN_SRC conf
-max-clients 20
-#+END_SRC
-
-The institute's servers are lightly loaded so a few debugging options
-are appropriate. To help recognize host addresses in the logs, and
-support direct client-to-client communication, host IP addresses are
-made "persistent" in the =ipp.txt= file. The server's status is
-periodically written to the =openvpn-status.log= and verbosity is
-raised from the default level 1 to level 3 (just short of a deluge).
-
-#+NAME: openvpn-debug
-#+CAPTION: ~openvpn-debug~
-#+BEGIN_SRC conf
-ifconfig-pool-persist ipp.txt
-status openvpn-status.log
-verb 3
-#+END_SRC
-
** Accounts
A small institute has just a handful of members. For simplicity (and
/not/ used. (Thus Core's configuration does not depend on
Front's.)
-The institute uses a number of X.509 certificates to authenticate VPN
-clients and servers. They are created by the EasyRSA Certificate
-Authority stored in [[file:Secret/CA/][=Secret/CA/=]].
+The institute uses a couple X.509 certificates to authenticate
+servers. They are created by the EasyRSA Certificate Authority stored
+in [[file:Secret/CA/][=Secret/CA/=]].
- [[file:Secret/CA/pki/ca.crt][=Secret/CA/pki/ca.crt=]] :: The institute CA certificate, used to
sign the other certificates.
- - [[file:Secret/CA/pki/issued/small.example.org.crt][=Secret/CA/pki/issued/small.example.org.crt=]] :: The public Apache,
- Postfix, and OpenVPN servers on Front.
-
- - [[file:Secret/CA/pki/issued/gate.small.private.crt][=Secret/CA/pki/issued/gate.small.private.crt=]] :: The campus
- OpenVPN server on Gate.
+ - [[file:Secret/CA/pki/issued/small.example.org.crt][=Secret/CA/pki/issued/small.example.org.crt=]] :: The public
+ Postfix, Dovecot and Apache servers on Front.
- [[file:Secret/CA/pki/issued/core.small.private.crt][=Secret/CA/pki/issued/core.small.private.crt=]] :: The campus
- Apache (thus Nextcloud), and Dovecot-IMAPd servers.
-
- - [[file:Secret/CA/pki/issued/core.crt][=Secret/CA/pki/issued/core.crt=]] :: Core's client certificate, by
- which it authenticates to Front.
+ Postfix, Dovecot and Apache (thus Nextcloud) servers on Core.
-The ~./inst client~ command creates client certificates and keys, and
-can generate OpenVPN configuration (=.ovpn=) files for Android and
-Debian. The command updates the institute membership roll, requiring
-the member's username, keeping a list of the member's clients (in case
-all authorizations need to be revoked quickly). The list of client
-certificates that have been revoked is stored along with the
-membership roll (in =private/members.yml= as the value of ~revoked~).
+The ~./inst client~ command updates the institute membership roll,
+which lists members and their clients' public keys, and is stored in
+=private/members.yml=.
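+
+For example, a member's notebook might be registered with a command
+like this, using a public key generated on the device as described
+in [[*The Client Command][The Client Command]]:
+
+: $ ./inst client debian dick dick \
+:     4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
+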
Finally, the institute uses an OpenPGP key to secure sensitive emails
(containing passwords or private keys) to Core.
- [[file:Secret/root-sec.pem][=Secret/root-sec.pem=]] :: The ASCII armored OpenPGP secret key.
The institute administrator updates a couple encrypted copies of this
-drive after enrolling new members, changing a password, issuing VPN
-credentials, etc.
+drive after enrolling new members, changing a password,
+(de)authorizing a VPN client, etc.
: rsync -a Secret/ Secret2/
: rsync -a Secret/ Secret3/
** Subnets
-The small institute uses a private Ethernet, two VPNs, and an
+The small institute uses a private Ethernet, two VPNs, and a "wild",
untrusted Ethernet for the campus Wi-Fi access point(s) and wired IoT
-appliances). Each must have a unique private network address. Hosts
+appliances. Each must have a unique private network address. Hosts
using the VPNs are also using foreign private networks, e.g. a
notebook on a hotel Wi-Fi. To better the chances that all of these
networks get unique addresses, the small institute uses addresses in
private_net_cidr: 192.168.56.0/24
wild_net_cidr: 192.168.57.0/24
-public_vpn_net_cidr: 10.177.86.0/24
public_wg_net_cidr: 10.177.87.0/24
-campus_vpn_net_cidr: 10.84.138.0/24
campus_wg_net_cidr: 10.84.139.0/24
#+END_SRC
The network addresses are needed in several additional formats, e.g.
-network address and subnet mask (~10.84.138.0 255.255.255.0~). The
+network address and subnet mask (~10.84.139.0 255.255.255.0~). The
following boilerplate uses Ansible's ~ipaddr~ filter to set several
corresponding variables, each with an appropriate suffix,
e.g. ~_net_and_mask~ rather than ~_net_cidr~.
wild_net_and_mask: "{{ wild_net }} {{ wild_net_mask }}"
wild_net_broadcast:
"{{ wild_net_cidr | ansible.utils.ipaddr('broadcast') }}"
-public_vpn_net:
- "{{ public_vpn_net_cidr | ansible.utils.ipaddr('network') }}"
-public_vpn_net_mask:
- "{{ public_vpn_net_cidr | ansible.utils.ipaddr('netmask') }}"
-public_vpn_net_and_mask:
- "{{ public_vpn_net }} {{ public_vpn_net_mask }}"
public_wg_net:
"{{ public_wg_net_cidr | ansible.utils.ipaddr('network') }}"
public_wg_net_mask:
"{{ public_wg_net_cidr | ansible.utils.ipaddr('netmask') }}"
public_wg_net_and_mask:
"{{ public_wg_net }} {{ public_wg_net_mask }}"
-campus_vpn_net:
- "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('network') }}"
-campus_vpn_net_mask:
- "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('netmask') }}"
-campus_vpn_net_and_mask:
- "{{ campus_vpn_net }} {{ campus_vpn_net_mask }}"
campus_wg_net:
"{{ campus_wg_net_cidr | ansible.utils.ipaddr('network') }}"
campus_wg_net_mask:
~192.168.15.0~ in its example configuration of a "NAT Network"
(simulating Front's ISP's network).
-Finally, five host addresses are needed frequently in the Ansible
+Finally, four host addresses are needed frequently in the Ansible
code. The first two are Core's and Gate's addresses on the private
-Ethernet. The next two are Gate's and the campus Wi-Fi's addresses on
-the "wild" subnet, the untrusted Ethernet (~wild_net~) between Gate
-and the campus Wi-Fi access point(s) and IoT appliances. The last is
-Front's address on the public VPN, ~front_vpn_addr~. The following
-code block picks the obvious IP addresses for Core (host 1) and Gate
-(host 2) on the private Ethernet, Gate and a Wi-Fi access point on the
-wild Ethernet, and Front on the public VPN.
+Ethernet. The third is Gate's address on the wild Ethernet. The
+last is Front's address on the public WireGuard™ subnet,
+~front_wg_addr~. The following code block chooses host 1 for Core
+and host 2 for Gate on the private Ethernet, host 1 for Gate on the
+wild Ethernet (host 2 being an access point or wired IoT appliance),
+and host 1 for Front on the public WireGuard™ subnet.
#+CAPTION: [[file:private/vars.yml][=private/vars.yml=]]
#+BEGIN_SRC conf :tangle private/vars.yml
gate_addr_cidr: "{{ private_net_cidr | ansible.utils.ipaddr('2') }}"
gate_wild_addr_cidr:
"{{ wild_net_cidr | ansible.utils.ipaddr('1') }}"
-wifi_wan_addr_cidr: "{{ wild_net_cidr | ansible.utils.ipaddr('2') }}"
-front_vpn_addr_cidr:
- "{{ public_vpn_net_cidr | ansible.utils.ipaddr('1') }}"
-front_wg_port: 39608
front_wg_addr_cidr:
"{{ public_wg_net_cidr | ansible.utils.ipaddr('1') }}"
-core_wg_addr_cidr:
- "{{ public_wg_net_cidr | ansible.utils.ipaddr('2') }}"
-wg_client_front_addr_cidr:
- "{{ public_wg_net_cidr | ansible.utils.ipaddr('3') }}"
-campus_wg_port: 51820
-campus_wg_addr_cidr:
- "{{ campus_wg_net_cidr | ansible.utils.ipaddr('1') }}"
-wg_appl_addr_cidr:
- "{{ campus_wg_net_cidr | ansible.utils.ipaddr('2') }}"
-wg_client_gate_addr_cidr:
- "{{ campus_wg_net_cidr | ansible.utils.ipaddr('3') }}"
core_addr: "{{ core_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_addr: "{{ gate_addr_cidr | ansible.utils.ipaddr('address') }}"
gate_wild_addr:
"{{ gate_wild_addr_cidr | ansible.utils.ipaddr('address') }}"
-wifi_wan_addr:
- "{{ wifi_wan_addr_cidr | ansible.utils.ipaddr('address') }}"
-front_vpn_addr:
- "{{ front_vpn_addr_cidr | ansible.utils.ipaddr('address') }}"
front_wg_addr:
"{{ front_wg_addr_cidr | ansible.utils.ipaddr('address') }}"
-core_wg_addr:
- "{{ core_wg_addr_cidr | ansible.utils.ipaddr('address') }}"
-wg_client_front_addr:
- "{{ wg_client_front_addr_cidr | ansible.utils.ipaddr('address') }}"
-gate_wg_addr:
- "{{ campus_wg_addr_cidr | ansible.utils.ipaddr('address') }}"
-wg_appl_addr:
- "{{ wg_appl_addr_cidr | ansible.utils.ipaddr('address') }}"
-wg_client_gate_addr:
- "{{ wg_client_gate_addr_cidr | ansible.utils.ipaddr('address') }}"
#+END_SRC
modem and installed them as shown below.
: $ sudo apt install netplan.io systemd-resolved unattended-upgrades \
-: _ ntp isc-dhcp-server bind9 apache2 openvpn \
+: _ ntp isc-dhcp-server bind9 apache2 wireguard \
: _ postfix dovecot-imapd fetchmail expect rsync \
-: _ gnupg openssh-server wireguard
+: _ gnupg openssh-server
The Nextcloud configuration requires Apache2, MariaDB and a number of
PHP modules. Installing them while Core was on a cable modem sped up
cable modem and installed them as shown below.
: $ sudo apt install netplan.io systemd-resolved unattended-upgrades \
-: _ ufw isc-dhcp-server postfix openvpn \
-: _ openssh-server wireguard
+: _ ufw isc-dhcp-server postfix wireguard \
+: _ openssh-server
Next, the administrator concatenated a personal public ssh key and the
key found in [[file:Secret/ssh_admin/][=Secret/ssh_admin/=]] (created by [[*The CA Command][The CA Command]]) into an
perhaps with symbolic links to, for example,
=/etc/letsencrypt/live/small.example.org/fullchain.pem=.
-Note that the OpenVPN server does /not/ use =/etc/server.crt=. It
-uses the institute's CA and server certificates, and expects client
-certificates signed by the institute CA.
-
** Include Particulars
The first task, as in [[*The All Role][The All Role]], is to include the institute
emails. These and a few Front-specific Postfix configurations
settings make up the complete configuration (below).
-Front relays messages from the institute's public VPN via which Core
-relays messages from the campus.
+Front relays messages from the institute's public WireGuard™ subnet,
+via which Core relays messages from the campus.
#+NAME: postfix-front-networks
#+CAPTION: ~postfix-front-networks~
#+BEGIN_SRC conf
- p: mynetworks
v: >-
- {{ public_vpn_net_cidr }}
{{ public_wg_net_cidr }}
127.0.0.0/8
[::ffff:127.0.0.0]/104
abuse: root
webmaster: root
admin: root
- monkey: monkey@{{ front_vpn_addr }}
+ monkey: monkey@{{ front_wg_addr }}
root: {{ ansible_user }}
path: /etc/aliases
marker: "# {mark} INSTITUTE MANAGED BLOCK"
tags: accounts
#+END_SRC
-** Configure OpenVPN
-
-Front uses OpenVPN to provide the institute's public VPN service. The
-configuration is straightforward with one complication. OpenVPN needs
-to know how to route to the campus VPN, which is only accessible when
-Core is connected. OpenVPN supports these dynamic routes internally
-with client-specific configuration files. The small institute uses
-one of these, =/etc/openvpn/ccd/core=, so that OpenVPN will know to
-route packets for the campus networks to Core.
+** Configure Public WireGuard™ Subnet
-#+NAME: openvpn-ccd-core
-#+CAPTION: ~openvpn-ccd-core~
-#+BEGIN_SRC conf
-iroute {{ private_net_and_mask }}
-iroute {{ campus_vpn_net_and_mask }}
-#+END_SRC
+Front uses WireGuard™ to provide a public (Internet-accessible) VPN
+service. Core has an interface on this VPN and is expected to forward
+packets between it and the institute's other private networks.
-The VPN clients are /not/ configured to route /all/ of their traffic
-through the VPN, so Front pushes routes to the other institute
-networks. The clients thus know to route traffic for the private
-Ethernet or campus VPN to Front on the public VPN. (If the clients
-/were/ configured to route all traffic through the VPN, the one
-default route is all that would be needed.) Front itself is in the
-same situation, outside the institute networks with a default route
-through some ISP, and thus needs the same routes as the clients.
+The following example [[file:private/front-wg0.conf][=private/front-wg0.conf=]] configuration recognizes
+Core by its public key and routes the institute's private networks to
+it. It also recognizes Dick's notebook and his (replacement) phone,
+assigning them host numbers 4 and 6 on the VPN.
-#+NAME: openvpn-front-routes
-#+CAPTION: ~openvpn-front-routes~
+#+CAPTION: =private/front-wg0.conf=
#+BEGIN_SRC conf
-route {{ private_net_and_mask }}
-route {{ campus_vpn_net_and_mask }}
-push "route {{ private_net_and_mask }}"
-push "route {{ campus_vpn_net_and_mask }}"
-#+END_SRC
-
-The complete OpenVPN configuration for Front includes a ~server~
-option, the ~client-config-dir~ option, the routes mentioned above,
-and the common options discussed in [[*The VPN Service][The VPN Service]].
-
-#+NAME: openvpn-front
-#+CAPTION: ~openvpn-front~
-#+BEGIN_SRC conf :noweb no-export
-server {{ public_vpn_net_and_mask }}
-client-config-dir /etc/openvpn/ccd
-<<openvpn-front-routes>>
-<<openvpn-dev-mode>>
-<<openvpn-keepalive>>
-<<openvpn-dns>>
-<<openvpn-drop-priv>>
-<<openvpn-crypt>>
-<<openvpn-max>>
-<<openvpn-debug>>
-ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
-cert server.crt
-key server.key
-dh dh2048.pem
-tls-crypt shared.key
-#+END_SRC
-
-Finally, here are the tasks (and handler) required to install and
-configure the OpenVPN server on Front.
-
-#+CAPTION: [[file:roles_t/front/tasks/main.yml][=roles_t/front/tasks/main.yml=]]
-#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml :noweb no-export
-
-- name: Install OpenVPN.
- become: yes
- apt: pkg=openvpn
-
-- name: Enable IP forwarding.
- become: yes
- sysctl:
- name: net.ipv4.ip_forward
- value: "1"
- state: present
-
-- name: Create OpenVPN client configuration directory.
- become: yes
- file:
- path: /etc/openvpn/ccd
- state: directory
- notify: Restart OpenVPN.
-
-- name: Install OpenVPN client configuration for Core.
- become: yes
- copy:
- content: |
- <<openvpn-ccd-core>>
- dest: /etc/openvpn/ccd/core
- notify: Restart OpenVPN.
-
-- name: Disable former VPN clients.
- become: yes
- copy:
- content: "disable\n"
- dest: /etc/openvpn/ccd/{{ item }}
- loop: "{{ revoked }}"
- tags: accounts
-
-- name: Install OpenVPN server certificate/key.
- become: yes
- copy:
- src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
- dest: /etc/openvpn/server.{{ item.typ }}
- mode: "{{ item.mode }}"
- loop:
- - { path: "issued/{{ domain_name }}", typ: crt,
- mode: "u=r,g=r,o=r" }
- - { path: "private/{{ domain_name }}", typ: key,
- mode: "u=r,g=,o=" }
- notify: Restart OpenVPN.
-
-- name: Install OpenVPN secrets.
- become: yes
- copy:
- src: ../Secret/{{ item.src }}
- dest: /etc/openvpn/{{ item.dest }}
- mode: u=r,g=,o=
- loop:
- - { src: front-dh2048.pem, dest: dh2048.pem }
- - { src: front-shared.key, dest: shared.key }
- notify: Restart OpenVPN.
-
-- name: Configure OpenVPN.
- become: yes
- copy:
- content: |
- <<openvpn-front>>
- dest: /etc/openvpn/server.conf
- mode: u=r,g=r,o=
- notify: Restart OpenVPN.
-
-- name: Enable/Start OpenVPN.
- become: yes
- systemd:
- service: openvpn@server
- enabled: yes
- state: started
-#+END_SRC
-
-#+CAPTION: [[file:roles_t/front/handlers/main.yml][=roles_t/front/handlers/main.yml=]]
-#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml
-
-- name: Restart OpenVPN.
- become: yes
- systemd:
- service: openvpn@server
- state: restarted
-#+END_SRC
-
-** Configure Public WireGuard™
-
-Front uses WireGuard™ to provide a public VPN service. Core has an
-interface on this VPN (address: ~core_wg_addr~) and is expected to
-forward packets between it and the institute's other private networks.
-
-The following example [[file:Secret/front-wg0.conf][=Secret/front-wg0.conf=]] configuration recognizes
-Core by its public key, ~lGhC51~, and routes the institute's private
-networks to it. It also recognizes a member client, Dick's Notebook,
-by its public key ~4qd4xd...~ assigning it host number 4 on the VPN.
-
-#+CAPTION: [[file:Secret/front-wg0.conf][=Secret/front-wg0.conf=]]
-#+BEGIN_SRC conf :tangle Secret/front-wg0.conf
[Interface]
Address = 10.177.87.1/24
-PrivateKey = AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms=
ListenPort = 39608
-PostUp = resolvectl dns wg0 192.168.56.1
-PostUp = resolvectl domain wg0 small.private
+PostUp = wg set %i private-key /etc/wireguard/private-key
+PostUp = resolvectl dns %i 192.168.56.1
+PostUp = resolvectl domain %i small.private
# Core
[Peer]
PublicKey = lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
AllowedIPs = 10.177.87.2
-# AllowedIPs = 192.168.56.0/24 OpenVPN has this route.
-AllowedIPs = 10.84.138.0/24, 10.84.139.0/24
+AllowedIPs = 192.168.56.0/24
+AllowedIPs = 10.84.139.0/24
-# dicks-note
+# dick
[Peer]
PublicKey = 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
AllowedIPs = 10.177.87.4
+
+# dicks-razr
+[Peer]
+PublicKey = zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
+AllowedIPs = 10.177.87.6
#+END_SRC
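+
+The ~PostUp~ command loads Front's private key from
+=/etc/wireguard/private-key=, keeping the secret out of the
+configuration file. A key pair might be generated on the server
+with something like the following (the test machines use prepared
+example keys instead):
+
+#+BEGIN_SRC sh
+( umask 077
+  wg genkey | sudo tee /etc/wireguard/private-key | wg pubkey )
+#+END_SRC
+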
The configuration used on Dick's notebook when it is abroad looks like
#+BEGIN_SRC conf
[Interface]
-Address = 10.177.87.3
+Address = 10.177.87.4
-PrivateKey = WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs=
-PostUp = resolvectl dns wg0 192.168.56.1
-PostUp = resolvectl domain wg0 small.private
+PostUp = wg set %i private-key /etc/wireguard/private-key
+PostUp = resolvectl dns %i 192.168.56.1
+PostUp = resolvectl domain %i small.private
# Front
[Peer]
AllowedIPs = 10.177.87.1
AllowedIPs = 10.177.87.0/24
AllowedIPs = 192.168.56.0/24
-AllowedIPs = 10.84.138.0/24, 10.84.139.0/24
-AllowedIPs = 10.177.86.0/24
+AllowedIPs = 10.84.139.0/24
#+END_SRC
The following tasks install WireGuard™, configure it with
-=Secret/front-wg0.conf=, and enable the service.
+[[file:private/front-wg0.conf][=private/front-wg0.conf=]], and enable the service.
#+CAPTION: [[file:roles_t/front/tasks/main.yml][=roles_t/front/tasks/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml
- name: Configure WireGuard™.
become: yes
copy:
- src: ../Secret/front-wg0.conf
+ src: ../private/front-wg0.conf
dest: /etc/wireguard/wg0.conf
mode: u=r,g=,o=
owner: root
group: root
- notify: Reload WireGuard™.
+ notify: Restart WireGuard™.
- name: Enable/Start WireGuard™ on boot.
become: yes
#+CAPTION: [[file:roles_t/front/handlers/main.yml][=roles_t/front/handlers/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/front/handlers/main.yml
-- name: Reload WireGuard™.
+- name: Restart WireGuard™.
become: yes
- command: wg setconf wg0
+ systemd:
+ service: wg-quick@wg0
+ state: restarted
#+END_SRC
** Configure Kamailio
to listen /only/ on Front's public VPN. The private name
~sip.small.private~ resolves to this address for the convenience
of members configuring SIP clients. The server configuration
-specifies the actual IP, known here as ~front_vpn_addr~.
+specifies the actual IP, known here as ~front_wg_addr~.
#+NAME: kamailio
#+CAPTION: ~kamailio~
#+BEGIN_SRC conf
-listen=udp:{{ front_vpn_addr }}:5060
listen=udp:{{ front_wg_addr }}:5060
#+END_SRC
#+END_SRC
Now the configuration drop concerns the network device on which
-Kamailio will be listening, the ~ovpn~ device created by OpenVPN. The
-added configuration settings inform Systemd that Kamailio should not
-be started before the ~ovpn~ device has appeared.
+Kamailio will be listening, the ~wg0~ device created by WireGuard™.
+The added configuration settings inform Systemd that Kamailio should
+not be started before the ~wg0~ device has appeared.
#+CAPTION: [[file:roles_t/front/tasks/main.yml][=roles_t/front/tasks/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/front/tasks/main.yml
path: /etc/systemd/system/kamailio.service.d
state: directory
-- name: Create Kamailio dependence on OpenVPN server.
+- name: Create Kamailio dependence on WireGuard™ interface.
become: yes
copy:
content: |
[Unit]
- Requires=sys-devices-virtual-net-ovpn.device
- After=sys-devices-virtual-net-ovpn.device
+ Requires=sys-devices-virtual-net-wg0.device
+ After=sys-devices-virtual-net-wg0.device
dest: /etc/systemd/system/kamailio.service.d/depend.conf
notify: Reload Systemd.
#+END_SRC
=/etc/netplan/60-core.yaml= file. That file provides Core's address
on the private Ethernet, the campus name server and search domain, and
the default route through Gate to the campus ISP. A second route,
-through Core itself to Front, is advertised to other hosts, but is not
-created here. It is created by OpenVPN when Core connects to Front's
-VPN.
+through Core itself to Front, is advertised to other hosts. (The
+route is created on Core by ~wg-quick~, which routes each peer's
+~AllowedIPs~ through the tunnel.)
Core's Netplan needs the name of its main (only) Ethernet interface,
an example of which is given here. (A clever way to extract that name
option broadcast-address 192.168.56.255;
option routers 192.168.56.2;
option ntp-servers 192.168.56.1;
- option rfc3442-routes 24, 10,177,86, 192,168,56,1,
- 24, 10,177,87, 192,168,56,1,
+ option rfc3442-routes 24, 10,177,87, 192,168,56,1,
0, 192,168,56,2;
}
acl "trusted" {
{{ private_net_cidr }};
{{ wild_net_cidr }};
- {{ public_vpn_net_cidr }};
{{ public_wg_net_cidr }};
- {{ campus_vpn_net_cidr }};
{{ campus_wg_net_cidr }};
localhost;
};
file "/etc/bind/db.private";
};
-zone "{{ public_vpn_net_cidr | ansible.utils.ipaddr('revdns')
+zone "{{ public_wg_net_cidr | ansible.utils.ipaddr('revdns')
| regex_replace('^0\.','') }}" {
type master;
file "/etc/bind/db.public_vpn";
};
-zone "{{ campus_vpn_net_cidr | ansible.utils.ipaddr('revdns')
+zone "{{ campus_wg_net_cidr | ansible.utils.ipaddr('revdns')
| regex_replace('^0\.','') }}" {
type master;
file "/etc/bind/db.campus_vpn";
test IN CNAME core.small.private.
live IN CNAME core.small.private.
ntp IN CNAME core.small.private.
-sip IN A 10.177.86.1
+sip IN A 10.177.87.1
;
core IN A 192.168.56.1
gate IN A 192.168.56.2
notify:
- Restart Postfix.
- Restart Dovecot.
- - Restart OpenVPN.
#+END_SRC
** Install NTP
- p: mynetworks
v: >-
{{ private_net_cidr }}
- {{ public_vpn_net_cidr }}
{{ public_wg_net_cidr }}
- {{ campus_vpn_net_cidr }}
{{ campus_wg_net_cidr }}
127.0.0.0/8
[::ffff:127.0.0.0]/104
#+NAME: postfix-core-relayhost
#+CAPTION: ~postfix-core-relayhost~
#+BEGIN_SRC conf
-- { p: relayhost, v: "[{{ front_vpn_addr }}]" }
+- { p: relayhost, v: "[{{ front_wg_addr }}]" }
#+END_SRC
Core uses a Postfix transport file, =/etc/postfix/transport=, to
** Configure Private Email Aliases
The institute's Core needs to deliver email addressed to institute
-aliases including those advertised on the campus web site, in VPN
+aliases including those advertised on the campus web site, in X.509
certificates, etc. System daemons like ~cron(8)~ may also send email
to e.g. ~monkey~. The following aliases are installed in
=/etc/aliases= with a special marker so that additional blocks can be
set no syslog
#set logfile /home/{{ item }}/.fetchmail.log
-poll {{ front_vpn_addr }} protocol imap timeout 15
+poll {{ front_wg_addr }} protocol imap timeout 15
username {{ item }}
password "{{ members[item].password_fetchmail }}" fetchall
ssl sslproto tls1.2+ sslcertck sslcommonname {{ domain_name }}
[Unit]
Description=Fetchmail --idle task for {{ item }}.
AssertPathExists=/home/{{ item }}/.fetchmailrc
-After=openvpn@front.service
-Wants=sys-devices-virtual-net-ovpn.device
+After=wg-quick@wg0.service
+Wants=sys-devices-virtual-net-wg0.device
[Service]
User={{ item }}
user: monkey
#+END_SRC
-** Configure OpenVPN Connection to Front
-
-Core connects to Front's public VPN to provide members abroad with a
-route to the campus networks. As described in the configuration of
-Front's OpenVPN service, Front expects Core to connect using a client
-certificate with Common Name ~Core~.
-
-Core's OpenVPN client configuration uses the Debian default Systemd
-service unit to keep Core connected to Front. The configuration
-is installed in =/etc/openvpn/front.conf= so the Systemd service is
-called ~openvpn@front~.
-
-#+NAME: openvpn-core
-#+CAPTION: ~openvpn-core~
-#+BEGIN_SRC conf :noweb no-export
-client
-dev-type tun
-dev ovpn
-remote {{ front_addr }}
-nobind
-<<openvpn-drop-priv>>
-<<openvpn-crypt>>
-remote-cert-tls server
-verify-x509-name {{ domain_name }} name
-verb 3
-ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
-cert client.crt
-key client.key
-tls-crypt shared.key
-#+END_SRC
-
-The tasks that install and configure the OpenVPN client configuration
-for Core.
-
-#+CAPTION: [[file:roles_t/core/tasks/main.yml][=roles_t/core/tasks/main.yml=]]
-#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml :noweb no-export
-
-- name: Install OpenVPN.
- become: yes
- apt: pkg=openvpn
-
-- name: Enable IP forwarding.
- become: yes
- sysctl:
- name: net.ipv4.ip_forward
- value: "1"
- state: present
-
-- name: Install OpenVPN secret.
- become: yes
- copy:
- src: ../Secret/front-shared.key
- dest: /etc/openvpn/shared.key
- mode: u=r,g=,o=
- notify: Restart OpenVPN.
-
-- name: Install OpenVPN client certificate/key.
- become: yes
- copy:
- src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
- dest: /etc/openvpn/client.{{ item.typ }}
- mode: "{{ item.mode }}"
- loop:
- - { path: "issued/core", typ: crt, mode: "u=r,g=r,o=r" }
- - { path: "private/core", typ: key, mode: "u=r,g=,o=" }
- notify: Restart OpenVPN.
-
-- name: Configure OpenVPN.
- become: yes
- copy:
- content: |
- <<openvpn-core>>
- dest: /etc/openvpn/front.conf
- mode: u=r,g=r,o=
- notify: Restart OpenVPN.
-
-- name: Enable/Start OpenVPN.
- become: yes
- systemd:
- service: openvpn@front
- state: started
- enabled: yes
-#+END_SRC
-
-#+CAPTION: [[file:roles_t/core/handlers/main.yml][=roles_t/core/handlers/main.yml=]]
-#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
-
-- name: Restart OpenVPN.
- become: yes
- systemd:
- service: openvpn@front
- state: restarted
-#+END_SRC
-
** Configure Core WireGuard™ Interface
Core connects to Front's WireGuard™ service to provide members abroad
-with a route to the campus networks. As described in [[*Configure Public WireGuard™][Configure
-Public WireGuard™]] for Front, Core is expected to forward packets from/to the
+with a route to the campus networks. As described in [[*Configure Public WireGuard™][Configure Public
+WireGuard™]] for Front, Core is expected to forward packets from/to the
private networks.
-The following example [[file:Secret/gate-wg0.conf][=Secret/gate-wg0.conf=]] configuration recognizes
+The following example [[file:private/core-wg0.conf][=private/core-wg0.conf=]] configuration recognizes
Front by its public key, ~S+6HaT~, looking for it at the institute's
public IP address and a special port.
-#+CAPTION: [[file:Secret/core-wg0.conf][=Secret/core-wg0.conf=]]
-#+BEGIN_SRC conf :tangle Secret/core-wg0.conf
+#+CAPTION: [[file:private/core-wg0.conf][=private/core-wg0.conf=]]
+#+BEGIN_SRC conf :tangle private/core-wg0.conf
[Interface]
Address = 10.177.87.2
-PrivateKey = AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI=
+PostUp = wg set %i private-key /etc/wireguard/private-key
# Front
[Peer]
AllowedIPs = 10.177.87.1
AllowedIPs = 10.177.87.0/24
#+END_SRC
+# TODO: Make this a template?
The following tasks install WireGuard™, configure it with
-=Secret/core-wg0.conf=, and enable the service.
+[[file:private/core-wg0.conf][=private/core-wg0.conf=]], and enable the service.
#+CAPTION: [[file:roles_t/core/tasks/main.yml][=roles_t/core/tasks/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/core/tasks/main.yml
- name: Configure WireGuard™.
become: yes
copy:
- src: ../Secret/core-wg0.conf
+ src: ../private/core-wg0.conf
dest: /etc/wireguard/wg0.conf
mode: u=r,g=,o=
owner: root
group: root
- notify: Reload WireGuard™.
+ notify: Restart WireGuard™.
- name: Enable/Start WireGuard™ on boot.
become: yes
#+CAPTION: [[file:roles_t/core/handlers/main.yml][=roles_t/core/handlers/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/core/handlers/main.yml
-- name: Reload WireGuard™.
+- name: Restart WireGuard™.
become: yes
- command: wg setconf wg0
+ systemd:
+ service: wg-quick@wg0
+ state: restarted
#+END_SRC
** Configure NAGIOS
addresses: [ {{ core_addr }} ]
search: [ {{ domain_priv }} ]
routes:
- - to: {{ public_vpn_net_cidr }}
+ - to: {{ public_wg_net_cidr }}
via: {{ core_addr }}
wild:
match:
The default policy settings in =/etc/default/ufw= are ~ACCEPT~ and
~ACCEPT~ for input and output, and ~DROP~ for forwarded packets.
Forwarding was enabled in the kernel previously (when configuring
-OpenVPN) using Ansible's ~sysctl~ module. It does not need to be set
-in =/etc/ufw/sysctl.conf=.
+WireGuard™) using Ansible's ~sysctl~ module. It does not need to be
+set in =/etc/ufw/sysctl.conf=.
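+
+If no remaining task enables kernel forwarding on Gate, a task like
+the following sketch (identical to the one that accompanied the old
+OpenVPN configuration) is assumed to do so:
+
+#+BEGIN_SRC conf
+- name: Enable IP forwarding.
+  become: yes
+  sysctl:
+    name: net.ipv4.ip_forward
+    value: "1"
+    state: present
+#+END_SRC
+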
NAT is enabled per the ~ufw-framework(8)~ manual page, by introducing
~nat~ table rules in a block at the end of =/etc/ufw/before.rules=.
know!
Forwarding rules are also needed to route packets from the campus VPN
-(the ~ovpn~ tunnel device) or WireGuard™ subnet (the ~wg0~ tunnel
-device) to the institute's LAN and back. The public VPN on Front will
-also be included since its packets arrive at Gate's ~lan~ interface,
-coming from Core. Thus forwarding between public and campus VPNs is
-also allowed.
+(the ~wg0~ WireGuard™ tunnel device) to the institute's LAN and back.
+The public VPN on Front will also be included since its packets arrive
+at Gate's ~lan~ interface, coming from Core. Thus forwarding between
+public and campus VPNs is also allowed.
#+NAME: ufw-forward-private
#+CAPTION: ~ufw-forward-private~
#+BEGIN_SRC conf
--A FORWARD -i lan -o ovpn -j ACCEPT
--A FORWARD -i ovpn -o lan -j ACCEPT
-A FORWARD -i lan -o wg0 -j ACCEPT
-A FORWARD -i wg0 -o lan -j ACCEPT
#+END_SRC
Note that there are no forwarding rules to allow packets to pass from
-the ~wild~ device to the ~lan~ device, just the ~ovpn~ device.
+the ~wild~ device to the ~lan~ device, just the ~wg0~ device.
** Install UFW
If physically moved or rebooted for some other reason, the above
command would not be necessary.
-** Install Server Certificate
-
-The (OpenVPN) server on Gate uses an institute certificate (and key)
-to authenticate itself to its clients. It uses the =/etc/server.crt=
-and =/etc/server.key= files just because the other servers (on Core
-and Front) do.
-
-#+CAPTION: [[file:roles_t/gate/tasks/main.yml][=roles_t/gate/tasks/main.yml=]]
-#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml
-
-- name: Install server certificate/key.
- become: yes
- copy:
- src: ../Secret/CA/pki/{{ item.path }}.{{ item.typ }}
- dest: /etc/server.{{ item.typ }}
- mode: "{{ item.mode }}"
- loop:
- - { path: "issued/gate.{{ domain_priv }}", typ: crt,
- mode: "u=r,g=r,o=r" }
- - { path: "private/gate.{{ domain_priv }}", typ: key,
- mode: "u=r,g=,o=" }
- notify: Restart OpenVPN.
-#+END_SRC
-
-** Configure OpenVPN
-
-Gate uses OpenVPN to provide the institute's campus VPN service. Its
-clients are /not/ configured to route /all/ of their traffic through
-the VPN, so Gate pushes routes to the other institute networks. Gate
-itself is on the private Ethernet and thereby learns about the route
-to Front.
-
-#+NAME: openvpn-gate-routes
-#+CAPTION: ~openvpn-gate-routes~
-#+BEGIN_SRC conf
-push "route {{ private_net_and_mask }}"
-push "route {{ public_vpn_net_and_mask }}"
-#+END_SRC
-
-The complete OpenVPN configuration for Gate includes a ~server~
-option, the pushed routes mentioned above, and the common options
-discussed in [[*The VPN Services][The VPN Services]].
-
-#+NAME: openvpn-gate
-#+CAPTION: ~openvpn-gate~
-#+BEGIN_SRC conf :noweb no-export
-server {{ campus_vpn_net_and_mask }}
-client-config-dir /etc/openvpn/ccd
-<<openvpn-gate-routes>>
-<<openvpn-dev-mode>>
-<<openvpn-keepalive>>
-<<openvpn-dns>>
-<<openvpn-drop-priv>>
-<<openvpn-crypt>>
-<<openvpn-max>>
-<<openvpn-debug>>
-ca /usr/local/share/ca-certificates/{{ domain_name }}.crt
-cert /etc/server.crt
-key /etc/server.key
-dh dh2048.pem
-tls-crypt shared.key
-#+END_SRC
-
-Finally, here are the tasks (and handler) required to install and
-configure the OpenVPN server on Gate.
-
-#+CAPTION: [[file:roles_t/gate/tasks/main.yml][=roles_t/gate/tasks/main.yml=]]
-#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml :noweb no-export
-
-- name: Install OpenVPN.
- become: yes
- apt: pkg=openvpn
-
-- name: Enable IP forwarding.
- become: yes
- sysctl:
- name: net.ipv4.ip_forward
- value: "1"
- state: present
-
-- name: Create OpenVPN client configuration directory.
- become: yes
- file:
- path: /etc/openvpn/ccd
- state: directory
- notify: Restart OpenVPN.
-
-- name: Disable former VPN clients.
- become: yes
- copy:
- content: "disable\n"
- dest: /etc/openvpn/ccd/{{ item }}
- loop: "{{ revoked }}"
- notify: Restart OpenVPN.
- tags: accounts
-
-- name: Install OpenVPN secrets.
- become: yes
- copy:
- src: ../Secret/{{ item.src }}
- dest: /etc/openvpn/{{ item.dest }}
- mode: u=r,g=,o=
- loop:
- - { src: gate-dh2048.pem, dest: dh2048.pem }
- - { src: gate-shared.key, dest: shared.key }
- notify: Restart OpenVPN.
-
-- name: Configure OpenVPN.
- become: yes
- copy:
- content: |
- <<openvpn-gate>>
- dest: /etc/openvpn/server.conf
- mode: u=r,g=r,o=
- notify: Restart OpenVPN.
-#+END_SRC
-
-#+CAPTION: [[file:roles_t/gate/handlers/main.yml][=roles_t/gate/handlers/main.yml=]]
-#+BEGIN_SRC conf :tangle roles_t/gate/handlers/main.yml
-
-- name: Restart OpenVPN.
- become: yes
- systemd:
- service: openvpn@server
- state: restarted
-#+END_SRC
-
** Configure Campus WireGuard™
Gate uses WireGuard™ to provide a campus VPN service. Gate's routes
additional route Gate needs is to the public VPN via Core. The rest
(private Ethernet and campus VPN) are directly connected.
-The following example [[file:Secret/gate-wg0.conf][=Secret/gate-wg0.conf=]] configuration recognizes
-a wired IoT appliance (public key ~LdsCsg~) and a member client,
-Dick's Notebook (public key ~4qd4xd~), assigning them the host numbers
-3 and 4 respectively. (Dick's Notebook's host number is /not
-coincidentally/ 4 here as well as on Front's WireGuard™ subnet.)
+The following example [[file:private/gate-wg0.conf][=private/gate-wg0.conf=]] configuration recognizes
+a wired IoT appliance, Dick's notebook and his replacement phone,
+assigning them the host numbers 3, 4 and 6 respectively.
-#+CAPTION: [[file:Secret/gate-wg0.conf][=Secret/gate-wg0.conf=]]
-#+BEGIN_SRC conf :tangle Secret/gate-wg0.conf
+#+CAPTION: =private/gate-wg0.conf=
+#+BEGIN_SRC conf
[Interface]
Address = 10.84.139.1/24
-PrivateKey = yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U=
ListenPort = 51820
+PostUp = wg set %i private-key /etc/wireguard/private-key
-# IoT appliance
+# thing
[Peer]
PublicKey = LdsCsgfjKCfd5+VKS+Q/dQhWO8NRNygByDO2VxbXlSQ=
AllowedIPs = 10.84.139.3
-# dicks-note
+# dick
[Peer]
PublicKey = 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
AllowedIPs = 10.84.139.4
+
+# dicks-razr
+[Peer]
+PublicKey = zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
+AllowedIPs = 10.84.139.6
#+END_SRC
-The configuration used on the IoT appliance looks like this:
+The configuration used on ~thing~, the IoT appliance, looks like this:
#+CAPTION: WireGuard™ tunnel on an IoT appliance
#+BEGIN_SRC conf
AllowedIPs = 10.84.139.1
AllowedIPs = 10.84.139.0/24
AllowedIPs = 192.168.56.0/24
-AllowedIPs = 10.177.86.0/24
AllowedIPs = 10.177.87.0/24
-AllowedIPs = 10.84.138.0/24
#+END_SRC
And the configuration used on Dick's notebook when it is on campus
#+BEGIN_SRC conf
[Interface]
-Address = 10.84.139.3
+Address = 10.84.139.4
-PrivateKey = WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs=
+PostUp = wg set %i private-key /etc/wireguard/private-key
-PostUp = resolvectl dns wg0 192.168.56.1
-PostUp = resolvectl domain wg0 small.private
+PostUp = resolvectl dns %i 192.168.56.1
+PostUp = resolvectl domain %i small.private
AllowedIPs = 10.84.139.1
AllowedIPs = 10.84.139.0/24
AllowedIPs = 192.168.56.0/24
-AllowedIPs = 10.177.86.0/24
AllowedIPs = 10.177.87.0/24
-AllowedIPs = 10.84.138.0/24
#+END_SRC
The following tasks install WireGuard™, configure it with
-=Secret/gate-wg0.conf=, and enable the service.
+[[file:private/gate-wg0.conf][=private/gate-wg0.conf=]], and enable the service.
#+CAPTION: [[file:roles_t/gate/tasks/main.yml][=roles_t/gate/tasks/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/gate/tasks/main.yml
- name: Configure WireGuard™.
become: yes
copy:
- src: ../Secret/gate-wg0.conf
+ src: ../private/gate-wg0.conf
dest: /etc/wireguard/wg0.conf
mode: u=r,g=,o=
owner: root
group: root
- notify: Reload WireGuard™.
+ notify: Restart WireGuard™.
- name: Enable/Start WireGuard™ on boot.
become: yes
#+CAPTION: [[file:roles_t/gate/handlers/main.yml][=roles_t/gate/handlers/main.yml=]]
#+BEGIN_SRC conf :tangle roles_t/gate/handlers/main.yml
-- name: Reload WireGuard™.
+- name: Restart WireGuard™.
become: yes
- command: wg setconf wg0
+ systemd:
+ service: wg-quick@wg0
+ state: restarted
#+END_SRC
certificate authority, and deliver email addressed to ~root~ to the
system administrator's account on Core.
-Wireless campus devices can get a key to the campus VPN from the
-~./inst client campus~ command, but their OpenVPN client must be
-configured manually.
+Wireless campus devices register their public keys using the ~./inst
+client~ command, which updates the WireGuard™ configuration on Gate.
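+
+For example, registering the wired IoT appliance ~thing~ might look
+like this:
+
+: $ ./inst client campus thing \
+:     LdsCsgfjKCfd5+VKS+Q/dQhWO8NRNygByDO2VxbXlSQ=
+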
** Include Particulars
mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null";
-our ($domain_name, $domain_priv, $front_addr, $gate_wild_addr);
+our ($domain_name, $domain_priv, $private_net_cidr,
+ $front_addr, $front_wg_pubkey,
+ $public_wg_net_cidr, $public_wg_port,
+ $gate_wild_addr, $gate_wg_pubkey,
+ $campus_wg_net_cidr, $campus_wg_port,
+ $core_addr, $core_wg_pubkey);
do "./private/vars.pl";
#+END_SRC
content: |
$domain_name = "{{ domain_name }}";
$domain_priv = "{{ domain_priv }}";
+ $private_net_cidr = "{{ private_net_cidr }}";
+
$front_addr = "{{ front_addr }}";
+ $front_wg_pubkey = "{{ front_wg_pubkey }}";
+
+ $public_wg_net_cidr = "{{ public_wg_net_cidr }}";
+
+ $public_wg_port = "{{ public_wg_port }}";
+
$gate_wild_addr = "{{ gate_wild_addr }}";
+ $gate_wg_pubkey = "{{ gate_wg_pubkey }}";
+
+ $campus_wg_net_cidr = "{{ campus_wg_net_cidr }}";
+ $campus_wg_port = "{{ campus_wg_port }}";
+
+ $core_addr = "{{ core_addr }}";
+ $core_wg_pubkey = "{{ core_wg_pubkey }}";
dest: ../private/vars.pl
mode: u=rw,g=,o=
#+END_SRC
+Most of these settings are already in =private/vars.yml=. The
+following few provide the servers' public keys and ports.
+
+#+CAPTION: [[file:private/vars.yml][=private/vars.yml=]]
+#+BEGIN_SRC conf :tangle private/vars.yml
+front_wg_pubkey: S+6HaTnOwwhWgUGXjSBcPAvifKw+j8BDTRfq534gNW4=
+public_wg_port: 39608
+
+gate_wg_pubkey: y3cjFnvQbylmH4lGTujpqc8rusIElmJ4Gu9hh6iR7QI=
+campus_wg_port: 51820
+
+core_wg_pubkey: lGhC51IBgZtlq4H2bsYFuKvPtV0VAEwUvVIn5fW7D0c=
+#+END_SRC
+
+All of the private keys used in the example/test configuration are
+listed in the following table. The first three are copied to
+=/etc/wireguard/private-key= on each of the corresponding test
+machines: ~front~, ~gate~ and ~core~. The rest are installed on
+the test client to give it different personae.
+
+| Test Host     | WireGuard™ Private Key                       |
+|---------------+----------------------------------------------|
+| ~front~ | AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms= |
+| ~gate~ | yOBdLbXh6KBwYQvvb5mhiku8Fxkqc5Cdyz6gNgjc/2U= |
+| ~core~ | AI+KhwnsHzSPqyIyAObx7EBBTBXFZPiXb2/Qcts8zEI= |
+| ~thing~ | KIwQT5eGOl9w1qOa5I+2xx5kJH3z4xdpmirS/eGdsXY= |
+| ~dick~ | WAhrlGccPf/BaFS5bRtBE4hEyt3kDxCavmwZfVTsfGs= |
+| ~dicks-phone~ | oG/Kou9HOBCBwHAZGypPA1cZWUL6nR6WoxBiXc/OQWQ= |
+| ~dicks-razr~ | IGNcF0VpkIBcJQAcLZ9jgRmk0SYyUr/WwSNXZoXXUWQ= |
+
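+A key is copied onto a test machine with a command like the
+following, run on ~front~ for example:
+
+: $ echo AJkzVxfTm/KvRjzTN/9X2jYy+CAugiwZfN5F3JTegms= \
+:     | sudo install -m 600 /dev/stdin /etc/wireguard/private-key
+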
** The CA Command
The next code block implements the ~CA~ sub-command, which creates a
my $dom = $domain_name;
my $pvt = $domain_priv;
mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass";
- mysystem "cd Secret/CA; ./easyrsa build-server-full gate.$pvt nopass";
mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass";
- mysystem "cd Secret/CA; ./easyrsa build-client-full core nopass";
umask 077;
- mysystem "openvpn --genkey secret Secret/front-shared.key";
- mysystem "openvpn --genkey secret Secret/gate-shared.key";
- mysystem "openssl dhparam -out Secret/front-dh2048.pem 2048";
- mysystem "openssl dhparam -out Secret/gate-dh2048.pem 2048";
mysystem "mkdir --mode=700 Secret/root.gnupg";
mysystem ("gpg --homedir Secret/root.gnupg",
key value ~current~. That key gets value ~former~ when the member
leaves.[fn:3] Access by former members is revoked by invalidating the
Unix account passwords, removing any authorized SSH keys from Front
-and Core, and disabling their VPN certificates.
+and Core, and removing their public keys from the WireGuard™
+configurations.
The example file (below) contains a membership roll with one
-membership record, for an account named ~dick~, which was issued
-client certificates for devices named ~dick-note~, ~dick-phone~ and
-~dick-razr~. ~dick-phone~ appears to be lost because its certificate
-was revoked. Dick's membership record includes a vault-encrypted
-password (for Fetchmail) and the two password hashes installed on
-Front and Core. (The example hashes are truncated versions.)
+membership record, for an account named ~dick~, which registered the
+public keys of devices named ~dick~, ~dicks-phone~ and ~dicks-razr~.
+~dicks-razr~ is presumably a replacement for ~dicks-phone~, which was
+lost and whose key was invalidated (marked with a leading ~-~, which
+the configuration generator skips). Lastly, Dick's membership record
+includes a vault-encrypted password (for Fetchmail) and the two
+password hashes installed on Front and Core. (The example hashes are
+truncated versions.)
#+CAPTION: =private/members.yml=
#+BEGIN_SRC conf
dick:
status: current
clients:
- - dick-note
- - dick-phone
- - dick-razr
+  - dick 4 debian 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
+  - dicks-phone 5 android --WFbTSff17QiYObXoU+7mjaEUCqKjgvLqA49pAxqVeWg=
+  - dicks-razr 6 android zho0qMxoLclJSQu4GeJEcMkk0hx4Q047OcNc8vOejVw=
password_front:
$6$17h49U76$c7TsH6eMVmoKElNANJU1F1LrRrqzYVDreNu.QarpCoSt9u0gTHgiQ
password_core:
6535633263656434393030333032343533626235653332626330666166613833
usernames:
- dick
-revoked:
-- dick-phone
+clients:
+- thing 3 campus LdsCsgfjKCfd5+VKS+Q/dQhWO8NRNygByDO2VxbXlSQ=
#+END_SRC
The test campus starts with the empty membership roll found in
---
members:
usernames: []
-revoked: []
+clients: []
#+END_SRC
Both locations go on the ~membership_rolls~ variable used by the
if (keys %{$yaml->{"members"}}) {
print $O "members:\n";
for my $user (sort keys %{$yaml->{"members"}}) {
- print_member ($O, $yaml->{"members"}->{$user});
+ print_member ($O, $user, $yaml->{"members"}->{$user});
}
print $O "usernames:\n";
for my $user (sort keys %{$yaml->{"members"}}) {
print $O "members:\n";
print $O "usernames: []\n";
}
- if (@{$yaml->{"revoked"}}) {
- print $O "revoked:\n";
- for my $name (@{$yaml->{"revoked"}}) {
+ if (@{$yaml->{"clients"}}) {
+ print $O "clients:\n";
+ for my $name (@{$yaml->{"clients"}}) {
print $O "- $name\n";
}
} else {
- print $O "revoked: []\n";
+ print $O "clients: []\n";
}
close $O or die "Could not close $pathname: $!\n";
}
#+CAPTION: [[file:inst][=inst=]]
#+BEGIN_SRC perl :tangle inst
-sub print_member ($$) {
- my ($out, $member) = @_;
- print $out " ", $member->{"username"}, ":\n";
- print $out " username: ", $member->{"username"}, "\n";
+sub print_member ($$$) {
+ my ($out, $username, $member) = @_;
+ print $out " ", $username, ":\n";
print $out " status: ", $member->{"status"}, "\n";
if (@{$member->{"clients"} || []}) {
print $out " clients:\n";
print $out " $line\n";
}
}
- my @standard_keys = ( "username", "status", "clients",
+ my @standard_keys = ( "status", "clients",
"password_front", "password_core",
"password_fetchmail" );
my @other_keys = (sort
mysystem ("ansible-playbook -e \@Secret/become.yml",
" playbooks/nextcloud-new.yml",
" -e user=$user", " -e pass=\"$epass\"");
- $members->{$user} = { "username" => $user,
- "status" => "current",
+ $members->{$user} = { "status" => "current",
"password_front" => $front,
"password_core" => $core,
"password_fetchmail" => $vault };
- write_members_yaml
- { "members" => $members,
- "revoked" => $yaml->{"revoked"} };
+ write_members_yaml $yaml;
mysystem ("ansible-playbook -e \@Secret/become.yml",
" -t accounts -l core,front playbooks/site.yml");
exit;
** The Old Command
-The ~old~ command disables a member's accounts and clients.
+The ~old~ command disables a member's account (and thus their clients).
#+CAPTION: [[file:inst][=inst=]]
#+BEGIN_SRC perl :tangle inst
mysystem ("ansible-playbook -e \@Secret/become.yml",
"playbooks/nextcloud-old.yml -e user=$user");
$member->{"status"} = "former";
- write_members_yaml { "members" => $members,
- "revoked" => [ sort @{$member->{"clients"}},
- @{$yaml->{"revoked"}} ] };
+ write_members_yaml $yaml;
mysystem ("ansible-playbook -e \@Secret/become.yml",
"-t accounts playbooks/site.yml");
exit;
** The Client Command
-The ~client~ command creates an OpenVPN configuration (=.ovpn=) file
-authorizing wireless devices to connect to the institute's VPNs. The
-command uses the EasyRSA CA in [[file:Secret/][=Secret/=]]. The generated configuration
-is slightly different depending on the type of host, given as the
-first argument to the command.
-
-- ~./inst client android NEW USER~ \\
- An ~android~ host runs OpenVPN for Android or work-alike. Two files
- are generated. =campus.ovpn= configures a campus VPN connection,
- and =public.ovpn= configures a connection to the institute's public
- VPN.
-
-- ~./inst client debian NEW USER~ \\
- A ~debian~ host runs a Debian desktop with Network Manager. Again
- two files are generated, for the campus and public VPNs.
-
-- ~./inst client campus NEW~ \\
- A ~campus~ host is a Debian host (with or without desktop) that is
- used by the institute generally, is /not/ the property of a member,
- never roams off campus, and so is remotely administered with
- Ansible. One file is generated, =campus.ovpn=.
-
-The administrator uses encrypted email to send =.ovpn= files to new
-members. New members install the ~network-manager-openvpn-gnome~ and
-~openvpn-systemd-resolved~ packages, and import the =.ovpn= files into
-Network Manager on their desktops. The =.ovpn= files for an
-Android device are transferred by USB stick and should automatically
-install when "opened". On campus hosts, the system administrator
-copies the =campus.ovpn= file to =/etc/openvpn/campus.conf=.
-
-The OpenVPN configurations generated for Debian hosts specify an ~up~
-script, ~update-systemd-resolved~, installed in =/etc/openvpn/= by the
-~openvpn-systemd-resolved~ package. The following configuration lines
-instruct the OpenVPN clients to run this script whenever the
-connection is restarted.
-
-#+NAME: openvpn-up
-#+CAPTION: ~openvpn-up~
-#+BEGIN_SRC conf
-script-security 2
-up /etc/openvpn/update-systemd-resolved
-up-restart
-#+END_SRC
+The ~client~ command registers the public key of a client wishing to
+connect to the institute's WireGuard™ subnets. The command allocates
+a host number, associates it with the provided public key, and updates
+the configuration files =front-wg0.conf= and =gate-wg0.conf=. These
+are distributed to the servers, which are then restarted. Thereafter
+servers recognize the new peer (and drop packets from any "peer" that
+is no longer authorized).
+
+The ~client~ command also generates template WireGuard™ configuration
+files for the client. They contain the necessary parameters /except/
+the client's ~PrivateKey~, which in most cases should be found in the
+local =/etc/wireguard/private-key=, /not/ in the configuration files.
+Private keys (and corresponding public keys) should be generated on
+the client (e.g. by the WireGuard for Android™ app) and never revealed
+(e.g. sent in email, copied to a network drive, etc.).
+
+The generated configurations vary depending on the type of client,
+which must be given as the first argument to the command. For most
+types, two configuration files are generated. =campus.conf= contains
+the client's campus VPN configuration, and =public.conf= the client's
+public VPN configuration.
+
+- ~./inst client android NAME USER PUBKEY~ \\
+ An ~android~ client runs WireGuard for Android™ or work-alike.
+
+- ~./inst client debian NAME USER PUBKEY~ \\
+ A ~debian~ client runs a Debian/Linux desktop with Network Manager
+ (though ~wg-quick~ is currently used).
+
+- ~./inst client campus NAME PUBKEY~ \\
+ A ~campus~ client is an institute machine (with or without desktop)
+ that is used by the institute generally, is /not/ the property of a
+ member, never roams off campus, and so is remotely administered with
+ Ansible. Just one configuration file is generated: =campus.conf=.
+
+The administrator emails the template =.conf= files to new members.
+(They contain no secrets.) The members will have already installed
+the ~wireguard~ package in order to run the ~wg genkey~ and ~wg
+pubkey~ commands.  After receiving the =.conf= templates, they
+install them in e.g. =/etc/wireguard/wg0.conf= and =wg1.conf=; the
+matching private key stays in =/etc/wireguard/private-key= (or is
+pasted into the WireGuard for Android™ app).  To connect, members
+run a command like ~systemctl start wg-quick@wg0~. (There may be
+better support in Network Manager soon.)
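+
+For example, a new member might prepare a key pair as follows,
+sending only the printed public key to the administrator:
+
+#+BEGIN_SRC sh
+umask 077
+wg genkey | tee private-key | wg pubkey
+sudo install -o root -m 600 private-key /etc/wireguard/private-key
+rm private-key
+#+END_SRC
+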
#+CAPTION: [[file:inst][=inst=]]
#+BEGIN_SRC perl :tangle inst :noweb no-export
-sub write_template ($$$$$$$$$);
-sub read_file ($);
-sub add_client ($$$);
+sub write_wg_server ($$$$$);
+sub write_wg_client ($$$$$$);
+sub hostnum_to_ipaddr ($$);
+sub hostnum_to_ipaddr_cidr ($$);
if (defined $ARGV[0] && $ARGV[0] eq "client") {
- die "Secret/CA/easyrsa: not found\n" if ! -x "Secret/CA/easyrsa";
my $type = $ARGV[1]||"";
my $name = $ARGV[2]||"";
my $user = $ARGV[3]||"";
- if ($type eq "campus") {
- die "usage: $0 client campus NAME\n" if @ARGV != 3;
+ my $pubkey = $ARGV[4]||"";
+ if ($type eq "android" || $type eq "debian") {
+ die "usage: $0 client $type NAME USER PUBKEY\n" if @ARGV != 5;
die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
- } elsif ($type eq "android" || $type eq "debian") {
- die "usage: $0 client $type NAME USER\n" if @ARGV != 4;
+ } elsif ($type eq "campus") {
+ die "usage: $0 client campus NAME PUBKEY\n" if @ARGV != 4;
die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
+ $pubkey = $user;
+ $user = "";
} else {
die "usage: $0 client [debian|android|campus]\n";
}
my $yaml;
- my $member;
- if ($type ne "campus") {
- $yaml = read_members_yaml;
- my $members = $yaml->{"members"};
- if (@ARGV == 4) {
- $member = $members->{$user};
- die "$user: does not exist\n" if ! defined $member;
- }
- if (defined $member) {
- my ($owner) = grep { grep { $_ eq $name } @{$_->{"clients"}} }
- values %{$members};
- die "$name: owned by $owner->{username}\n"
- if defined $owner && $owner->{username} ne $member->{username};
- }
- }
+ $yaml = read_members_yaml;
+ my $members = $yaml->{"members"};
+ my $member = $members->{$user};
+ die "$user: does not exist\n"
+ if !defined $member && $type ne "campus";
- die "Secret/CA: no certificate authority found"
- if ! -d "Secret/CA/pki/issued";
+ my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
+ = map { [ (split / /), "" ] } @{$yaml->{"clients"}};
- if (! -f "Secret/CA/pki/issued/$name.crt") {
- mysystem "cd Secret/CA; ./easyrsa build-client-full $name nopass";
- } else {
- print "Using existing key/cert...\n";
+ my @member_peers = ();
+ for my $u (sort keys %$members) {
+ push @member_peers,
+ map { [ (split / /), $u ] } @{$members->{$u}->{"clients"}};
}
- if ($type ne "campus") {
- my $clients = $member->{"clients"};
- if (! grep { $_ eq $name } @$clients) {
- $member->{"clients"} = [ $name, @$clients ];
- write_members_yaml $yaml;
- }
+ my @all_peers = sort { $a->[1] <=> $b->[1] }
+ (@campus_peers, @member_peers);
+
+ for my $p (@all_peers) {
+ my ($n, $h, $t, $k, $u) = @$p;
+ die "$n: name already in use by $u\n"
+ if $name eq $n && $u ne "";
+ die "$n: name already in use on campus\n"
+ if $name eq $n && $u eq "";
}
+ my $hostnum = (@all_peers
+ ? 1 + $all_peers[$#all_peers][1]
+ : 3);
+
+ push @{$type eq "campus"
+ ? $yaml->{"clients"}
+ : $member->{"clients"}},
+ "$name $hostnum $type $pubkey";
+
umask 077;
- my $DEV = $type eq "android" ? "tun" : "ovpn";
- my $CA = read_file "Secret/CA/pki/ca.crt";
- my $CRT = read_file "Secret/CA/pki/issued/$name.crt";
- my $KEY = read_file "Secret/CA/pki/private/$name.key";
- my $UP = $type eq "android" ? "" : "
-<<openvpn-up>>";
-
- if ($type ne "campus") {
- my $TC = read_file "Secret/front-shared.key";
- write_template ($DEV,$UP,$CA,$CRT,$KEY,$TC, $front_addr,
- $domain_name, "public.ovpn");
- print "Wrote public VPN configuration to public.ovpn.\n";
+ write_members_yaml $yaml;
+
+ if ($type eq "campus") {
+ push @all_peers, [ $name, $hostnum, $type, $pubkey, "" ];
+ } else {
+ push @member_peers, [ $name, $hostnum, $type, $pubkey, $user ];
+ push @all_peers, [ $name, $hostnum, $type, $pubkey, $user ];
}
- my $TC = read_file "Secret/gate-shared.key";
- write_template ($DEV,$UP,$CA,$CRT,$KEY,$TC, $gate_wild_addr,
- "gate.$domain_priv", "campus.ovpn");
- print "Wrote campus VPN configuration to campus.ovpn.\n";
- exit;
+ my $core_wg_addr = hostnum_to_ipaddr (2, $public_wg_net_cidr);
+ my $extra_front_config = "
+PostUp = resolvectl dns %i $core_addr
+PostUp = resolvectl domain %i $domain_priv
+
+# Core
+[Peer]
+PublicKey = $core_wg_pubkey
+AllowedIPs = $core_wg_addr
+AllowedIPs = $private_net_cidr
+AllowedIPs = $campus_wg_net_cidr\n";
+
+ write_wg_server ("private/front-wg0.conf", \@member_peers,
+ hostnum_to_ipaddr_cidr (1, $public_wg_net_cidr),
+ $public_wg_port, $extra_front_config)
+ if $type ne "campus";
+ write_wg_server ("private/gate-wg0.conf", \@all_peers,
+ hostnum_to_ipaddr_cidr (1, $campus_wg_net_cidr),
+ $campus_wg_port, "\n");
+
+ write_wg_client ("public.conf",
+ hostnum_to_ipaddr ($hostnum, $public_wg_net_cidr),
+ $type,
+ $front_wg_pubkey,
+ "$front_addr:$public_wg_port",
+ hostnum_to_ipaddr (1, $public_wg_net_cidr))
+ if $type ne "campus";
+ write_wg_client ("campus.conf",
+ hostnum_to_ipaddr ($hostnum, $campus_wg_net_cidr),
+ $type,
+ $gate_wg_pubkey,
+ "$gate_wild_addr:$campus_wg_port",
+                   hostnum_to_ipaddr (1, $campus_wg_net_cidr));
+  exit;
}
-sub write_template ($$$$$$$$$) {
- my ($DEV,$UP,$CA,$CRT,$KEY,$TC,$ADDR,$NAME,$FILE) = @_;
+sub write_wg_server ($$$$$) {
+ my ($file, $peers, $addr_cidr, $port, $extra) = @_;
my $O = new IO::File;
- open ($O, ">$FILE.tmp") or die "Could not open $FILE.tmp: $!\n";
- print $O "client
-dev-type tun
-dev $DEV
-remote $ADDR
-nobind
-<<openvpn-drop-priv>>
-remote-cert-tls server
-verify-x509-name $NAME name
-<<openvpn-crypt>>$UP
-verb 3
-key-direction 1
-<ca>\n$CA</ca>
-<cert>\n$CRT</cert>
-<key>\n$KEY</key>
-<tls-crypt>\n$TC</tls-crypt>\n";
- close $O or die "Could not close $FILE.tmp: $!\n";
- rename ("$FILE.tmp", $FILE)
- or die "Could not rename $FILE.tmp: $!\n";
+ open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
+ print $O "[Interface]
+Address = $addr_cidr
+ListenPort = $port
+PostUp = wg set %i private-key /etc/wireguard/private-key$extra";
+ for my $p (@$peers) {
+ my ($n, $h, $t, $k, $u) = @$p;
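+    # Skip peers whose public key has been disabled with a leading "-".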
+ next if $k =~ /^-/;
+ my $ip = hostnum_to_ipaddr ($h, $addr_cidr);
+ print $O "
+# $n
+[Peer]
+PublicKey = $k
+AllowedIPs = $ip\n";
+ }
+ close $O or die "Could not close $file.tmp: $!\n";
+ rename ("$file.tmp", $file)
+ or die "Could not rename $file.tmp: $!\n";
}
-sub read_file ($) {
- my ($path) = @_;
- my $I = new IO::File;
- open ($I, "<$path") or die "$path: could not read: $!\n";
- local $/;
- my $c = <$I>;
- close $I or die "$path: could not close: $!\n";
- return $c;
+sub write_wg_client ($$$$$$) {
+ my ($file, $addr, $type, $pubkey, $endpt, $server_addr) = @_;
+ my $O = new IO::File;
+ my $DNS = ($type eq "android"
+ ? "
+DNS = $core_addr, $domain_priv"
+ : "
+PostUp = resolvectl dns %i $core_addr
+PostUp = resolvectl domain %i $domain_priv");
+ open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
+ print $O "[Interface]
+Address = $addr
+PostUp = wg set %i private-key /etc/wireguard/private-key$DNS
+
+[Peer]
+PublicKey = $pubkey
+Endpoint = $endpt
+AllowedIPs = $server_addr
+AllowedIPs = $private_net_cidr
+AllowedIPs = $public_wg_net_cidr
+AllowedIPs = $campus_wg_net_cidr\n";
+ close $O or die "Could not close $file.tmp: $!\n";
+ rename ("$file.tmp", $file)
+ or die "Could not rename $file.tmp: $!\n";
+}
+
+sub hostnum_to_ipaddr ($$)
+{
+ my ($hostnum, $net_cidr) = @_;
+
+ # Assume 24bit subnet, 8bit hostnum.
+ # Find a Perl library for more generality?
+ die "$hostnum: hostnum too large\n" if $hostnum > 255;
+ my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
+  die "$net_cidr: not a /24 network\n" if !$prefix;
+ return "$prefix.$hostnum";
+}
+
+sub hostnum_to_ipaddr_cidr ($$)
+{
+ my ($hostnum, $net_cidr) = @_;
+
+ # Assume 24bit subnet, 8bit hostnum.
+ # Find a Perl library for more generality?
+ die "$hostnum: hostnum too large\n" if $hostnum > 255;
+ my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
+  die "$net_cidr: not a /24 network\n" if !$prefix;
+ return "$prefix.$hostnum/24";
}
#+END_SRC
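+
+The address helpers assume /24 networks and 8-bit host numbers. For
+example, with a hypothetical campus WireGuard™ subnet of
+=10.84.139.0/24=:
+
+#+BEGIN_SRC perl
+hostnum_to_ipaddr (2, "10.84.139.0/24");      # "10.84.139.2"
+hostnum_to_ipaddr_cidr (1, "10.84.139.0/24"); # "10.84.139.1/24"
+#+END_SRC
+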
#+BEGIN_SRC sh
sudo apt install netplan.io systemd-resolved unattended-upgrades \
- ufw isc-dhcp-server postfix openvpn
+ ufw isc-dhcp-server postfix wireguard
#+END_SRC
Again, the Postfix installation prompts for a couple settings. The
#+BEGIN_SRC sh
sudo apt install netplan.io systemd-resolved unattended-upgrades \
- ntp isc-dhcp-server bind9 apache2 openvpn \
+ ntp isc-dhcp-server bind9 apache2 wireguard \
postfix dovecot-imapd fetchmail expect rsync \
gnupg
sudo apt install mariadb-server php php-{apcu,bcmath,curl,gd,gmp}\
exercise the test Nextcloud.
The process starts with enrolling the first member of the institute
-using the ~./inst new~ command and issuing client VPN keys with the
-~./inst client~ command.
+using the ~./inst new~ command and registering a client's public key
+with the ~./inst client~ command.
** Test New Command
A test member's notebook is created next, much like the servers,
except with memory and disk space doubled to 2GiB and 8GiB, and a
desktop. This machine is not configured by Ansible. Rather, its
-desktop VPN client and web browser test the OpenVPN configurations on
+WireGuard™ tunnels and web browser test the VPN configurations on
~gate~ and ~front~, and the Nextcloud installation on ~core~.
#+BEGIN_SRC sh
require several more).
#+BEGIN_SRC
-sudo apt install network-manager-openvpn-gnome \
- openvpn-systemd-resolved \
- nextcloud-desktop evolution
+sudo apt install wireguard nextcloud-desktop evolution
#+END_SRC
** Test Client Command
-The ~./inst client~ command is used to issue keys for the institute's
-VPNs. The following command generates two =.ovpn= (OpenVPN
-configuration) files, =small.ovpn= and =campus.ovpn=, authorizing
-access by the holder, identified as ~dick~, owned by member ~dick~, to
-the test VPNs.
+The ~./inst client~ command registers the public key of a client
+wishing to connect to the institute's VPNs. In this test, new member
+Dick wants to connect his notebook, ~dick~, to the institute VPNs.
+First he generates a WireGuard™ key pair by running the following
+commands on the notebook.
#+BEGIN_SRC sh
-./inst client debian dick dick
+( umask 077; wg genkey >private)
+wg pubkey <private >public
#+END_SRC
-** Test Campus VPN
+The administrator uses the key in =public= to run the following
+command, which generates the =campus.conf= and =public.conf= files.
-The =campus.ovpn= OpenVPN configuration file (generated in [[*Test Client Command][Test Client
-Command]]) is transferred to ~dick~, which is at the Wi-Fi access
-point's ~wifi_wan_addr~.
+#+BEGIN_SRC sh
+./inst client debian dick dick \
+ 4qd4xdRztZBKhFrX9jI/b4fnMzpKQ5qhg691hwYSsX8=
+#+END_SRC
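+
+The generated =campus.conf= will resemble the following sketch. The
+WireGuard™ subnets (=10.84.138.0/24= public, =10.84.139.0/24=
+campus), the listen port, and Gate's wild address are hypothetical
+stand-ins for the actual values in =private/vars.pl=.
+
+#+BEGIN_SRC conf
+[Interface]
+Address = 10.84.139.3
+PostUp = wg set %i private-key /etc/wireguard/private-key
+PostUp = resolvectl dns %i 192.168.56.1
+PostUp = resolvectl domain %i small.private
+
+[Peer]
+PublicKey = <gate's public key>
+Endpoint = 192.168.57.1:51820
+AllowedIPs = 10.84.139.1
+AllowedIPs = 192.168.56.0/24
+AllowedIPs = 10.84.138.0/24
+AllowedIPs = 10.84.139.0/24
+#+END_SRC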
+
+** Test Campus WireGuard™ Subnet
+
+The =campus.conf= WireGuard™ configuration file (generated in [[*Test Client Command][Test
+Client Command]]) is transferred to ~dick~, which is at the Wi-Fi access
+point's IP address, host 2 on the wild Ethernet.
#+BEGIN_SRC sh
-scp *.ovpn sysadm@192.168.57.2:
+scp *.conf sysadm@192.168.57.2:
#+END_SRC
-The file is installed using the Network tab of the desktop Settings
-app. The administrator uses the "+" button, chooses "Import from
-file..." and the =campus.ovpn= file. /Importantly/ the administrator
-checks the "Use this connection only for resources on its network"
-checkbox in the IPv4 tab of the Add VPN dialog. The admin does the
-same with the =small.ovpn= file, for use on the simulated Internet.
+Dick then installs his private key (the =private= file generated
+above) as =/etc/wireguard/private-key=, where the configurations'
+~PostUp~ commands expect it, and installs =campus.conf= as
+=/etc/wireguard/wg0.conf= and =public.conf= as
+=/etc/wireguard/wg1.conf=.
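+
+A minimal sketch of the installation, assuming the key and
+configuration files are in the current directory:
+
+#+BEGIN_SRC sh
+sudo install -m 600 private /etc/wireguard/private-key
+sudo install -m 600 campus.conf /etc/wireguard/wg0.conf
+sudo install -m 600 public.conf /etc/wireguard/wg1.conf
+#+END_SRC
+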
-The administrator turns on the campus VPN on ~dick~ (which connects
-instantly) and does a few basic tests in a terminal.
+To connect to the campus VPN, the administrator runs the following
+command.
+
+#+BEGIN_SRC sh
+systemctl start wg-quick@wg0
+#+END_SRC
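+
+The connection can be verified with the ~wg~ tool; the peer should
+show a recent handshake once traffic has flowed.
+
+#+BEGIN_SRC sh
+sudo wg show wg0
+#+END_SRC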
+
+A few basic tests are then performed in a terminal.
#+BEGIN_SRC sh
systemctl status
-ping -c 1 8.8.4.4 # dns.google
+ping -c 1 8.8.8.8 # dns.google
ping -c 1 192.168.56.1 # core
host dns.google
host core.small.private
VBoxManage modifyvm dick --nic1 natnetwork --natnetwork1 premises
#+END_SRC
-The administrator might wait to see evidence of the change in
-networks. Evolution may start "Testing reachability of mail account
-dick@small.example.org." Eventually, the ~campus~ VPN should
-disconnect. After it does, the administrator turns on the ~small~
-VPN, which connects in a second or two. Again, some basics are
-tested in a terminal.
+Then the campus VPN is disconnected and the public VPN connected.
+
+#+BEGIN_SRC sh
+systemctl stop wg-quick@wg0
+systemctl start wg-quick@wg1
+#+END_SRC
+
+Again, some basics are tested in a terminal.
#+BEGIN_SRC sh
ping -c 1 8.8.4.4 # dns.google
#+END_SRC
The administrator tests Dick's access to ~core~, ~front~ and
-Nextcloud, and attempts to re-connect the ~small~ VPN. All of these
-should fail.
+Nextcloud, and attempts to access the campus VPN. All of these should
+fail.
* Future Work
Monkey's ~cron~ jobs on Core should be ~systemd.timer~ and ~.service~
units.
-The institute's private domain names (e.g. ~www.small.private~) are
-not resolvable on Front. Reverse domains (~86.177.10.in-addr.arpa~)
-mapping institute network addresses back to names in the private
-domain ~small.private~ work only on the campus Ethernet. These nits
-might be picked when OpenVPN supports the DHCP option
-~rdnss-selection~ (RFC6731), or with hard-coded ~resolvectl~ commands.
-
-The ~./inst old dick~ command does not break VPN connections to Dick's
-clients. New connections cannot be created, but old connections can
-continue to work for some time.
-
-The ~./inst client android dick-phone dick~ command generates =.ovpn=
-files that require the member to remember to check the "Use this
-connection only for resources on its network" box in the IPv4 (and
-IPv6) tab(s) of the Add VPN dialog. The command should include an
-OpenVPN setting that the NetworkManager file importer recognizes as
-the desired setting.
-
-The VPN service is overly complex. The OpenVPN 2.4.7 clients allow
-multiple server addresses, but the ~openvpn(8)~ manual page suggests
-per connection parameters are restricted to a set that does /not/
-include the essential ~verify-x509-name~. Use the same name on
-separate certificates for Gate and Front? Use the same certificate
-and key on Gate and Front?
+The institute's reverse domains (e.g. ~86.177.10.in-addr.arpa~) are
+not yet available on Front.
** More Tests
mysystem "ansible-playbook playbooks/check-inst-vars.yml >/dev/null";
-our ($domain_name, $domain_priv, $front_addr, $gate_wild_addr);
+our ($domain_name, $domain_priv, $private_net_cidr,
+ $front_addr, $front_wg_pubkey,
+ $public_wg_net_cidr, $public_wg_port,
+ $gate_wild_addr, $gate_wg_pubkey,
+ $campus_wg_net_cidr, $campus_wg_port,
+ $core_addr, $core_wg_pubkey);
do "./private/vars.pl";
if (defined $ARGV[0] && $ARGV[0] eq "CA") {
my $dom = $domain_name;
my $pvt = $domain_priv;
mysystem "cd Secret/CA; ./easyrsa build-server-full $dom nopass";
- mysystem "cd Secret/CA; ./easyrsa build-server-full gate.$pvt nopass";
mysystem "cd Secret/CA; ./easyrsa build-server-full core.$pvt nopass";
- mysystem "cd Secret/CA; ./easyrsa build-client-full core nopass";
umask 077;
- mysystem "openvpn --genkey secret Secret/front-shared.key";
- mysystem "openvpn --genkey secret Secret/gate-shared.key";
- mysystem "openssl dhparam -out Secret/front-dh2048.pem 2048";
- mysystem "openssl dhparam -out Secret/gate-dh2048.pem 2048";
mysystem "mkdir --mode=700 Secret/root.gnupg";
mysystem ("gpg --homedir Secret/root.gnupg",
if (keys %{$yaml->{"members"}}) {
print $O "members:\n";
for my $user (sort keys %{$yaml->{"members"}}) {
- print_member ($O, $yaml->{"members"}->{$user});
+ print_member ($O, $user, $yaml->{"members"}->{$user});
}
print $O "usernames:\n";
for my $user (sort keys %{$yaml->{"members"}}) {
print $O "members:\n";
print $O "usernames: []\n";
}
- if (@{$yaml->{"revoked"}}) {
- print $O "revoked:\n";
- for my $name (@{$yaml->{"revoked"}}) {
+ if (@{$yaml->{"clients"}}) {
+ print $O "clients:\n";
+ for my $name (@{$yaml->{"clients"}}) {
print $O "- $name\n";
}
} else {
- print $O "revoked: []\n";
+ print $O "clients: []\n";
}
close $O or die "Could not close $pathname: $!\n";
}
-sub print_member ($$) {
- my ($out, $member) = @_;
- print $out " ", $member->{"username"}, ":\n";
- print $out " username: ", $member->{"username"}, "\n";
+sub print_member ($$$) {
+ my ($out, $username, $member) = @_;
+ print $out " ", $username, ":\n";
print $out " status: ", $member->{"status"}, "\n";
if (@{$member->{"clients"} || []}) {
print $out " clients:\n";
print $out " $line\n";
}
}
- my @standard_keys = ( "username", "status", "clients",
+ my @standard_keys = ( "status", "clients",
"password_front", "password_core",
"password_fetchmail" );
my @other_keys = (sort
mysystem ("ansible-playbook -e \@Secret/become.yml",
" playbooks/nextcloud-new.yml",
" -e user=$user", " -e pass=\"$epass\"");
- $members->{$user} = { "username" => $user,
- "status" => "current",
+ $members->{$user} = { "status" => "current",
"password_front" => $front,
"password_core" => $core,
"password_fetchmail" => $vault };
- write_members_yaml
- { "members" => $members,
- "revoked" => $yaml->{"revoked"} };
+ write_members_yaml $yaml;
mysystem ("ansible-playbook -e \@Secret/become.yml",
" -t accounts -l core,front playbooks/site.yml");
exit;
mysystem ("ansible-playbook -e \@Secret/become.yml",
"playbooks/nextcloud-old.yml -e user=$user");
$member->{"status"} = "former";
- write_members_yaml { "members" => $members,
- "revoked" => [ sort @{$member->{"clients"}},
- @{$yaml->{"revoked"}} ] };
+ write_members_yaml $yaml;
mysystem ("ansible-playbook -e \@Secret/become.yml",
"-t accounts playbooks/site.yml");
exit;
}
-sub write_template ($$$$$$$$$);
-sub read_file ($);
-sub add_client ($$$);
+sub write_wg_server ($$$$$);
+sub write_wg_client ($$$$$$);
+sub hostnum_to_ipaddr ($$);
+sub hostnum_to_ipaddr_cidr ($$);
if (defined $ARGV[0] && $ARGV[0] eq "client") {
- die "Secret/CA/easyrsa: not found\n" if ! -x "Secret/CA/easyrsa";
my $type = $ARGV[1]||"";
my $name = $ARGV[2]||"";
my $user = $ARGV[3]||"";
- if ($type eq "campus") {
- die "usage: $0 client campus NAME\n" if @ARGV != 3;
+ my $pubkey = $ARGV[4]||"";
+ if ($type eq "android" || $type eq "debian") {
+ die "usage: $0 client $type NAME USER PUBKEY\n" if @ARGV != 5;
die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
- } elsif ($type eq "android" || $type eq "debian") {
- die "usage: $0 client $type NAME USER\n" if @ARGV != 4;
+ } elsif ($type eq "campus") {
+ die "usage: $0 client campus NAME PUBKEY\n" if @ARGV != 4;
die "$name: invalid host name\n" if $name !~ /^[a-z][-a-z0-9]+$/;
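+    # A campus client has no USER argument; the third argument is the key.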
+ $pubkey = $user;
+ $user = "";
} else {
-    die "usage: $0 client [debian|android|campus]\n";
+    die "usage: $0 client [debian|android] NAME USER PUBKEY\n",
+        "       $0 client campus NAME PUBKEY\n";
}
my $yaml;
- my $member;
- if ($type ne "campus") {
- $yaml = read_members_yaml;
- my $members = $yaml->{"members"};
- if (@ARGV == 4) {
- $member = $members->{$user};
- die "$user: does not exist\n" if ! defined $member;
- }
- if (defined $member) {
- my ($owner) = grep { grep { $_ eq $name } @{$_->{"clients"}} }
- values %{$members};
- die "$name: owned by $owner->{username}\n"
- if defined $owner && $owner->{username} ne $member->{username};
- }
- }
+ $yaml = read_members_yaml;
+ my $members = $yaml->{"members"};
+ my $member = $members->{$user};
+ die "$user: does not exist\n"
+ if !defined $member && $type ne "campus";
- die "Secret/CA: no certificate authority found"
- if ! -d "Secret/CA/pki/issued";
+ my @campus_peers # [ name, hostnum, type, pubkey, user|"" ]
+ = map { [ (split / /), "" ] } @{$yaml->{"clients"}};
- if (! -f "Secret/CA/pki/issued/$name.crt") {
- mysystem "cd Secret/CA; ./easyrsa build-client-full $name nopass";
- } else {
- print "Using existing key/cert...\n";
+ my @member_peers = ();
+ for my $u (sort keys %$members) {
+ push @member_peers,
+ map { [ (split / /), $u ] } @{$members->{$u}->{"clients"}};
}
- if ($type ne "campus") {
- my $clients = $member->{"clients"};
- if (! grep { $_ eq $name } @$clients) {
- $member->{"clients"} = [ $name, @$clients ];
- write_members_yaml $yaml;
- }
+ my @all_peers = sort { $a->[1] <=> $b->[1] }
+ (@campus_peers, @member_peers);
+
+ for my $p (@all_peers) {
+ my ($n, $h, $t, $k, $u) = @$p;
+ die "$n: name already in use by $u\n"
+ if $name eq $n && $u ne "";
+ die "$n: name already in use on campus\n"
+ if $name eq $n && $u eq "";
}
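+  # Hostnums start at 3; host 1 is each subnet's server and host 2
+  # is Core on the public subnet.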
+ my $hostnum = (@all_peers
+ ? 1 + $all_peers[$#all_peers][1]
+ : 3);
+
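+  # A client record is a string: "name hostnum type pubkey".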
+ push @{$type eq "campus"
+ ? $yaml->{"clients"}
+ : $member->{"clients"}},
+ "$name $hostnum $type $pubkey";
+
umask 077;
- my $DEV = $type eq "android" ? "tun" : "ovpn";
- my $CA = read_file "Secret/CA/pki/ca.crt";
- my $CRT = read_file "Secret/CA/pki/issued/$name.crt";
- my $KEY = read_file "Secret/CA/pki/private/$name.key";
- my $UP = $type eq "android" ? "" : "
-script-security 2
-up /etc/openvpn/update-systemd-resolved
-up-restart";
-
- if ($type ne "campus") {
- my $TC = read_file "Secret/front-shared.key";
- write_template ($DEV,$UP,$CA,$CRT,$KEY,$TC, $front_addr,
- $domain_name, "public.ovpn");
- print "Wrote public VPN configuration to public.ovpn.\n";
+ write_members_yaml $yaml;
+
+ if ($type eq "campus") {
+ push @all_peers, [ $name, $hostnum, $type, $pubkey, "" ];
+ } else {
+ push @member_peers, [ $name, $hostnum, $type, $pubkey, $user ];
+ push @all_peers, [ $name, $hostnum, $type, $pubkey, $user ];
}
- my $TC = read_file "Secret/gate-shared.key";
- write_template ($DEV,$UP,$CA,$CRT,$KEY,$TC, $gate_wild_addr,
- "gate.$domain_priv", "campus.ovpn");
- print "Wrote campus VPN configuration to campus.ovpn.\n";
- exit;
+ my $core_wg_addr = hostnum_to_ipaddr (2, $public_wg_net_cidr);
+ my $extra_front_config = "
+PostUp = resolvectl dns %i $core_addr
+PostUp = resolvectl domain %i $domain_priv
+
+# Core
+[Peer]
+PublicKey = $core_wg_pubkey
+AllowedIPs = $core_wg_addr
+AllowedIPs = $private_net_cidr
+AllowedIPs = $campus_wg_net_cidr\n";
+
+ write_wg_server ("private/front-wg0.conf", \@member_peers,
+ hostnum_to_ipaddr_cidr (1, $public_wg_net_cidr),
+ $public_wg_port, $extra_front_config)
+ if $type ne "campus";
+ write_wg_server ("private/gate-wg0.conf", \@all_peers,
+ hostnum_to_ipaddr_cidr (1, $campus_wg_net_cidr),
+ $campus_wg_port, "\n");
+
+ write_wg_client ("public.conf",
+ hostnum_to_ipaddr ($hostnum, $public_wg_net_cidr),
+ $type,
+ $front_wg_pubkey,
+ "$front_addr:$public_wg_port",
+ hostnum_to_ipaddr (1, $public_wg_net_cidr))
+ if $type ne "campus";
+ write_wg_client ("campus.conf",
+ hostnum_to_ipaddr ($hostnum, $campus_wg_net_cidr),
+ $type,
+ $gate_wg_pubkey,
+ "$gate_wild_addr:$campus_wg_port",
+                   hostnum_to_ipaddr (1, $campus_wg_net_cidr));
+  exit;
}
-sub write_template ($$$$$$$$$) {
- my ($DEV,$UP,$CA,$CRT,$KEY,$TC,$ADDR,$NAME,$FILE) = @_;
+sub write_wg_server ($$$$$) {
+ my ($file, $peers, $addr_cidr, $port, $extra) = @_;
my $O = new IO::File;
- open ($O, ">$FILE.tmp") or die "Could not open $FILE.tmp: $!\n";
- print $O "client
-dev-type tun
-dev $DEV
-remote $ADDR
-nobind
-user nobody
-group nogroup
-persist-key
-persist-tun
-remote-cert-tls server
-verify-x509-name $NAME name
-cipher AES-256-GCM
-auth SHA256$UP
-verb 3
-key-direction 1
-<ca>\n$CA</ca>
-<cert>\n$CRT</cert>
-<key>\n$KEY</key>
-<tls-crypt>\n$TC</tls-crypt>\n";
- close $O or die "Could not close $FILE.tmp: $!\n";
- rename ("$FILE.tmp", $FILE)
- or die "Could not rename $FILE.tmp: $!\n";
+ open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
+ print $O "[Interface]
+Address = $addr_cidr
+ListenPort = $port
+PostUp = wg set %i private-key /etc/wireguard/private-key$extra";
+ for my $p (@$peers) {
+ my ($n, $h, $t, $k, $u) = @$p;
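+    # Skip peers whose public key has been disabled with a leading "-".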
+ next if $k =~ /^-/;
+ my $ip = hostnum_to_ipaddr ($h, $addr_cidr);
+ print $O "
+# $n
+[Peer]
+PublicKey = $k
+AllowedIPs = $ip\n";
+ }
+ close $O or die "Could not close $file.tmp: $!\n";
+ rename ("$file.tmp", $file)
+ or die "Could not rename $file.tmp: $!\n";
}
-sub read_file ($) {
- my ($path) = @_;
- my $I = new IO::File;
- open ($I, "<$path") or die "$path: could not read: $!\n";
- local $/;
- my $c = <$I>;
- close $I or die "$path: could not close: $!\n";
- return $c;
+sub write_wg_client ($$$$$$) {
+ my ($file, $addr, $type, $pubkey, $endpt, $server_addr) = @_;
+ my $O = new IO::File;
+ my $DNS = ($type eq "android"
+ ? "
+DNS = $core_addr, $domain_priv"
+ : "
+PostUp = resolvectl dns %i $core_addr
+PostUp = resolvectl domain %i $domain_priv");
+ open ($O, ">$file.tmp") or die "Could not open $file.tmp: $!\n";
+ print $O "[Interface]
+Address = $addr
+PostUp = wg set %i private-key /etc/wireguard/private-key$DNS
+
+[Peer]
+PublicKey = $pubkey
+Endpoint = $endpt
+AllowedIPs = $server_addr
+AllowedIPs = $private_net_cidr
+AllowedIPs = $public_wg_net_cidr
+AllowedIPs = $campus_wg_net_cidr\n";
+ close $O or die "Could not close $file.tmp: $!\n";
+ rename ("$file.tmp", $file)
+ or die "Could not rename $file.tmp: $!\n";
+}
+
+sub hostnum_to_ipaddr ($$)
+{
+ my ($hostnum, $net_cidr) = @_;
+
+ # Assume 24bit subnet, 8bit hostnum.
+ # Find a Perl library for more generality?
+ die "$hostnum: hostnum too large\n" if $hostnum > 255;
+ my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
+  die "$net_cidr: not a /24 network\n" if !$prefix;
+ return "$prefix.$hostnum";
+}
+
+sub hostnum_to_ipaddr_cidr ($$)
+{
+ my ($hostnum, $net_cidr) = @_;
+
+ # Assume 24bit subnet, 8bit hostnum.
+ # Find a Perl library for more generality?
+ die "$hostnum: hostnum too large\n" if $hostnum > 255;
+ my ($prefix) = $net_cidr =~ m"^(\d+\.\d+\.\d+)\.\d+/24$";
+  die "$net_cidr: not a /24 network\n" if !$prefix;
+ return "$prefix.$hostnum/24";
}
die "usage: $0 [CA|config|new|pass|old|client] ...\n";