From: Matt Birkholz
@@ -144,8 +144,8 @@ with Apache2, spooling email with Postfix and serving it with
Dovecot-IMAPd, and hosting a VPN with OpenVPN.
-
-3.1. Install Emacs
+
+3.1. Install Emacs
The monks of the abbey are masters of the staff (bo) and Emacs.
@@ -711,7 +711,7 @@ certificate is a terminal session affair (with prompts and lines
entered as shown below).
-
+
$ sudo apt install python3-certbot-apache
$ sudo certbot --apache -d birchwood-abbey.net
...
@@ -930,8 +930,8 @@ with Postfix and Dovecot, and providing essential localnet services:
NTP, DNS and DHCP.
-
-4.1. Include Abbey Variables
+
+4.1. Include Abbey Variables
In this abbey specific document, most abbey particulars are not
@@ -954,12 +954,10 @@ directory, playbooks/.
4.2. Install Additional Packages
-The scripts that maintain the abbey's web site and run the Weather
-project use a number of additional software packages. The
-/WWW/live/Private/make-top-index script uses HTML::TreeBuilder in
-the libhtml-tree-perl package. The house task list uses JQuery.
-Weather scripts use mit-scheme and gnuplot (in pseudonymous
-packages).
+The scripts that maintain the abbey's web site use a number of
+additional software packages. The /WWW/live/Private/make-top-index
+script uses HTML::TreeBuilder in the libhtml-tree-perl package.
+The house task list uses JQuery.
@@ -1129,11 +1127,13 @@ The abbey uses the Apt-Cacher:TNG package cache on Core. The
-
-4.8. Use Cloister Apt Cache
+
+4.8. Use Cloister Apt Cache
-Core itself will benefit from using the package cache.
+Core itself will benefit from using the package cache, but should
+contact https repositories directly. (There are few such cretins,
+so caching their packages is not a priority.)
@@ -1144,6 +1144,8 @@ Core itself will benefit from using the package cache.
content: >
Acquire::http::Proxy
"http://apt-cacher.birchwood.private.:3142";
+
+ Acquire::https::Proxy "DIRECT";
dest: /etc/apt/apt.conf.d/01proxy
mode: u=rw,g=r,o=r
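+
+The effect can be spot-checked on a configured host with apt-config.
+This is only a verification sketch, not part of the role; the output
+should include lines like these.
+
+$ apt-config dump | grep -i proxy
+Acquire::http::Proxy "http://apt-cacher.birchwood.private.:3142";
+Acquire::https::Proxy "DIRECT";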
@@ -1535,10 +1537,9 @@ trends in resource usage.
roles_t/abbey-core/tasks/main.yml
- name: Install Munin.
become: yes
- apt:
- pkg: munin
+ apt: pkg=munin
-- name: Add {{ ansible_user }} to Munin group.
+- name: Add {{ ansible_user }} to munin group.
become: yes
user:
name: "{{ ansible_user }}"
@@ -1706,24 +1707,80 @@ Monkey's photo processing scripts use netpbm commands like
-
-4.17. Configure Weather Updates
+
+4.17. Install Samba
-Monkey on Core runs /WWW/campus/Weather/Private/cronjob every 5
-minutes and cronjob-midnight at midnight.
+The abbey core provides NAS (Network Attached Storage) service to the
+cloister network. It also provides writable shares for a Home
+Assistant appliance (Raspberry Pi).
+
+- Install samba.
+- Create system user hass.
+- Create /home/hass/{media,backup,share}/ with appropriate
+  permissions.
+
+
-roles_t/abbey-core/tasks/main.yml
-- name: Create Monkey's weather job.
+roles_t/abbey-core/tasks/main.yml
+- name: Install Samba.
+ become: yes
+ apt: pkg=samba
+
+- name: Add system user hass.
+ become: yes
+ user:
+ name: hass
+ system: yes
+
+- name: Add {{ ansible_user }} to hass group.
+ become: yes
+ user:
+ name: "{{ ansible_user }}"
+ append: yes
+ groups: hass
+
+- name: Configure shares.
become: yes
- cron:
- name: weather
- hour: "*"
- minute: "*/5"
- job: "[ -d /WWW/house ] && /WWW/house/Weather/Private/cronjob"
- user: monkey
+ blockinfile:
+ block: |
+ [Shared]
+ path = /Shared
+ guest ok = yes
+ read only = yes
+
+ [HASS-backup]
+ comment = Home Assistant backup
+ path = /home/hass/backup
+ valid users = hass
+ read only = no
+
+ [HASS-media]
+ comment = Home Assistant media
+ path = /home/hass/media
+ valid users = hass
+ read only = yes
+
+ [HASS-share]
+ comment = Home Assistant share
+ path = /home/hass/share
+ valid users = hass
+ read only = no
+ dest: /etc/samba/smb.conf
+ marker: "# {mark} ABBEY MANAGED BLOCK"
+ notify: New shares.
+
+
+
+
+roles_t/abbey-core/handlers/main.yml
+- name: New shares.
+ become: yes
+ systemd:
+ service: smbd
+ state: reloaded
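+
+Once the handler has reloaded smbd, the shares can be listed from
+another host with smbclient (a verification sketch; the server name
+core.birchwood.private is assumed):
+
+$ smbclient -N -L core.birchwood.private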
@@ -1881,8 +1938,8 @@ hosts never roam, are not associated with a member, and so are
./abbey client campus new-host-name
-
-6.1. Use Cloister Apt Cache
+
+6.1. Use Cloister Apt Cache
The Apt-Cacher:TNG program does not work well on the frontier, so is
@@ -1895,6 +1952,11 @@ Depending on the quality of the Internet connection, this may take a
while.
+
+Again, https repositories are contacted directly; their packages are
+cached only on the local host.
+
+
roles_t/abbey-cloister/tasks/main.yml
---
- name: Use the local Apt package cache.
@@ -1903,6 +1965,8 @@ while.
content: >
Acquire::http::Proxy
"http://apt-cacher.birchwood.private.:3142";
+
+ Acquire::https::Proxy "DIRECT";
dest: /etc/apt/apt.conf.d/01proxy
mode: u=rw,g=r,o=r
@@ -1964,10 +2028,9 @@ Each cloistered host is a Munin node.
roles_t/abbey-cloister/tasks/main.yml
- name: Install Munin Node.
become: yes
- apt:
- pkg: munin-node
+ apt: pkg=munin-node
-- name: Add {{ ansible_user }} to Munin group.
+- name: Add {{ ansible_user }} to munin group.
become: yes
user:
name: "{{ ansible_user }}"
@@ -1998,8 +2061,8 @@ them.
-
-6.4. Install Emacs
+
+6.4. Install Emacs
The monks of the abbey are masters of the staff and Emacs.
@@ -2019,852 +2082,342 @@ The monks of the abbey are masters of the staff and Emacs.
7. The Abbey Weather Role
-Birchwood Abbey's weather hosts use the 1-Wire server (from the
-owserver package) and a 1-Wire USB adapter. They use an
-unprivileged account (monkey) to run a SystemD service named
-weatherd (aka "the daemon"). The daemon is a Perl script that runs
-owread and logs the new measurements once per minute.
-
-
-
-The log files are collected by Monkey on Core (via rsync), then
-processed and published in campus web pages by The Weather Project's
-code (old, using gnuplot(1), and so… unpublished).
-
-
-
-7.1. The Abbey Weather Hardware
-
-
-The abbey currently has one weather host, Gate, and a couple 1-Wire
-sensor modules. The modules measure inside and outside temperature
-and humidity. Their desired locations are 7-8m from the core servers
-so they are plugged into a custom Y cable, with the inside sensor
-cable spliced into the middle of the outside/main cable. The proximal
-end's RJ11 plugs into a 1-Wire USB adapter (a DS9490R) plugged into
-Gate. The outside end goes out the window with the Starlink cable.
-
-
-
-
-7.2. The Abbey Weather Host Setup
-
-
-The Ansible code in the abbey-weather role assumes it is working
-with a cloistered host (as described in Cloistering) and proceeds in
-two phases. The first installs the ow-server package and configures
-it to use a DS9490 (USB adapter) rather than a debugging fake. After
-the first ./abbey config new, the new weather host seems to need a
-reboot before the 1-Wire bus becomes visible via owdir.
+Birchwood Abbey now uses Home Assistant to record and display weather
+data from an Ecowitt GW2001 IoT gateway connecting wirelessly to a
+WS90 (7-function weather station) and a couple WN31s (temp/humidity
+sensors).
-After a reboot owdir should list one or more type 26 device IDs.
-Listing them (e.g. running owdir /26.nnnnnnnn or owdir
-/26.nnnnnnnn/HIH) should reveal "files" named temperature and
-HIH/humidity. These pseudo-file paths are used in the daemon script
-below. A test session is shown below.
+The configuration of the GW2001 IoT hub involved turning off the Wi-Fi
+access point, and disabling unused channels. The hub reports the data
+from all sensors in range, anyone's sensors. These new data sources
+are noticed and recorded by Home Assistant automatically as similarly
+equipped campers come and go. Disabling unused channels helps avoid
+these distractions.
-
-monkey@new$ owdir
-...
- /26.2153B6000000/
-...
-monkey@new$ owdir /26.2153B6000000
-...
- /26.2153B6000000/temperature
-...
-monkey@new$ owread /26.2153B6000000/temperature; echo
-26.125
-monkey@new$
-
-
-The second phase of weather host configuration waits for the host-
-specific weather daemon script to appear in the role's files/.
+The configuration of Home Assistant involved installing the Ecowitt
+"integration". This was accomplished by choosing "Settings", then
+"Devices & services", then "Add Integration", and searching for
+"Ecowitt". Once installed, the integration created dozens of weather
+entities which were organized into an "Abbey" dashboard.
-
-7.3. The Abbey Weather Daemons
-
+
+8. The Abbey DVR Role
+
-Different weather hosts, with different 1-Wire devices, need different
-daemon scripts, to call owread with different paths (containing the
-IDs of each host's devices). At the moment there is just the
-one weather host, anoat.
+The abbey uses AgentDVR to record video from PoE IP HD security
+cameras. The "download" button on iSpy's Download page
+(https://www.ispyconnect.com/download), when "Agent DVR -
+Linux/macOS/RPi" is chosen, suggests the following command lines (the
+second of which is broken across three lines).
-roles_t/abbey-weather/files/daemon-anoat
#!/usr/bin/perl -w
-# -*- CPerl -*-
-#
-# Weather/daemon
-#
-# Fetches data from the local owserver once per minute. Appends to
-# Log/{In,Out}side/YEAR/MONTH/DAY.txt.
-
-use strict;
-use IO::File;
-use Date::Format;
-
-my $ILOG;
-my $OLOG;
-my $ymd = "";
-sub mymkdir ($);
-sub reopen_logs ()
-{
- my $time = time;
- my $datime = time2str ("%Y-%m-%d %H:%M:%S", $time, "UTC");
- my ($year, $month, $day) = $datime =~ /^(\d{4})-(\d\d)-(\d\d) /;
- my $new_ymd = "$year/$month/$day";
- return if $new_ymd eq $ymd;
- close $ILOG if defined $ILOG;
- close $OLOG if defined $OLOG;
- umask 07;
- mymkdir "Inside/$year/$month";
- mymkdir "Outside/$year/$month";
- umask 027;
- my $filename = "Inside/$new_ymd.txt";
- $ILOG = new IO::File;
- open $ILOG, ">>$filename" or die "Could not open $filename: $!\n";
- $filename = "Outside/$new_ymd.txt";
- $OLOG = new IO::File;
- open $OLOG, ">>$filename" or die "Could not open $filename: $!\n";
- $ymd = $new_ymd;
-}
-
-sub logit ($$$);
-sub main () {
- die "usage: $0\n" if @ARGV != 0;
- $0 = "weatherd";
- chdir "/home/monkey/Weather/Log" or die;
- umask 027;
- my $start = time;
- {
- my $secs = 60 - $start % 60;
- $start += $secs;
- sleep ($secs);
- }
- while (1) {
- reopen_logs;
- logit $OLOG, "T", "/26.2153B6000000/temperature";
- logit $OLOG, "H", "/26.2153B6000000/HIH4000/humidity";
- logit $ILOG, "T", "/26.8859B6000000/temperature";
- logit $ILOG, "H", "/26.8859B6000000/HIH4000/humidity";
- $start += 60;
- my $now = time;
- while ($start < $now) { $start += 60; }
- my $secs = $start - $now;
- sleep ($secs);
- }
-}
-
-sub logit ($$$)
-{
- my ($log, $name, $query) = @_;
-
- my $tries = 0;
- while ($tries < 3) {
- my $time = time;
- my $datime = time2str ("%Y-%m-%d %H:%M:%S", $time, "UTC");
- $tries += 1;
- my @lines = `/usr/bin/owread $query`;
- chomp @lines;
- my $status = $?;
- my $sig = $status & 127;
- $status >>= 8;
- if ($status != 0) {
- my $L = join "\\n", @lines;
- print $log "$datime\t$name\terror: status $status: $L\n";
- $log->flush;
- } elsif (@lines != 1) {
- my $L = join "\\n", @lines;
- print $log "$datime\t$name\terror: multiple lines: $L\n";
- $log->flush;
- } elsif ($lines[0] !~ /^ *(-?\d+(\.\d+)?)$/) {
- my $L = $lines[0];
- print $log "$datime\t$name\terror: bogus line: $L\n";
- $log->flush;
- } else {
- my $datum = $1;
- print $log "$datime\t$name\t$datum\n";
- $log->flush;
- return;
- }
- }
-}
-
-sub mymkdir ($)
-{
- my ($dirpath) = @_;
-
- my @path_names = split /\//, $dirpath;
- my $path;
- if (!$path_names[0]) {
- $path = "/";
- shift @path_names;
- } else {
- $path = ".";
- }
- my @created;
- while (@path_names) {
- $path .= "/" . shift @path_names;
- if (! -d $path) {
- if (-e $path) {
- die "mkdir $dirpath: already exists; not a directory!\n";
- }
- if (! mkdir $path) {
- die "mkdir $path: $!\n";
- } else {
- chmod 02775, $path;
- push @created, $path;
- }
- }
- }
- return @created;
-}
-
-main;
+sudo apt-get install curl
+bash <(curl -s "https://raw.githubusercontent.com/\
+ispysoftware/agent-install-scripts/main/v2/\
+install.sh")
-The above Perl script uses the Date::Format module, which is
-installed by the following task.
+Ansible assists by creating the system user agentdvr and granting it
+enough sudo latitude to run the installer as instructed above.
+Though a system user, the account gets a home directory,
+/home/agentdvr/, in which to do the installation. The rest of the
+DVR role, "phase two", waits until AgentDVR is installed.
-
-roles_t/abbey-weather/tasks/main.yml
---
-- name: Install weather daemon packages.
- become: yes
- apt: pkg=libtimedate-perl
-
-
-
-
-
-7.4. Install 1-Wire Server
-
-This next task installs the 1-Wire server and shell commands. The
-abbey uses the Dallas Semiconductor DS9490R, a USB to 1-Wire adapter,
-on all its weather hosts, so it also configures the server to use the
-USB adapter (rather than a test "fake" adapter).
+AgentDVR is installed, after Ansible has set things up, by running the
+command lines prescribed by iSpy while logged in as agentdvr with
+the current default directory /home/agentdvr/. The installer should
+create the /home/agentdvr/AgentDVR/ directory. Its offer to install
+a system service is declined.
-
-roles_t/abbey-weather/tasks/main.yml
-- name: Install 1-Wire server.
- become: yes
- apt:
- pkg: [ owserver, ow-shell ]
-
-- name: Configure 1-Wire server.
- become: yes
- lineinfile:
- path: /etc/owfs.conf
- regexp: "{{ item.regexp }}"
- line: "{{ item.line }}"
- backrefs: yes
- loop:
- - { regexp: '^[# ]*server: *FAKE(.*)$', line: '#server: FAKE\1' }
- - { regexp: '^[# ]*server: *usb(.*)$', line: 'server: usb\1' }
-
-
-
-
-
-7.5. Install Rsync
-
-Monkey on Core will want to download log records (files) using
-rsync(1).
+After AgentDVR is installed, when the /home/agentdvr/AgentDVR/
+directory exists, Ansible is run again to install the system service.
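+
+That second run is the same configuration command used throughout
+this document, e.g.
+
+./abbey config HOST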
-
-
-roles_t/abbey-weather/tasks/main.yml
-- name: Install Rsync.
- become: yes
- apt: pkg=rsync
-
-
-
-
-7.6. Create Monkey
-
+
+8.1. Create User agentdvr
+
-The weather daemon is run by an unprivileged monkey account (not
-sysadm) which allows monkey on Core shell access. This is also
-executed during the initial phase of configuration, allowing the
-administrator to login on the new weather host as monkey and thus to
-test access to the 1-Wire adapter and devices. To facilitate
-debugging, the sysadm account is included in the monkey group.
+AgentDVR runs as the system user agentdvr, which is created here.
-roles_t/abbey-weather/tasks/main.yml
-- name: Create monkey.
+roles_t/abbey-dvr/tasks/main.yml
---
+- name: Create agentdvr.
become: yes
user:
- name: monkey
+ name: agentdvr
system: yes
+ home: /home/agentdvr
+ shell: /bin/bash
+ append: yes
+ groups: video
-- name: Authorize monkey@core.
- become: yes
- vars:
- pubkeyfile: ../Secret/ssh_monkey/id_rsa.pub
- authorized_key:
- user: monkey
- key: "{{ lookup('file', pubkeyfile) }}"
- manage_dir: yes
-
-- name: Add {{ ansible_user }} to monkey group.
+- name: Add {{ ansible_user }} to agentdvr group.
become: yes
user:
name: "{{ ansible_user }}"
append: yes
- groups: monkey
-
-
-
-
-
-7.7. Install Weather Daemon
-
-
-The weather daemon is kept alive as a Systemd service unit. This task
-creates and starts that service after the host-specific
-files/daemon-HOST
file becomes available.
-
-
-
-The ExecStartPre=/bin/sleep 30 is intended to avoid recent hangs in
-owread.
-
+ groups: agentdvr
-
-roles_t/abbey-weather/tasks/main.yml
-- name: Install weather directory.
+- name: Create /home/agentdvr/.
become: yes
file:
- path: /home/monkey/Weather/Log
+ path: /home/agentdvr
state: directory
- owner: monkey
- group: monkey
- mode: u=rwx,g=rx,o=rx
-
-- name: Test for weather daemon script.
- vars:
- dir: ../roles/abbey-weather/files
- file: "{{ dir }}/daemon-{{ inventory_hostname }}"
- stat: path="{{ file }}"
- delegate_to: localhost
- register: weather
-
-- name: Note missing weather daemon script.
- vars:
- dir: ../roles/abbey-weather/files
- script: "{{ dir }}/daemon-{{ inventory_hostname }}"
- debug:
- msg: "{{ script }}: not found"
- when: not weather.stat.exists
-
-- name: Install weather daemon.
- become: yes
- vars:
- dir: ../roles/abbey-weather/files
- script: "{{ dir }}/daemon-{{ inventory_hostname }}"
- copy:
- src: "{{ script }}"
- dest: /home/monkey/Weather/daemon
- owner: monkey
- group: monkey
- mode: u=rwx,g=rx,o=
- when: weather.stat.exists
-
-- name: Install weatherd service.
- become: yes
- copy:
- content: |
- [Unit]
- Description=Weather Logger
- After=owserver.service
-
- [Service]
- User=monkey
- ExecStartPre=/bin/sleep 30
- ExecStart=/home/monkey/Weather/daemon
- Restart=always
-
- [Install]
- WantedBy=multi-user.target
- dest: /etc/systemd/system/weatherd.service
- when: weather.stat.exists
- notify:
- - Reload Systemd.
- - Restart weather daemon.
-
-- name: Enable/Start weather daemon.
- become: yes
- systemd:
- service: weatherd
- enabled: yes
- state: started
- when: weather.stat.exists
+ owner: agentdvr
+ group: agentdvr
+ mode: u=rwx,g=rwxs,o=rx
-
-
-roles_t/abbey-weather/handlers/main.yml
---
-- name: Reload Systemd.
- become: yes
- command: systemctl daemon-reload
-
-- name: Restart weather daemon.
- become: yes
- systemd:
- service: weatherd
- state: restarted
-
-
-
-
-8. The Abbey DVR Role
-
-
-The abbey uses Zoneminder to record video from PoE IP HD security
-cameras. The Abbey DVR Role installs Zoneminder and configures it to
-record to /Zoneminder/, the mount point for a separate, large
-storage volume. It follows the instructions in Zoneminder's
-README.Debian (in /usr/share/doc/zoneminder/) to create the zm
-database and configure Apache.
-
-
-
-8.1. DVR Machine Setup
-
+
+8.2. Authorize User agentdvr
+
-The installation process involves some manual intervention. The first
-time a host is enrolled, Ansible will install the necessary packages,
-but it cannot create the database, nor the database user (yet, in the
-first pass). After adding the new machine to the dvrs group in
-10.2, run Ansible to get the Zoneminder software installed.
+The AgentDVR installer is also run by agentdvr, which is authorized
+to run a handful of system commands. This small set is sufficient
+if the offer to create the system service is declined. In that
+case, the installer will run the program in the terminal.
-
-./abbey config HOST
+
+roles_t/abbey-dvr/tasks/main.yml
+- name: Authorize agentdvr.
+ become: yes
+ copy:
+ content: |
+ agentdvr ALL=(ALL) NOPASSWD: /bin/systemctl,/bin/apt-get,\
+ /sbin/adduser,/sbin/usermod
+ dest: /etc/sudoers.d/agentdvr
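+
+A syntax error in a sudoers drop-in can break sudo entirely, so the
+installed file is worth a manual check (a sketch, not part of the
+role). The copy task could also be given validate: visudo -cf %s to
+refuse to install a broken file.
+
+$ sudo visudo -cf /etc/sudoers.d/agentdvr
+/etc/sudoers.d/agentdvr: parsed OK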
-
-
-
-Several configuration steps will be skipped because /Zoneminder/ has
-not been created yet. To proceed, first create the database and
-database user manually, as described in section Manually Create
-Zoneminder DB and User.
-
-
-8.2. Create /Zoneminder/
-
-
-/Zoneminder/ should be a separate, large volume lest Zoneminder fill
-the root file system. For acceptable performance, /Zoneminder/
-should also be the mount point of a solid-state disk (SSD). A
-symbolic link at /var/cache/zoneminder/events targets /Zoneminder
-to make it Zoneminder's "default" storage area. (The PurgeWhenFull
-filter only works with the default storage area in v1.34.)
-
-
-
-8.3. Continue Zoneminder Configuration
+
+8.3. Test For AgentDVR/
-Once the zm database (and zmuser database user) are created, and a
-large volume mounted at /Zoneminder/, Ansible can continue with the
-Zoneminder configuration.
-
-
-
-./abbey configure HOST
-
-
-
-
-Configuring Zoneminder's cameras is still a manual process as
-described in the final section, Configure Cameras, below.
-
-
-
-
-8.4. Include Abbey Variables
-
-
-Private variables in private/vars-abbey.yml are needed, and included
-here, as in the abbey-core role. The file path is relative to the
-playbook's directory, playbooks/.
+The following task probes for the /home/agentdvr/AgentDVR/
+directory, to detect that the build/install process has completed. It
+registers the results in the agentdvr variable. Several of the
+remaining installation steps are skipped unless
+agentdvr.stat.exists.
-roles_t/abbey-dvr/tasks/main.yml
---
-- name: Include private abbey variables.
- include_vars: ../private/vars-abbey.yml
+roles_t/abbey-dvr/tasks/main.yml
+- name: Test for AgentDVR directory.
+ stat:
+ path: /home/agentdvr/AgentDVR
+ register: agentdvr
+- debug:
+ msg: "/home/agentdvr/AgentDVR/ does not yet exist"
+ when: not agentdvr.stat.exists
-
-8.5. Install Zoneminder v1.34
-
-
-The latest version of Zoneminder (1.36) was manually downloaded, built
-and installed, but it immediately had problems, randomly producing
-short events, dropping "problem" cameras entirely, etc. Version 1.34
-did not have those problems, but could still melt down (thrash?) when
-/Zoneminder/ was a Seagate Barracuda in a USB3.1gen2 external drive
-enclosure. A Western Digital Passport Ultra seemed to work much
-better, for a short while. Ultimately a solid-state drive (a 2TB
-USB3.2 Gen2 Samsung T7 Shield) mounted at /Zoneminder/ got
-Zoneminder 1.34 to work reliably.
-
-
+
+8.4. Create AgentDVR Service
+
-After uninstalling 1.36, the Debian 11 package (1.34) was installed
-and configured per the instructions in sections "Web server set-up"
-and "Time Zone" in /usr/share/doc/zoneminder/README.Debian.gz
.
+This service definition came from the template downloaded (from here)
+by the installer, specifically the linux_setup2.sh script downloaded
+by install.sh.
roles_t/abbey-dvr/tasks/main.yml
-- name: Install Zoneminder.
+- name: Install AgentDVR.service.
become: yes
- apt: pkg=zoneminder
+ copy:
+ content: |
+ [Unit]
+ Description=AgentDVR
-- name: Enable Apache modules for Zoneminder.
- become: yes
- apache2_module:
- name: "{{ item }}"
- loop: [ cgid, rewrite, expires, headers ]
- notify: Restart Apache2.
+ [Service]
+ WorkingDirectory=/home/agentdvr/AgentDVR
+ ExecStart=/home/agentdvr/AgentDVR/Agent
-- name: Enable Zoneminder Apache configuration.
- become: yes
- command:
- cmd: a2enconf zoneminder
- creates: /etc/apache2/conf-enabled/zoneminder.conf
- notify: Restart Apache2.
+ # fix memory management issue with dotnet core
+ Environment="MALLOC_TRIM_THRESHOLD_=100000"
-- name: Configure MySQL for Zoneminder.
- become: yes
- copy:
- content: |
- [mysqld]
- sql_mode = NO_ENGINE_SUBSTITUTION
- dest: /etc/mysql/conf.d/zoneminder.cnf
- notify: Restart MySQL.
+ # to query logs using journalctl, set a logical name here
+ SyslogIdentifier=AgentDVR
-- name: Configure PHP date.timezone.
- become: yes
- lineinfile:
- regexp: date.timezone ?=
- line: date.timezone = {{ lookup('file', '/etc/timezone') }}
- path: "{{ item }}"
- loop:
- - /etc/php/8.2/cli/php.ini
- - /etc/php/8.2/apache2/php.ini
- notify: Restart Apache2.
+ User=agentdvr
+
+ # ensure the service automatically restarts
+ Restart=always
+ # amount of time to wait before restarting the service
+ RestartSec=5
-- name: Enable/Start Apache2.
+ [Install]
+ WantedBy=multi-user.target
+ dest: /etc/systemd/system/AgentDVR.service
+ when: agentdvr.stat.exists
+
+- name: Enable/Start AgentDVR.service.
become: yes
systemd:
- service: apache2
+ service: AgentDVR
enabled: yes
state: started
+ when: agentdvr.stat.exists
-
-
-roles_t/abbey-dvr/handlers/main.yml
---
-- name: Restart MySQL.
- become: yes
- systemd:
- service: mysql
- state: restarted
-
-- name: Restart Apache2.
- become: yes
- systemd:
- service: apache2
- state: restarted
-
-
+
+
+8.5. Create AgentDVR Storage
+
-The following Rsyslog configuration drop-in gets Zoneminder's natter
-out of /var/log/syslog.
+The abbey uses a separate volume to store surveillance recordings,
+lest the DVR program fill the root file system. The volume is mounted
+at /DVR/. The following tasks create /DVR/AgentDVR/video/
+(whether a large volume is mounted there or not!) with appropriate
+permissions so that the instructions for configuring a default storage
+location do not fail.
roles_t/abbey-dvr/tasks/main.yml
-- name: Use /var/log/zoneminder.log
+- name: Create /DVR/AgentDVR/.
become: yes
- copy:
- content: |
- :programname,startswith,"zm" -/var/log/zoneminder.log
- & stop
- dest: /etc/rsyslog.d/40-zoneminder.conf
-
-
-
-
-
-8.6. Create Zoneminder Database
-
-
-Zoneminder's MariaDB database is created by the following task, when
-the mysql_db Ansible module supports check_implicit_admin.
-
+ file:
+ state: directory
+ path: /DVR/AgentDVR
+ owner: agentdvr
+ group: agentdvr
+ mode: u=rwx,g=rxs,o=
-
-
-- name: Create Zoneminder DB.
+- name: Create /DVR/AgentDVR/video/.
become: yes
- mysql_db:
- check_implicit_admin: yes
- name: zm
- collation: utf8mb4_general_ci
- encoding: utf8mb4
+ file:
+ state: directory
+ path: /DVR/AgentDVR/video
+ owner: agentdvr
+ group: agentdvr
+ mode: u=rwx,g=rxs,o=
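+
+The large volume itself is mounted at /DVR/ outside of this role,
+typically with an /etc/fstab line like the following sketch (the UUID
+and filesystem type are hypothetical).
+
+UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /DVR ext4 defaults,nofail 0 2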
-
-
-Unfortunately it does not currently, yet the institute prefers the
-more secure Unix socket authentication method. Rather than create a
-privileged DB user, the zm database is created manually (below).
-
-
-8.7. Create Zoneminder DB User
-
+
+8.6. Configure IP Cameras
+
-The following task would create the DB user (mysql_user supports
-check_implicit_admin) but the zm database was not created above.
+A new security camera is set up as described in Cloistering, after
+which the camera should be accessible by name on the abbey networks.
+Assuming ping -c1 new works, the camera's web interface will be
+accessible at http://new/.
-The DB user's password is taken from the zoneminder_dbpass
-variable, kept in private/vars-abbey.yml, and generated e.g. with
-the apg -n 1 -x 12 -m 12 command.
+The administrator uses this to make the following changes.
-
-private_ex/vars-abbey.yml
zoneminder_dbpass: gakJopbikJadsEdd
-
-
-
-
-
-- name: Create Zoneminder DB user.
- become: yes
- mysql_user:
- check_implicit_admin: yes
- name: zmuser
- password: "{{ zoneminder_dbpass }}"
- priv: >-
- zm.*:
- lock tables,alter,create,index,select,insert,update,delete
-
-
+
+- Set a password on the administrative account.
+- Create an unprivileged user with a short password, e.g. user:blah.
+- Set the frame rate to 5fps. The abbey prefers HD resolution and
+  long duration logs, thus fewer frames per second.
+
-
-8.8. Manually Create Zoneminder DB and User
-
+
+8.7. Configure AgentDVR's Cameras
+
-The Zoneminder database and database user are created manually with
-the following SQL (with the zoneminder_dbpass spliced in). The SQL
-commands are entered at the SQL prompt of the sudo mysql command, or
-perhaps piped into the command.
+After Ansible has configured and started the AgentDVR service, its web
+UI will be available at http://core:8090/. The initial Live View
+will be empty, overlaid with instructions to click the edit button.
-
-create database zm
- character set utf8mb4
- collate utf8mb4_general_ci;
-grant lock tables,alter,create,index,select,insert,update,delete
- on zm.*
- to 'zmuser'@'localhost'
- identified by '{{ zoneminder_dbpass }}';
-flush privileges;
-exit;
-
-
-Finally, zm's tables are created, completing the database setup,
+The wizard will ask for each device's general configuration
+parameters. The abbey uses SV3C IP cameras with a full HD stream as
+well as a standard definition "vice stream". AgentDVR wants both.
-
-sudo mysql < /usr/share/zoneminder/db/zm_create.sql
-
-
-
-
-
-8.9. Use /Zoneminder/
-
-
-The following tasks start with a test for the existence of
-/Zoneminder
. Configuration tasks that require /Zoneminder/
or the
-zm database are executed only when zoneminder.stat.exists. The
-last "Link…" task below "forces" the link, whether the target exists
-or not (yet).
-
-
-
-roles_t/abbey-dvr/tasks/main.yml
-- name: Test for /Zoneminder/.
- stat:
- path: /Zoneminder
- register: zoneminder
-- debug:
- msg: "/Zoneminder/ does not yet exist"
- when: not zoneminder.stat.exists
-
-- name: Check /Zoneminder/.
- become: yes
- file:
- state: directory
- path: /Zoneminder
- owner: www-data
- group: www-data
- mode: u=rwx,g=rx,o=rx
- when: zoneminder.stat.exists
+
+- General:
+
+  - On: yes
+  - Name: Outside
+  - Source Type: Network Camera
+
+  - Username: user
+  - Password: blah
+  - Live URL: rtsp://new.birchwood.private:554/12
+  - Record URL: rtsp://new.birchwood.private:554/11
+
+
+
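+
+Before these URLs are entered into AgentDVR, they can be sanity
+checked from any host with ffmpeg installed (a sketch; user, password
+and host name as in the list above):
+
+$ ffprobe -rtsp_transport tcp \
+    "rtsp://user:blah@new.birchwood.private:554/11"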
-- name: Link to /Zoneminder/.
- become: yes
- file:
- state: link
- src: /Zoneminder
- path: /var/cache/zoneminder/events
- force: yes
- follow: no
-
-
-
-
-
-8.10. Configure Zoneminder
-
-The remaining tasks ensure that the /etc/zm/zm.conf file has the
-proper permissions and contains the correct password.
+Additional cameras are added via the "New Device" item in the Server
+Menu. This step is completed when all cameras are streaming to
+AgentDVR's Live View.
-
-
-roles_t/abbey-dvr/tasks/main.yml
-- name: Set /etc/zm/zm.conf permissions.
- become: yes
- file:
- path: /etc/zm/zm.conf
- owner: root
- group: www-data
- mode: u=rw,g=r,o=
-
-- name: Set Zoneminder passphrase.
- become: yes
- lineinfile:
- regexp: '^ *ZM_DB_PASS *='
- line: ZM_DB_PASS={{ zoneminder_dbpass }}
- path: /etc/zm/zm.conf
-
-
+
+
+8.8. Configure AgentDVR's Default Storage
+
-Finally, Zoneminder's service unit can be enabled (and started) if
-/Zoneminder/ exists. It is assumed that, if /Zoneminder/ exists,
-the zm database has also been created, and the service is ready to
-run.
+AgentDVR's web interface is also used to configure a default storage
+location. From the Server Menu (upper left), the administrator chooses
+Configuration Settings, the Storage tab, the Configure button, and the
+add (plus) button. The storage location is set to /DVR/AgentDVR/
+and the "default" toggle is set. Several OK buttons then need to be
+pressed before the task is complete.
-
-
-roles_t/abbey-dvr/tasks/main.yml
-- name: Enable/Start Zoneminder.
- become: yes
- systemd:
- service: zoneminder
- enabled: yes
- state: started
- when: zoneminder.stat.exists
-
-
-
-8.11. Configure Cameras
-
-
-A new security camera is setup as described in Cloistering, after
-which the camera should be accessible by name on the abbey networks.
-Assuming ping -c1 new works, the camera's web interface will be
-accessible at http://new/.
-
-
-
-The abbey's administrator logs into http://new/ and turns off any
-OSD (on-screen display). Zoneminder will add its own timestamp, for
-the best accuracy and reliability. The administrator also turns down
-the frame rate to 5fps. The abbey prefers HD resolution (e.g. 1080p)
-and long duration logs, thus fewer frames per second. The
-administrator also creates an unprivileged user with a short password
-e.g. user:gobbledygook.
-
-
+
+8.9. Configure AgentDVR's Recordings
+
-After Ansible has configured and started Zoneminder, a camera can be
-created by clicking on "Add" in the Zoneminder console. (If the
-Zoneminder host was named "security", the Zoneminder console can be
-found at http://security/zm/.) In the Add dialog, the following
-settings should be changed. (The parenthesized settings are default
-settings that should be checked but are probably already correctly
-set.)
+After a default storage location has been configured, AgentDVR's
+cameras can begin recording. The "Edit Devices" item in the Server
+Menu opens a dialog listing the configured cameras. Its edit buttons
+lead to the device settings, where the following
+parameters are set (in the Recording and Storage tabs).
-- In the "General" tab, specify:
+
- Recording:
-- Name: Front
-- (Server: None)
-- (Source type: Ffmpeg)
-- Function: Record
-- Enabled: yes
-- (Analysis FPS: <blank>)
-- (Maximum FPS: <blank>)
-- (Alarm Maximum FPS: <blank>)
+  - Mode: Constant
+  - Encoder: Raw Record Stream
+  - Max record time: 900 (15 minutes)
-- In the "Source" tab, specify:
+
- Storage:
-- Src path: rtsp://user:gobbledygook@new.small.private.:554/11
-- (Method: TCP)
-- (Target colorspace: 32 bit colour)
-- Capture Resolution: 1920x1080 1080p
-
-- In the "Timestamp" tab, specify:
+
+  - Location: DVR/AgentDVR
+  - Folder: Outside
+  - Storage Management:
-- Timestamp Label X: 10
-- Timestamp Label Y: 10
-- Font Size: Large
+    - On: yes
+    - Max Size: 0 (unlimited)
+    - Max Age: 168 (7 days)
-- In the "Buffers" tab, specify:
-
-- Image Buffer Size (frames): 40
@@ -2929,8 +2482,8 @@ machine simply by adding it to the tvrs group.
-
-9.3. Include Abbey Variables
+
+9.3. Include Abbey Variables
Private variables in private/vars-abbey.yml are needed, as in the
@@ -3476,7 +3029,7 @@ the list of "inputs" available in a postal code typically ends with
the OTA (over the air) broadcasts.
-
+
$ tv_grab_zz_sdjson --configure --config-file .mythtv/Mr.Antenna.xml
Cache file for lineups, schedules and programs.
Cache file: [/home/mythtv/.xmltv/tv_grab_zz_sdjson.cache]
@@ -3897,9 +3450,6 @@ except the roles are found in Institute/roles/ as well as roles/.
kamino:
kessel:
ord-mantell:
- weather:
- hosts:
- anoat:
dvrs:
hosts:
dantooine:
@@ -3953,10 +3503,6 @@ institutional roles, then the liturgical roles.
hosts: campus
roles: [ campus, abbey-cloister ]
-- name: Configure Weather
- hosts: weather
- roles: [ abbey-weather ]
-
- name: Configure DVRs
hosts: dvrs
roles: [ abbey-dvr ]
@@ -4184,10 +3730,10 @@ operating system version of all abbey managed machines.
The abbey changes location almost weekly, so its timezone changes
occasionally. Droplet does not move. Gate and other simple servers
-(the weather monitors) are kept in UTC. Core, the DVRs, TVRs, and the
-desktops all want updating to the current local timezone. The
-desktops are managed maually, but the rest can all be updated using
-Ansible.
+are kept in UTC. Core, the DVRs, TVRs, Home Assistant and the
+desktops all want updating to the current local timezone. Home
+Assistant and the desktops are managed manually, but the rest can all
+be updated using Ansible.
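+
+The restart play below keys off a new_tz variable registered by the
+play (outside this hunk) that actually sets the timezone. A minimal
+sketch of such a play, assuming the timezone module (community.general
+in newer Ansible releases), a tz variable naming the new zone, and a
+hypothetical host pattern:
+
+- hosts: core:dvrs:tvrs
+  tasks:
+    - name: Set timezone.
+      become: yes
+      timezone:
+        name: "{{ tz }}"
+      register: new_tz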
@@ -4228,12 +3774,11 @@ last host in the previous play.
- hosts: dvrs
tasks:
- - name: Restart Zoneminder.
+ - name: Restart AgentDVR.
become: yes
systemd:
- service: "{{ item }}"
+ service: AgentDVR
state: restarted
- loop: [ mysql, zoneminder ]
when: new_tz.changed
- hosts: tvrs
@@ -4986,7 +4531,7 @@ to private/db.campus_vpn.)