34 Commits

Author SHA1 Message Date
Nick Stokoe
5d39245ab9 templates/docker-compose/mopidy - commit extant changes 2022-07-01 07:08:12 +01:00
Nick Stokoe
74d8767b86 snackpot.yml - comment out mrlesmithjr.netplan role
seems not to be present?
2021-07-21 21:15:14 +01:00
Nick Stokoe
52d82a2b52 templates/docker-compose/docker-compose.yml - bump nextcloud to v22 2021-07-21 20:51:12 +01:00
Nick Stokoe
b28998d3d1 docker-compose/docker-compose.yml - upgrade nextcloud and others 2021-03-30 15:19:46 +01:00
Nick Stokoe
22f8ccd7fe docker-compose/docker-compose.yml - add extra_hosts snackpot:host-gateway for mopidy 2021-03-30 12:01:58 +01:00
Nick Stokoe
31434f5cf3 docker-compose/upmpdcli/upmpdcli.conf - don't check the content format
As this will disallow things that should be allowed.
2021-03-01 11:16:09 +00:00
Nick Stokoe
89ba2714a9 docker-compose/upmpdcli/Dockerfile - explicitly specify the config file
In  the command parameters for upmpdcli - otherwise it seems not to be
picked up.
2021-03-01 11:15:15 +00:00
Nick Stokoe
00ca00bf05 fixup add jellyfin 2021-02-19 15:03:46 +00:00
Nick Stokoe
206dc5341b docker-compose/upmpdcli/Dockerfile - use python3 not 2 2021-02-19 11:48:25 +00:00
Nick Stokoe
b006b9c9d5 docker-compose.yml - set mopidy to restart: always 2021-02-19 11:22:45 +00:00
Nick Stokoe
d54be6f1c5 docker-compose.yml - set upmpdcli to restart: always 2021-02-19 11:22:07 +00:00
Nick Stokoe
4c9399fad9 docker-compose/upmpdcli/Dockerfile - add openssl to the package list
upmpdcli seems to use this, optionally
2021-02-19 11:11:27 +00:00
Nick Stokoe
bcb424515f docker-compose.yml - add local audio access to mopidy 2021-02-19 10:58:59 +00:00
Nick Stokoe
65d69c7f58 snackpot.yml, docker-compose.yml - add mopidy and upmpdcli services
mopidy includes icecast
2021-02-14 17:59:30 +00:00
Nick Stokoe
5625a1d51a docker-compose.yml - add MINIDLNA_INOTIFY=yes to minidlna
We want it to spot file changes
2021-02-14 17:52:29 +00:00
Nick Stokoe
874d800b58 snackpot.yml - fixup set nextcloud_src etc. 2021-02-14 17:52:03 +00:00
Nick Stokoe
879da752b6 snackpot.uml - fixup set all ports with firewall_ports 2021-02-14 17:50:40 +00:00
Nick Stokoe
31f0d064ac snackpot.yml etc. - refine docker-compose config deploy
The main job of this commit:
- Be explicit about templates: expect the .j2 extension
- Copy all other files, so that they can be binary
- Don't deploy dotfiles or dotdirectories.

This snuck in:
- Remove `test` tag
- Refine some descriptions
2021-02-14 17:46:45 +00:00
Nick Stokoe
704fbedbae templates/docker-compose/docker-compose.yml - remove some comment cruft 2021-02-12 16:37:00 +00:00
Nick Stokoe
f5026efcc7 snackpot.yml - fixup, get all the ports from firewall_ports 2021-02-12 16:30:21 +00:00
Nick Stokoe
da7a33310b snackpot.yml - add minidlna containiner 2021-02-07 16:02:56 +00:00
Nick Stokoe
be41a87087 roles/ufw/tasks/main.yml - allow more flexible port config
specifically, allow specifying protocol
2021-02-07 16:02:15 +00:00
Nick Stokoe
358734d403 roles/docker_compose/tasks/main.yml - set docker data-root dir 2021-02-07 13:09:03 +00:00
Nick Stokoe
01ddd57da3 docker-compose/docker-compose.yml - add nextcloud_cron
For running the cron job
2021-02-07 13:09:03 +00:00
Nick Stokoe
8fa9d4ceff templates/docker-compose/docker-compose.yml - bump nextcloud to v18.0.13 2021-02-07 13:09:03 +00:00
Nick Stokoe
88a633bb94 snackpot.yml - tag role invocations with docker-config
Else tagging doesn't work correctly
2021-02-07 13:09:03 +00:00
Nick Stokoe
a10e2a6663 templates/bin/ncadmin - remove crufty comments 2021-02-07 13:09:03 +00:00
Nick Stokoe
96727cf17b templates/docker-compose/docker-compose.yml - share /srv with nextcloud
For ease of imports
2021-02-07 13:09:03 +00:00
Nick Stokoe
1b6c2aa19a snackpot.yml - set up networking on server 2021-02-07 13:09:03 +00:00
Nick Stokoe
d2bcfec810 snackpot.yml - add docker compose config
Nominally working and tested on a remote VM
2021-02-07 13:08:33 +00:00
Nick Stokoe
8637cb2af4 snackpot.yml - adapt from server.playbook.yml 2021-02-07 13:08:33 +00:00
Nick Stokoe
40755cdd97 roles/docker_compose/handlers/main.yml - add 'listen' clause
So we can notify from outside the role
2021-02-07 13:08:33 +00:00
Nick Stokoe
9a79fe8078 roles/docker_compose_install/ -> roles/docker_compose 2021-02-07 13:08:33 +00:00
Nick Stokoe
e23ba65b8f docker_compose_install - fixup from docker-install
python 3 etc.
2021-02-07 12:47:53 +00:00
35 changed files with 284 additions and 1480 deletions

.gitmodules (new file)

@@ -0,0 +1,3 @@
[submodule "templates/docker-compose/mopidy"]
path = templates/docker-compose/mopidy
url = git@github.com:Log1x/docker-mopidy-iris.git


@@ -1,16 +0,0 @@
run ansible script to deploy basic docker compose + config
run restore script to deploy config, database and files
run backup script to create a copy of config, database and files
# todo
# fix playbook to
# set up redis password?
# schedule backup
# script restore


@@ -1,45 +0,0 @@
## To set up
These subdirectories need to be cloned, as they are not part of the repo.
The first is Ansible. The exact version is not always important, but
it is wise to keep roughly the same version because Ansible has
changed a lot. I like to be able to use a version which works with my
playbooks... some of my older playbooks contained a lot of
workarounds for old versions of ansible. Newer playbooks use newer
versions of Ansible. At the time of writing, I'm using v2.9.26.
git clone git@github.com:ansible/ansible.git .ansible-src
(cd .ansible-src && git co v2.9.26)
This subdirectory contains the passwords and other secrets this repo
needs access to. It is a Password Store GPG2 encrypted repository,
accessible with the `pass` command. Ansible has a plugin which can
use that.
git clone gitolite:password-store .password-store
You should also make sure that the hosts in the inventory are
accessible - sometimes this requires adding `~/.ssh/config` settings
like this example:
Host mixian mixian.noodlefactory.co.uk
Hostname 142.132.227.118
User root
## Before deploying
This script initialises the environment so that `pass` and
`ansible-playbook` will work as if they were installed in the standard
places (although they are not).
./env-setup
## Dependencies
Ansible role and collection dependencies that need to be installed:
ansible-galaxy install -r requirements.yml
ansible-galaxy collection install -r requirements.yml


@@ -1,19 +0,0 @@
DRAFT!
upgrade one major version at a time
check that the version of postgresql is adequate for the target version before upgrading
if it isn't, upgrade it:
dump the data
move the volume aside
recreate the volume
upgrade
start
re-import
delete the old volume
copy over the pg_hba.conf, otherwise the auth credentials won't be used correctly (need:
host all all all md5)


@@ -1,9 +0,0 @@
---
roles:
# From Galaxy
- name: mrlesmithjr.netplan
version: v0.3.0
collections:
- name: community.general


@@ -47,9 +47,6 @@
"log-driver": "json-file",
"log-opts": {
"max-size": "30m"
},
"features": {
"buildkit": true
}
}
notify:


@@ -6,18 +6,13 @@
postgres_password: "{{lookup('passwordstore', 'servers/snackpot/postgres_db.password')}}"
postgres_db_user: postgres
nextcloud_hostname: nc.noodlefactory.co.uk
nextcloud_base_dir: /var/www/html
nextcloud_data_dir: /var/www/data
nextcloud_ext_dir: /var/www/ext
nextcloud_db_user: nextcloud
nextcloud_db: nextcloud
jellyfin_hostname: jf.noodlefactory.co.uk
navidrome_hostname: nd.noodlefactory.co.uk
letsencrypt_email: webmaster@noodlefactory.co.uk
docker_compose_base_dir: /opt/docker-compose
docker_compose_cmd: docker compose
borg_passphrase: "{{lookup('passwordstore', 'servers/snackpot/borg.passphrase')}}"
smtp_password: "{{lookup('passwordstore', 'servers/snackpot/smtp.password')}}"
borg_ssh_key: "{{lookup('passwordstore', 'servers/snackpot/borg.id_rsa')}}"
borg_ssh_key_pub: "{{lookup('passwordstore', 'servers/snackpot/borg.id_rsa.pub')}}"
borg_repo_key: "{{lookup('passwordstore', 'servers/snackpot/borg_repo.key')}}"
firewall_ports:
- "22"
- "80"
@@ -33,6 +28,12 @@
# upnp (jellyfin, minidlna and upmpdcli)
- proto: udp
port: "1900"
# mopidy
- "6600"
- "6680"
- "5555"
# icecast
- "8000"
tasks:
- hostname:
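The `firewall_ports` list above mixes bare port strings with `{proto, port}` dicts, matching the "roles/ufw/tasks/main.yml - allow more flexible port config" commit in the log. A role consuming the list would presumably normalize each item along these lines (a sketch only: the task name and defaulting logic are assumptions; just `firewall_ports`, the `ufw_allow` variable, and the dict shape come from the playbook):

```yaml
# Hypothetical sketch of roles/ufw/tasks/main.yml consuming the mixed list.
# Bare string items default to tcp; dict items carry an explicit protocol.
- name: allow configured firewall ports
  community.general.ufw:
    rule: allow
    port: "{{ item.port | default(item) }}"
    proto: "{{ item.proto | default('tcp') }}"
  loop: "{{ ufw_allow }}"
  become: true
```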
@@ -60,21 +61,21 @@
vars:
ufw_allow: "{{ firewall_ports }}"
- include_role:
name: mrlesmithjr.netplan
apply: { become: true, tags: [netplan, network] }
tags: netplan, network
vars:
netplan_enabled: true
netplan_configuration:
network:
version: 2
ethernets:
enp3s0:
addresses: [192.168.0.55/24]
gateway4: 192.168.0.1
nameservers:
addresses: [192.168.0.1]
# - include_role:
# name: mrlesmithjr.netplan
# apply: { become: true, tags: [netplan, network] }
# tags: netplan, network
# vars:
# netplan_enabled: true
# netplan_configuration:
# network:
# version: 2
# ethernets:
# enp3s0:
# addresses: [192.168.0.55/24]
# gateway4: 192.168.0.1
# nameservers:
# addresses: [192.168.0.1]
- include_role:
name: docker_compose
@@ -87,7 +88,7 @@
file:
path: "{{ docker_compose_base_dir }}/{{ item.path }}"
state: directory
with_community.general.filetree: templates/docker-compose
with_filetree: templates/docker-compose
when: item.state == "directory" and item.path.count("/.") == 0
tags: docker-config
@@ -99,8 +100,8 @@
group: root
mode: 0660
backup: yes
# notify: restart docker compose services
with_community.general.filetree: templates/docker-compose
notify: restart docker compose services
with_filetree: templates/docker-compose
when: item.state == "file" and item.path.endswith(".j2")
tags: docker-config
@@ -112,8 +113,8 @@
group: root
mode: 0660
backup: yes
# notify: restart docker compose services
with_community.general.filetree: templates/docker-compose
notify: restart docker compose services
with_filetree: templates/docker-compose
when: |-
item.state == "file" and not (
item.path.endswith("~") or item.path.endswith(".j2")
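The `with_filetree` tasks above implement the split described in the "refine docker-compose config deploy" commit: directories are created, `*.j2` files are templated, and everything else is copied verbatim so binary files survive. Each filetree item carries `state`, `path` (relative to the scanned root), and `src` (absolute source path). A condensed sketch of the template branch follows; the `.j2`-stripping `dest` is an assumption, as only the loop, `when`, and tags appear in the diff:

```yaml
# Hypothetical condensed sketch of the ".j2 means template" branch.
- name: deploy docker-compose templates
  template:
    src: "{{ item.src }}"
    # Presumably the .j2 suffix is stripped on the target side:
    dest: "{{ docker_compose_base_dir }}/{{ item.path | regex_replace('\\.j2$', '') }}"
    owner: root
    group: root
    mode: 0660
  with_community.general.filetree: templates/docker-compose
  when: item.state == "file" and item.path.endswith(".j2")
  tags: docker-config
```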
@@ -134,33 +135,8 @@
owner: root
group: root
mode: 0550
with_community.general.filetree: templates/bin
with_filetree: templates/bin
when: item.state == "file" and not item.path.endswith("~")
tags: docker-config
- name: install appserver and borg backup services
template:
dest: "/etc/systemd/system/{{ item }}"
src: "{{ item }}.j2"
owner: root
group: root
mode: 0550
with_items:
- appserver.service
- borg.service
- borg.timer
tags: docker-configz
- name: enable backup service
service:
name: borg
state: started
enabled: yes
with_items:
- borg.service
- borg.timer
- appserver.service
# config nextcloud
# hide pg password


@@ -1,14 +0,0 @@
[Unit]
Description=appserver
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
WorkingDirectory={{docker_compose_base_dir}}
ExecStart={{docker_compose_cmd}} up -d --remove-orphans main-services
ExecStop={{docker_compose_cmd}} down
RemainAfterExit=true
[Install]
WantedBy=multi-user.target


@@ -1,29 +0,0 @@
#!/bin/bash
# Borg Backup runner
set -o pipefail
set -o errexit
cd "/opt/docker-compose"
set -vx
docker compose exec -T -u www-data nextcloud ./occ maintenance:mode --on ||
echo "WARNING: Couldn't stop nextcloud container, proceeding anyway"
docker compose down --remove-orphans || {
echo "ERROR: Couldn't stop docker compose, restarting and aborting"
docker network prune --force
docker compose up -d
exit
}
docker network prune --force # remove dangling networks
docker compose run --name borgmatic -T --rm borgmatic /backup.sh run ||
echo "ERROR: Couldn't run borgmatic"
docker compose up -d main-services || {
echo "ERROR: couldn't restart docker compose services, aborting with no services!"
exit 1
}
docker compose exec -T -u www-data nextcloud ./occ maintenance:mode --off ||
echo "Couldn't turn off nextcloud's maintenance mode"
docker compose run --name borgmatic -T --rm borgmatic /backup.sh check ||
echo "Couldn't run the borgmatic backup check"


@@ -1,2 +0,0 @@
#!/bin/sh
exec docker compose run -- borgmatic borg "$@"


@@ -1,2 +0,0 @@
#!/bin/sh
exec docker compose run -- borgmatic borgmatic "$@"


@@ -1,14 +1,14 @@
#!/bin/sh
dc_dir={{ docker_compose_base_dir }}
nextcloud_base_dir=/var/www/html
nextcloud_data_dir=/var/www/data
nextcloud_base_dir={{ nextcloud_base_dir }}
nextcloud_data_dir={{ nextcloud_data_dir }}
postgres_db_user={{ postgres_db_user }}
nextcloud_db_user={{ nextcloud_db_user }}
nextcloud_db={{ nextcloud_db }}
DOCKER_EXE() {
( cd $dc_dir; docker compose exec "$@" )
( cd $dc_dir; docker-compose exec "$@" )
}
ON_POSTGRES() {


@@ -1,6 +0,0 @@
[Unit]
Description=Borg backups
[Service]
Type=oneshot
ExecStart={{docker_compose_base_dir}}/bin/backup


@@ -1,10 +0,0 @@
[Unit]
Description=Run Borg backups nightly
[Timer]
OnCalendar=01:40:00
Persistent=true
[Install]
WantedBy=timers.target
WantedBy=borg.target


@@ -1,3 +0,0 @@
POSTGRES_PASSWORD={{ nextcloud_db_password }}
BORG_PASSPHRASE={{ borg_passphrase }}
SMTP_PASSWORD={{ smtp_password }}


@@ -1,11 +0,0 @@
FROM b3vis/borgmatic:latest
# Install stdbuf, used by backup.sh
RUN \
echo "* Installing Runtime Packages" \
&& apk upgrade --no-cache \
&& echo "* Installing Runtime Packages" \
&& apk add -U --no-cache \
coreutils
COPY --chmod=755 backup.sh /backup.sh


@@ -1,97 +0,0 @@
#!/bin/sh
# Run the backup and mail the logs:
# Depending on parameter 1:
# - test-smtp: just send a test email using $APPRISE_URI
# - run: create the backup, no checks
# - check: prune, compact and check the backup
# Anything else is an error.
set -o pipefail
# Set up environment
RUN_COMMAND="borgmatic --stats -v 2 create"
CHECK_COMMAND="borgmatic --stats -v 1 prune compact check"
LOGFILE="/tmp/backup_run_$(date +%s).log"
SUCCESS_PREFIX="=?utf-8?Q? =E2=9C=85 SUCCESS?="
FAILED_PREFIX="=?utf-8?Q? =E2=9D=8C FAILED?="
PARAM="$1"
# Helper function to prepend a timestamp and the first parameter to every line of STDIN
indent() {
while IFS='' read -rs line; do
echo "$(date -Iminutes)${1:- }$line"
done
}
# This function prepends timestamps to stderr and stdout of the
# command supplied as parameters to this.
log() {
# Adapted from https://stackoverflow.com/a/31151808
{
stdbuf -oL -eL "$@" 2>&1 1>&3 3>&- | indent " ! "
} 3>&1 1>&2 | indent " | " | tee -a "$LOGFILE"
}
report() {
if [ "$RESULT" = "0" ]; then
log echo "SUCCESS!"
PREFIX="$SUCCESS_PREFIX"
else
log echo "FAILED: $RESULT"
PREFIX="$FAILED_PREFIX"
fi
apprise -vv -t "$PREFIX: '$PARAM'" -b "$(cat $LOGFILE)" "$APPRISE_URI&pass=$SMTP_PASSWORD"
log echo "Report sent."
}
testmail() {
apprise -vv -t "TESTING!" -b "test mail, please ignore." "$APPRISE_URI&pass=$SMTP_PASSWORD"
}
failed() {
log echo "Exited abnormally!"
report
rm -f "$LOGFILE"
}
cleanup() {
borgmatic break-lock
echo "Removing $LOGFILE"
rm -f "$LOGFILE"
echo "Exiting."
}
# Handle various kinds of exit
trap failed INT QUIT KILL
trap cleanup EXIT
case "$PARAM" in
test-smtp)
echo "Testing mail via Apprise ($APPRISE_URI)"
testmail
echo "Done."
;;
check)
log echo STARTED: $CHECK_COMMAND
log $CHECK_COMMAND
RESULT=$?
report
;;
run)
log echo STARTED: $RUN_COMMAND
log $RUN_COMMAND
RESULT=$?
report
;;
dummy-run)
log echo STARTED: dummy-run
borgmatic nonesuch
RESULT=$?
report
;;
*)
log echo "UNKNOWN COMMAND: '$PARAM'"
report
;;
esac


@@ -2,6 +2,7 @@
# Adapted from:
# https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/postgres/fpm/docker-compose.yml
version: '3'
volumes:
postgres:
@@ -15,24 +16,10 @@ volumes:
jellyfin_cache:
minidlna_state:
minidlna_data:
navidrome_cache:
navidrome_data:
borgmatic-cache:
mopidy_data:
networks:
# This is for proxied containers
proxy-tier:
# This is for containers which need to be host mode
lan:
name: lan
driver: macvlan
driver_opts:
parent: enp3s0 # our ethernet interface
ipam:
config:
- gateway: 192.168.0.1
subnet: 192.168.0.0/24
ip_range: 192.168.0.240/29 # addresses 240-248 (6 usable)
services:
@@ -55,7 +42,7 @@ services:
- redis:/data
nextcloud:
image: nextcloud:31-fpm-alpine
image: nextcloud:22-fpm-alpine
restart: always
volumes:
- nextcloud_src:/var/www/html
@@ -75,13 +62,11 @@ services:
# test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:3000/health || exit 1"]
nextcloud_cron:
image: nextcloud:31-fpm-alpine
image: nextcloud:22-fpm-alpine
restart: always
volumes:
- nextcloud_src:/var/www/html
- nextcloud_data:/var/www/data
- minidlna_data:/var/www/ext/media
- /srv:/srv
entrypoint: /cron.sh
depends_on:
- postgres
@@ -96,7 +81,6 @@ services:
- web.env
depends_on:
- nextcloud
- letsencrypt-companion
networks:
- proxy-tier
- default
@@ -132,45 +116,6 @@ services:
env_file:
- letsencrypt-companion.env
navidrome:
build: ./navidrome
ports:
- "4533:4533"
volumes:
- navidrome_data:/data
- navidrome_cache:/cache
- minidlna_data:/music
networks:
proxy-tier:
default:
group_add:
# audio group ID (gid) on host system
- "29"
devices:
- "/dev/snd:/dev/snd"
depends_on:
- letsencrypt-companion
env_file:
- navidrome.env
bonob:
image: simojenki/bonob:latest
ports:
- "4534:4534"
networks:
lan: # Static ip for the container on the macvlan net
ipv4_address: 192.168.0.244
restart: unless-stopped
environment:
BNB_PORT: 4534
# ip address of your machine running bonob
BNB_URL: http://192.168.0.244:4534
BNB_SONOS_AUTO_REGISTER: "true"
BNB_SONOS_DEVICE_DISCOVERY: "true"
BNB_SUBSONIC_URL: http://navidrome:4533
depends_on:
- navidrome
jellyfin:
image: jellyfin/jellyfin:latest
restart: always
@@ -179,13 +124,9 @@ services:
- jellyfin_config:/config
- jellyfin_cache:/cache
- minidlna_data:/media
networks:
proxy-tier:
default:
lan: # Static ip for the container on the macvlan net
ipv4_address: 192.168.0.241
env_file:
- jellyfin.env
network_mode: host
environment:
- JELLYFIN_PublishedServerUrl=http://snackpot.local
minidlna:
image: vladgh/minidlna:latest
@@ -193,87 +134,101 @@ services:
volumes:
- minidlna_state:/minidlna
- minidlna_data:/media:ro
networks:
default:
lan: # Static ip for the container on the macvlan net
ipv4_address: 192.168.0.242
network_mode: host
environment:
# UID/GID are assumed to both be 2000 in other containers, to allow access
- UPID=2000
- UGID=2000
- MINIDLNA_INOTIFY=yes
- MINIDLNA_MEDIA_DIR_1=A,/media/audio
- MINIDLNA_MEDIA_DIR_2=V,/media/video
- MINIDLNA_FRIENDLY_NAME=MiniDLNA@Snackpot
- MINIDLNA_FRIENDLY_NAME=Snackpot
mopidy:
build: ./mopidy
ports:
- "6600:6600"
- "6680:6680"
- "8000:8000"
extra_hosts:
- "snackpot:host-gateway"
volumes:
# Makes mopidy data persistent
- mopidy_data:/data
# Add local music folder
- minidlna_data:/music:ro
devices:
- /dev/snd
restart: always
upmpdcli:
build: ./upmpdcli
networks:
default:
lan: # Static ip for the container on the macvlan net
ipv4_address: 192.168.0.243
depends_on:
- mopidy
# Host mode needed for advertisement
network_mode: host
restart: always
# a dummy container to start the main services as deps
# This allows the borgmatic image to be excluded when run as:
# docker-compose up main-services
main-services:
image: alpine:latest # a small dummy image
command: sh -c "sleep infinity"
depends_on:
- bonob
- nextcloud
- nextcloud_cron
- web
- jellyfin
- minidlna
- navidrome
- upmpdcli
# Next three services adapted from
# https://github.com/deisi/audiostation/blob/master/docker-compose.yml
# and https://github.com/IVData/dockerfiles/blob/master/mopidy-multiroom/docker-compose.yml
borgmatic:
build: ./borgmatic
restart: 'no' # This container is only run when required
depends_on: # These containers need to be up for dumps
- postgres
networks:
# Networks for DB access for backups
- default
volumes:
# Backup mount
- /mnt/c/backup/nick:/mnt/borg-repository
# Volumes to back up
- certs:/mnt/source/certs:ro
- nextcloud_data:/mnt/source/nextcloud_data:ro
- vhost.d:/mnt/source/vhost.d:ro
- html:/mnt/source/html:ro
- jellyfin_config:/mnt/source/jellyfin_config:ro
- minidlna_state:/mnt/source/minidlna_state:ro
- minidlna_data:/mnt/source/minidlna_data:ro
- navidrome_data:/mnt/source/navidrome_data:ro
# System volumes
- /etc/timezone:/etc/timezone:ro # timezone
- /etc/localtime:/etc/localtime:ro # localtime
- borgmatic-cache:/root/.cache/borg # non-volatile borg chunk cache
# Config volumes
- ./volumes/borgmatic-config:/etc/borgmatic.d/:ro # config.yaml, crontab.txt, mstmp.env
- ./volumes/borg-config:/root/.config/borg/ # borg encryption keys, other config written here
- ./volumes/borg-ssh-config:/root/.ssh/ # ssh keys; sshd writes knownhosts etc here
# snapserver:
# image: ivdata/snapserver:latest
# # ports:
# # - "1704:1704"
# # - "1705:1705"
# # - "1780:1780"
# volumes:
# # The volume with the sharesound fifo for snapcast to work
# - fifo:/tmp/snapcast
# # command: "snapserver -s pipe:///tmp/sharesound/snapfifo?name=Radio"
# # host mode is needed for snapserver advertisement
# network_mode: host
# restart: unless-stopped
environment:
# Work around the use of a fancy init system s6:
# https://github.com/borgmatic-collective/docker-borgmatic/issues/320#issuecomment-2089003361
S6_KEEP_ENV: 1
# snapclient:
# image: ivdata/snapclient:latest
# # ports:
# # - "1704:1704"
# # - "1705:1705"
# # - "1780:1780"
# devices:
# - /dev/snd
# volumes:
# # The volume with the sharesound fifo for snapcast to work
# - fifo:/tmp/snapcast
# # command: "snapserver -s pipe:///tmp/sharesound/snapfifo?name=Radio"
# # host mode is needed for snapserver advertisement
# network_mode: host
# restart: unless-stopped
# environment:
# - HOST=127.0.0.1
POSTGRES_USER: nextcloud
POSTGRES_DB: nextcloud
POSTGRES_HOST: postgres
BORG_ARCHIVE: nick
BORG_ARCHIVE_LABEL: snackpot
APPRISE_URI: "mailtos://mail.noodlefactory.co.uk:25?user=nc.noodlefactory.co.uk&from=borgmatic@snackpot.noodlefactory.co.uk&to=nick@noodlefactory.co.uk"
# SMTP_PASSWORD is set via borgmatic.env, created via ansible,
# and appended to APPRISE_URL by borgmatic/backup.sh script
# Test SMTP auth on the server https://doc.dovecot.org/admin_manual/debugging/debugging_authentication/
env_file:
- ./borgmatic.env
# mopidy:
# image: ivdata/mopidy:latest
# ports:
# - "6600:6600"
# - "6680:6680"
# - "5555:5555"
# depends_on:
# - snapserver
# volumes:
# # The volume with the fifo for snapcast to work with
# - fifo:/tmp/snapcast
# # Makes mopidy data persistent
# - mopidy_data:/mopidy
# # Add local music folder
# - minidlna_data:/media/music:ro
# restart: unless-stopped
# spotify:
# image: audiostation/spotify:latest
# # host mode is needed for Spotifyd advertisement
# network_mode: host
# depends_on:
# - snapserver
# volumes:
# # The volume with the sharesound fifo for snapcast to work
# - /tmp/sharesound:/tmp/sharesound
# restart: unless-stopped


@@ -1,4 +0,0 @@
VIRTUAL_HOST={{ jellyfin_hostname }}
JELLYFIN_PublishedServerUrl=https://{{ jellyfin_hostname }}/
LETSENCRYPT_HOST={{ jellyfin_hostname }}
LETSENCRYPT_EMAIL={{ letsencrypt_email }}


@@ -1,9 +0,0 @@
ND_SCANSCHEDULE=1h
ND_LOGLEVEL=info
ND_CACHEFOLDER="/cache"
ND_JUKEBOX_ENABLED="true"
ND_BASEURL="https://{{ navidrome_hostname }}"
VIRTUAL_HOST="{{ navidrome_hostname }}"
VIRTUAL_PORT=4533
LETSENCRYPT_HOST="{{ navidrome_hostname }}"
LETSENCRYPT_EMAIL="{{ letsencrypt_email }}"


@@ -1,5 +0,0 @@
FROM deluan/navidrome:0.55.2
RUN apk add --no-cache mpv
# Ensure that navidrome has access to these directories
RUN mkdir -p /data /cache && chown -R 1000:1000 /data /cache


@@ -1,2 +1,2 @@
FROM postgres:17-alpine
FROM postgres:11.9-alpine
COPY --chown={{ postgres_db_user }}:{{ postgres_db_user }} init.sql /docker-entrypoint-initdb.d/


@@ -1,3 +1,3 @@
FROM jwilder/nginx-proxy:1.7-alpine
FROM jwilder/nginx-proxy:alpine-0.7.0
COPY uploadsize.conf /etc/nginx/conf.d/uploadsize.conf


@@ -1,9 +1,8 @@
FROM alpine:3.21
FROM alpine:3.13
RUN apk update \
&& apk upgrade \
&& apk add --no-cache \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/testing \
python3 \
openssl \


@@ -8,10 +8,9 @@
#upnpiface =
#upnpip =
# upnpport =
upnpip = 192.168.0.243
# media renderer parameters
friendlyname = UpMpdCli@Snackpot
friendlyname = Snackpot
#upnpav = 1
#openhome = 1
#lumincompat = 0
@@ -26,10 +25,4 @@ checkcontentformat = 0
#mpdport = 6600
#mpdpassword =
#ownqueue = 1
#mpdhost = mopidy
subsonicbaseurl = https://nd.noodlefactory.co.uk
subsonicport = 443
subsonicuser = sonos
subsonicpassword = ContactExerciseCharges


@@ -1,3 +0,0 @@
# these files get written here by Bundlewrap
/*
!/.gitignore


@@ -1,5 +0,0 @@
# SSH key files get written here by Bundlewrap
/*
!/.gitignore
!/config


@@ -1 +0,0 @@
StrictHostKeyChecking accept-new


@@ -1,3 +0,0 @@
# these files get written here by Bundlewrap
/msmtp.env


@@ -1,727 +0,0 @@
# Where to look for files to backup, and where to store those backups.
# See https://borgbackup.readthedocs.io/en/stable/quickstart.html and
# https://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details.
location:
# List of source directories to backup. Globs and tildes are
# expanded. Do not backslash spaces in path names.
source_directories:
- /mnt/source/
# Paths to local or remote repositories (required). Tildes are
# expanded. Multiple repositories are backed up to in
# sequence. Borg placeholders can be used. See the output of
# "borg help placeholders" for details. See ssh_command for
# SSH options like identity file or port. If systemd service
# is used, then add local repository paths in the systemd
# service file to the ReadWritePaths list.
repositories:
- /mnt/borg-repository
# - ssh://${BORG_REPO_USER}@${BORG_REPO_HOST}:${BORG_REPO_PORT}/./${BORG_ARCHIVE}
# Working directory for the "borg create" command. Tildes are
# expanded. Useful for backing up using relative paths. See
# http://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details. Defaults to not set.
# working_directory: /path/to/working/directory
# Stay in same file system: do not cross mount points beyond
# the given source directories. Defaults to false. But when a
# database hook is used, the setting here is ignored and
# one_file_system is considered true.
# one_file_system: true
# Only store/extract numeric user and group identifiers.
# Defaults to false.
# numeric_ids: true
# Store atime into archive. Defaults to true in Borg < 1.2,
# false in Borg 1.2+.
# atime: false
# Store ctime into archive. Defaults to true.
# ctime: false
# Store birthtime (creation date) into archive. Defaults to
# true.
# birthtime: false
# Use Borg's --read-special flag to allow backup of block and
# other special devices. Use with caution, as it will lead to
# problems if used when backing up special devices such as
# /dev/zero. Defaults to false. But when a database hook is
# used, the setting here is ignored and read_special is
# considered true.
# read_special: false
# Record filesystem flags (e.g. NODUMP, IMMUTABLE) in archive.
# Defaults to true.
# flags: true
# Mode in which to operate the files cache. See
# http://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details. Defaults to "ctime,size,inode".
# files_cache: ctime,size,inode
# Alternate Borg local executable. Defaults to "borg".
# local_path: borg1
# Alternate Borg remote executable. Defaults to "borg".
# remote_path: borg1
# Any paths matching these patterns are included/excluded from
# backups. Globs are expanded. (Tildes are not.) See the
# output of "borg help patterns" for more details. Quote any
# value if it contains leading punctuation, so it parses
# correctly. Note that only one of "patterns" and
# "source_directories" may be used.
# patterns:
# - R /
# - '- /home/*/.cache'
# - + /home/susan
# - '- /home/*'
# Read include/exclude patterns from one or more separate
# named files, one pattern per line. Note that Borg considers
# this option experimental. See the output of "borg help
# patterns" for more details.
# patterns_from:
# - /etc/borgmatic/patterns
# Any paths matching these patterns are excluded from backups.
# Globs and tildes are expanded. Note that a glob pattern must
# either start with a glob or be an absolute path. Do not
# backslash spaces in path names. See the output of "borg help
# patterns" for more details.
exclude_patterns:
# - '*.pyc'
# - /home/*/.cache
# - '*/.vim*.tmp'
# - /etc/ssl
# - /home/user/path with spaces
- '*~'
- '*#'
- '.cache'
- 'cache'
- 'files_trashbin'
# Read exclude patterns from one or more separate named files,
# one pattern per line. See the output of "borg help patterns"
# for more details.
# exclude_from:
# - /etc/borgmatic/excludes
# Exclude directories that contain a CACHEDIR.TAG file. See
# http://www.brynosaurus.com/cachedir/spec.html for details.
# Defaults to false.
# exclude_caches: true
# Exclude directories that contain a file with the given
# filenames. Defaults to not set.
# exclude_if_present:
# - .nobackup
# If true, the exclude_if_present filename is included in
# backups. Defaults to false, meaning that the
# exclude_if_present filename is omitted from backups.
# keep_exclude_tags: true
# Exclude files with the NODUMP flag. Defaults to false.
# exclude_nodump: true
# Path for additional source files used for temporary internal
# state like borgmatic database dumps. Note that changing this
# path prevents "borgmatic restore" from finding any database
# dumps created before the change. Defaults to ~/.borgmatic
# borgmatic_source_directory: /tmp/borgmatic
# Repository storage options. See
# https://borgbackup.readthedocs.io/en/stable/usage/create.html and
# https://borgbackup.readthedocs.io/en/stable/usage/general.html for
# details.
storage:
# The standard output of this command is used to unlock the
# encryption key. Only use on repositories that were
# initialized with passcommand/repokey/keyfile encryption.
# Note that if both encryption_passcommand and
# encryption_passphrase are set, then encryption_passphrase
# takes precedence. Defaults to not set.
# encryption_passcommand: secret-tool lookup borg-repository repo-name
# Passphrase to unlock the encryption key with. Only use on
# repositories that were initialized with
# passphrase/repokey/keyfile encryption. Quote the value if it
# contains punctuation, so it parses correctly. And backslash
# any quote or backslash literals as well. Defaults to not
# set.
# encryption_passphrase: "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
# Number of seconds between each checkpoint during a
# long-running backup. See
# https://borgbackup.readthedocs.io/en/stable/faq.html
# for details. Defaults to checkpoints every 1800 seconds (30
# minutes).
# checkpoint_interval: 1800
# Specify the parameters passed to the chunker
# (CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS,
# HASH_WINDOW_SIZE). See
# https://borgbackup.readthedocs.io/en/stable/internals.html
# for details. Defaults to "19,23,21,4095".
# chunker_params: 19,23,21,4095
# Type of compression to use when creating archives. See
# http://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details. Defaults to "lz4".
# compression: lz4
# Remote network upload rate limit in kiBytes/second. Defaults
# to unlimited.
# upload_rate_limit: 100
# Number of times to retry a failing backup before giving up.
# Defaults to 0 (i.e., does not attempt retry).
# retries: 3
# Wait time between retries (in seconds) to allow transient
# issues to pass. Increases after each retry as a form of
# backoff. Defaults to 0 (no wait).
# retry_wait: 10
# Directory where temporary files are stored. Defaults to
# $TMPDIR
# temporary_directory: /path/to/tmpdir
# Command to use instead of "ssh". This can be used to specify
# ssh options. Defaults to not set.
# ssh_command: ssh -i /path/to/private/key
# Base path used for various Borg directories. Defaults to
# $HOME, ~$USER, or ~.
# borg_base_directory: /path/to/base
# Path for Borg configuration files. Defaults to
# $borg_base_directory/.config/borg
# borg_config_directory: /path/to/base/config
# Path for Borg cache files. Defaults to
# $borg_base_directory/.cache/borg
# borg_cache_directory: /path/to/base/cache
# Path for Borg security and encryption nonce files. Defaults
# to $borg_base_directory/.config/borg/security
# borg_security_directory: /path/to/base/config/security
# Path for Borg encryption key files. Defaults to
# $borg_base_directory/.config/borg/keys
# borg_keys_directory: /path/to/base/config/keys
# Umask to be used for borg create. Defaults to 0077.
# umask: 0077
# Maximum seconds to wait for acquiring a repository/cache
# lock. Defaults to 1.
# lock_wait: 5
# Name of the archive. Borg placeholders can be used. See the
# output of "borg help placeholders" for details. Defaults to
# "{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}". If you specify this
# option, consider also specifying a prefix in the retention
# and consistency sections to avoid accidental
# pruning/checking of archives with different archive name
# formats.
# archive_name_format: '{hostname}-documents-{now}'
archive_name_format: "${BORG_ARCHIVE_LABEL}-{now:%Y-%m-%dT%H:%M:%S.%f}"
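# With a custom archive_name_format like the one above, a matching
# prefix keeps prune/check from touching archives created under a
# different naming scheme. Sketch, assuming BORG_ARCHIVE_LABEL is
# set in borgmatic's environment:
#   retention:
#       prefix: "${BORG_ARCHIVE_LABEL}-"
#   consistency:
#       prefix: "${BORG_ARCHIVE_LABEL}-"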
# Bypass Borg error about a repository that has been moved.
# Defaults to false.
# relocated_repo_access_is_ok: true
# Bypass Borg error about a previously unknown unencrypted
# repository. Defaults to false.
# unknown_unencrypted_repo_access_is_ok: true
# Additional options to pass directly to particular Borg
# commands, handy for Borg options that borgmatic does not yet
# support natively. Note that borgmatic does not perform any
# validation on these options. Running borgmatic with
# "--verbosity 2" shows the exact Borg command-line
# invocation.
# extra_borg_options:
# Extra command-line options to pass to "borg init".
# init: --extra-option
# Extra command-line options to pass to "borg prune".
# prune: --extra-option
# Extra command-line options to pass to "borg compact".
# compact: --extra-option
# Extra command-line options to pass to "borg create".
# create: --extra-option
# Extra command-line options to pass to "borg check".
# check: --extra-option
# Retention policy for how many backups to keep in each category. See
# https://borgbackup.readthedocs.io/en/stable/usage/prune.html for
# details. At least one of the "keep" options is required for pruning
# to work. To skip pruning entirely, run "borgmatic create" or "check"
# without the "prune" action. See borgmatic documentation for details.
retention:
# Keep all archives within this time interval.
# keep_within: 3H
# Number of secondly archives to keep.
# keep_secondly: 60
# Number of minutely archives to keep.
# keep_minutely: 60
# Number of hourly archives to keep.
# keep_hourly: 24
# Number of daily archives to keep.
keep_daily: 7
# Number of weekly archives to keep.
keep_weekly: 4
# Number of monthly archives to keep.
keep_monthly: 6
# Number of yearly archives to keep.
keep_yearly: 5
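# With the settings above, pruning retains the newest 7 daily, 4
# weekly, 6 monthly and 5 yearly archives. To preview the effect
# without deleting anything (a sketch; flag placement may vary by
# borgmatic version):
#   borgmatic prune --stats --dry-run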
# When pruning, only consider archive names starting with this
# prefix. Borg placeholders can be used. See the output of
# "borg help placeholders" for details. Defaults to
# "{hostname}-". Use an empty value to disable the default.
# prefix: sourcehostname
# Consistency checks to run after backups. See
# https://borgbackup.readthedocs.io/en/stable/usage/check.html and
# https://borgbackup.readthedocs.io/en/stable/usage/extract.html for
# details.
# consistency:
# List of one or more consistency checks to run on a periodic
# basis (if "frequency" is set) or every time borgmatic runs
# checks (if "frequency" is omitted).
# checks:
# Name of consistency check to run: "repository",
# "archives", "data", and/or "extract". Set to
# "disabled" to disable all consistency checks.
# "repository" checks the consistency of the
# repository, "archives" checks all of the
# archives, "data" verifies the integrity of the
# data within the archives, and "extract" does an
# extraction dry-run of the most recent archive.
# Note that "data" implies "archives".
# - name: repository
# How frequently to run this type of consistency
# check (as a best effort). The value is a number
# followed by a unit of time. E.g., "2 weeks" to
# run this consistency check no more than every
# two weeks for a given repository or "1 month" to
# run it no more than monthly. Defaults to
# "always": running this check every time checks
# are run.
# frequency: 2 weeks
# Paths to a subset of the repositories in the location
# section on which to run consistency checks. Handy in case
# some of your repositories are very large, and so running
# consistency checks on them would take too long. Defaults to
# running consistency checks on all repositories configured in
# the location section.
# check_repositories:
# - user@backupserver:sourcehostname.borg
# Restrict the number of checked archives to the last n.
# Applies only to the "archives" check. Defaults to checking
# all archives.
# check_last: 3
# When performing the "archives" check, only consider archive
# names starting with this prefix. Borg placeholders can be
# used. See the output of "borg help placeholders" for
# details. Defaults to "{hostname}-". Use an empty value to
# disable the default.
# prefix: sourcehostname
# Options for customizing borgmatic's own output and logging.
output:
# Apply color to console output. Can be overridden with
# --no-color command-line flag. Defaults to true.
color: false
# Shell commands, scripts, or integrations to execute at various
# points during a borgmatic run. IMPORTANT: All provided commands and
# scripts are executed with user permissions of borgmatic. Do not
# forget to set secure permissions on this configuration file (chmod
# 0600) as well as on any script called from a hook (chmod 0700) to
# prevent potential shell injection or privilege escalation.
hooks:
# List of one or more shell commands or scripts to execute
# before all the actions for each repository.
# before_actions:
# - echo "Starting actions."
# List of one or more shell commands or scripts to execute
# before creating a backup, run once per repository.
# before_backup:
# - echo "Starting a backup."
# List of one or more shell commands or scripts to execute
# before pruning, run once per repository.
# before_prune:
# - echo "Starting pruning."
# List of one or more shell commands or scripts to execute
# before compaction, run once per repository.
# before_compact:
# - echo "Starting compaction."
# List of one or more shell commands or scripts to execute
# before consistency checks, run once per repository.
# before_check:
# - echo "Starting checks."
# List of one or more shell commands or scripts to execute
# before extracting a backup, run once per repository.
# before_extract:
# - echo "Starting extracting."
# List of one or more shell commands or scripts to execute
# after creating a backup, run once per repository.
# after_backup:
# - echo "Finished a backup."
# List of one or more shell commands or scripts to execute
# after compaction, run once per repository.
# after_compact:
# - echo "Finished compaction."
# List of one or more shell commands or scripts to execute
# after pruning, run once per repository.
# after_prune:
# - echo "Finished pruning."
# List of one or more shell commands or scripts to execute
# after consistency checks, run once per repository.
# after_check:
# - echo "Finished checks."
# List of one or more shell commands or scripts to execute
# after extracting a backup, run once per repository.
# after_extract:
# - echo "Finished extracting."
# List of one or more shell commands or scripts to execute
# after all actions for each repository.
# after_actions:
# - echo "Finished actions."
# List of one or more shell commands or scripts to execute
# when an exception occurs during a "prune", "compact",
# "create", or "check" action or an associated before/after
# hook.
# on_error:
# - echo "Error during prune/compact/create/check."
# List of one or more shell commands or scripts to execute
# before running all actions (if one of them is "create").
# These are collected from all configuration files and then
# run once before all of them (prior to all actions).
# before_everything:
# - echo "Starting actions."
# List of one or more shell commands or scripts to execute
# after running all actions (if one of them is "create").
# These are collected from all configuration files and then
# run once after all of them (after any action).
# after_everything:
# - echo "Completed actions."
# List of one or more PostgreSQL databases to dump before
# creating a backup, run once per configuration file. The
# database dumps are added to your source directories at
# runtime, backed up, and removed afterwards. Requires
# pg_dump/pg_dumpall/pg_restore commands. See
# https://www.postgresql.org/docs/current/app-pgdump.html and
# https://www.postgresql.org/docs/current/libpq-ssl.html for
# details.
postgresql_databases:
# Database name (required if using this hook). Or
# "all" to dump all databases on the host. Note
# that using this database hook implicitly enables
# both read_special and one_file_system (see
# above) to support dump and restore streaming.
# - name: users
- name: ${POSTGRES_DB}
# Database hostname to connect to. Defaults to
# connecting via local Unix socket.
# hostname: database.example.org
hostname: ${POSTGRES_HOST}
# Port to connect to. Defaults to 5432.
# port: 5433
# Username with which to connect to the database.
# Defaults to the username of the current user.
# You probably want to specify the "postgres"
# superuser here when the database name is "all".
# username: dbuser
username: ${POSTGRES_USER}
# Password with which to connect to the database.
# Omitting a password will only work if PostgreSQL
# is configured to trust the configured username
# without a password or you create a ~/.pgpass
# file.
# password: trustsome1
password: ${POSTGRES_PASSWORD}
# Database dump output format. One of "plain",
# "custom", "directory", or "tar". Defaults to
# "custom" (unlike raw pg_dump). See pg_dump
# documentation for details. Note that format is
# ignored when the database name is "all".
# format: directory
# SSL mode to use to connect to the database
# server. One of "disable", "allow", "prefer",
# "require", "verify-ca" or "verify-full".
# Defaults to "disable".
# ssl_mode: require
# Path to a client certificate.
# ssl_cert: /root/.postgresql/postgresql.crt
# Path to a private client key.
# ssl_key: /root/.postgresql/postgresql.key
# Path to a root certificate containing a list of
# trusted certificate authorities.
# ssl_root_cert: /root/.postgresql/root.crt
# Path to a certificate revocation list.
# ssl_crl: /root/.postgresql/root.crl
# Additional pg_dump/pg_dumpall options to pass
# directly to the dump command, without performing
# any validation on them. See pg_dump
# documentation for details.
# options: --role=someone
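# To restore one of these dumps later, borgmatic provides a
# "restore" action. Sketch, per the borgmatic documentation:
#   borgmatic restore --archive latest --database users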
# List of one or more MySQL/MariaDB databases to dump before
# creating a backup, run once per configuration file. The
# database dumps are added to your source directories at
# runtime, backed up, and removed afterwards. Requires
# mysqldump/mysql commands (from either MySQL or MariaDB). See
# https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html or
# https://mariadb.com/kb/en/library/mysqldump/ for details.
# mysql_databases:
# Database name (required if using this hook). Or
# "all" to dump all databases on the host. Note
# that using this database hook implicitly enables
# both read_special and one_file_system (see
# above) to support dump and restore streaming.
# - name: ${POSTGRES_DB}
# Database hostname to connect to. Defaults to
# connecting via local Unix socket.
# hostname: ${POSTGRES_HOST}
# Port to connect to. Defaults to 3306.
# port: 3307
# Username with which to connect to the database.
# Defaults to the username of the current user.
# username: ${POSTGRES_USER}
# Password with which to connect to the database.
# Omitting a password will only work if MySQL is
# configured to trust the configured username
# without a password.
# password: ${POSTGRES_PASSWORD}
# Additional mysql options to pass directly to
# the mysql command that lists available
# databases, without performing any validation on
# them. See mysql documentation for details.
# list_options: --defaults-extra-file=my.cnf
# Additional mysqldump options to pass directly to
# the dump command, without performing any
# validation on them. See mysqldump documentation
# for details.
# options: --skip-comments
# List of one or more MongoDB databases to dump before
# creating a backup, run once per configuration file. The
# database dumps are added to your source directories at
# runtime, backed up, and removed afterwards. Requires
# mongodump/mongorestore commands. See
# https://docs.mongodb.com/database-tools/mongodump/ and
# https://docs.mongodb.com/database-tools/mongorestore/ for
# details.
# mongodb_databases:
# Database name (required if using this hook). Or
# "all" to dump all databases on the host. Note
# that using this database hook implicitly enables
# both read_special and one_file_system (see
# above) to support dump and restore streaming.
# - name: users
# Database hostname to connect to. Defaults to
# connecting to localhost.
# hostname: database.example.org
# Port to connect to. Defaults to 27017.
# port: 27018
# Username with which to connect to the database.
# Skip it if no authentication is needed.
# username: dbuser
# Password with which to connect to the database.
# Skip it if no authentication is needed.
# password: trustsome1
# Authentication database where the specified
# username exists. If no authentication database
# is specified, the database provided in "name"
# is used. If "name" is "all", the "admin"
# database is used.
# authentication_database: admin
# Database dump output format. One of "archive",
# or "directory". Defaults to "archive". See
# mongodump documentation for details. Note that
# format is ignored when the database name is
# "all".
# format: directory
# Additional mongodump options to pass
# directly to the dump command, without performing
# any validation on them. See mongodump
# documentation for details.
# options: --role=someone
# Configuration for a monitoring integration with ntfy. See
# https://ntfy.sh/ and the borgmatic monitoring documentation
# for details.
# ntfy:
# The topic to publish to.
# (https://ntfy.sh/docs/publish/)
# topic: topic
# The address of your self-hosted ntfy.sh instance.
# server: https://ntfy.your-domain.com
# start:
# The title of the message.
# title: Ping!
# The message body to publish.
# message: Your backups have failed.
# The priority to set.
# priority: urgent
# Tags to attach to the message.
# tags: incoming_envelope
# finish:
# The title of the message.
# title: Ping!
# The message body to publish.
# message: Your backups have failed.
# The priority to set.
# priority: urgent
# Tags to attach to the message.
# tags: incoming_envelope
# fail:
# The title of the message.
# title: Ping!
# The message body to publish.
# message: Your backups have failed.
# The priority to set.
# priority: urgent
# Tags to attach to the message.
# tags: incoming_envelope
# List of one or more monitoring states to ping for:
# "start", "finish", and/or "fail". Defaults to
# pinging for failure only.
# states:
# - start
# - finish
# Configuration for a monitoring integration with
# Healthchecks. Create an account at https://healthchecks.io
# (or self-host Healthchecks) if you'd like to use this
# service. See borgmatic monitoring documentation for details.
# healthchecks:
# Healthchecks ping URL or UUID to notify when a
# backup begins, ends, or errors.
# ping_url: https://hc-ping.com/your-uuid-here
# Verify the TLS certificate of the ping URL host.
# Defaults to true.
# verify_tls: false
# Send borgmatic logs to Healthchecks as part of the
# "finish" state. Defaults to true.
# send_logs: false
# Number of bytes of borgmatic logs to send to
# Healthchecks, ideally the same as PING_BODY_LIMIT
# configured on the Healthchecks server. Set to 0 to
# send all logs and disable this truncation. Defaults
# to 100000.
# ping_body_limit: 200000
# List of one or more monitoring states to ping for:
# "start", "finish", and/or "fail". Defaults to
# pinging for all states.
# states:
# - finish
# Configuration for a monitoring integration with Cronitor.
# Create an account at https://cronitor.io if you'd
# like to use this service. See borgmatic monitoring
# documentation for details.
# cronitor:
# Cronitor ping URL to notify when a backup begins,
# ends, or errors.
# ping_url: https://cronitor.link/d3x0c1
# Configuration for a monitoring integration with PagerDuty.
# Create an account at https://www.pagerduty.com/ if you'd
# like to use this service. See borgmatic monitoring
# documentation for details.
# pagerduty:
# PagerDuty integration key used to notify PagerDuty
# when a backup errors.
# integration_key: a177cad45bd374409f78906a810a3074
# Configuration for a monitoring integration with Cronhub.
# Create an account at https://cronhub.io if you'd like to
# use this service. See borgmatic monitoring documentation
# for details.
# cronhub:
# Cronhub ping URL to notify when a backup begins,
# ends, or errors.
# ping_url: https://cronhub.io/ping/1f5e3410-254c-5587
# Umask used when executing hooks. Defaults to the umask that
# borgmatic is run with.
# umask: 0077

View File

@@ -1 +0,0 @@
-0 1 * * * PATH=$PATH:/usr/bin /usr/local/bin/borgmatic --stats -v 0 2>&1

View File

@@ -1 +0,0 @@
-MAIL_PASSWORD={{ smtp_password }}

View File

@@ -1,3 +1,3 @@
-FROM nginx:1.19.6-alpine
+FROM nginx:1.28-alpine
-COPY nginx.conf /etc/nginx/nginx.conf
+COPY nginx.conf /etc/nginx/conf.d/default.conf

View File

@@ -1,271 +1,173 @@
# Adapted from https://docs.nextcloud.com/server/31/admin_manual/installation/nginx.html
# Insert as /etc/nginx/conf.d/default.conf
worker_processes auto;
# Set .mjs and .wasm MIME types
# Either include it in the default mime.types list
# and include that list explicitly or add the file extension
# only for Nextcloud like below:
types {
text/javascript mjs;
application/wasm wasm;
}
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
upstream php-handler {
server nextcloud:9000;
}
# Set the `immutable` cache control options only for assets with a cache busting `v` argument
map $arg_v $asset_immutable {
"" "";
default ", immutable";
events {
worker_connections 1024;
}
server {
listen 80;
server_name nc.noodlefactory.co.uk;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Prevent nginx HTTP Server Detection
server_tokens off;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# HSTS settings
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;
access_log /var/log/nginx/access.log main;
# Add headers to serve security related headers
# Before enabling Strict-Transport-Security headers please read into this
# topic first.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "noindex, nofollow" always;
add_header X-XSS-Protection "1; mode=block" always;
sendfile on;
#tcp_nopush on;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
keepalive_timeout 65;
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.16.0.0/12;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Real-IP;
# Path to the root of your installation
root /var/www/html;
#gzip on;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
upstream php-handler {
server nextcloud:9000;
}
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
server {
listen 80;
# The following rule is only needed for the Social app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
# Add headers to serve security related headers
# Before enabling Strict-Transport-Security headers please read into this
# topic first.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# set max upload size and increase upload timeout
client_max_body_size 10G;
client_body_timeout 300s;
fastcgi_buffers 64 4K;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
# This setting allows you to optimize the HTTP/2 bandwidth.
# See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
# for tuning hints
client_body_buffer_size 512k;
# Path to the root of your installation
root /var/www/html;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Pagespeed is not supported by Nextcloud, so if your server is built
# with the `ngx_pagespeed` module, uncomment this line to disable it.
#pagespeed off;
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
# The following rule is only needed for the Social app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
# Specify how to handle directories -- specifying `/index.php$request_uri`
# here as the fallback means that Nginx always exhibits the desired behaviour
# when a client requests a path that corresponds to a directory that exists
# on the server. In particular, if that directory contains an index.php file,
# that file is correctly served; if it doesn't, then the request is passed to
# the front-end controller. This consistent behaviour means that we don't need
# to specify custom rules for certain paths (e.g. images and other assets,
# `/updater`, `/ocs-provider`), and thus
# `try_files $uri $uri/ /index.php$request_uri`
# always provides the desired behaviour.
index index.php index.html /index.php$request_uri;
location = /.well-known/carddav {
return 301 $scheme://$host:$server_port/remote.php/dav;
}
# Rule borrowed from `.htaccess` to handle Microsoft DAV clients
location = / {
if ( $http_user_agent ~ ^DavClnt ) {
return 302 /remote.php/webdav/$is_args$args;
}
location = /.well-known/caldav {
return 301 $scheme://$host:$server_port/remote.php/dav;
}
# set max upload size
client_max_body_size 10G;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Uncomment if your server is built with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;
location / {
rewrite ^ /index.php;
}
location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
deny all;
}
location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
# fastcgi_param HTTPS on;
# Avoid sending the security headers twice
fastcgi_param modHeadersAvailable true;
# Enable pretty urls
fastcgi_param front_controller_active true;
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
}
location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
try_files $uri/ =404;
index index.php;
}
# Adding the cache control header for js, css and map files
# Make sure it is BELOW the PHP block
location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
# Add headers to serve security related headers (It is intended to
# have those duplicated to the ones above)
# Before enabling Strict-Transport-Security headers please read into
# this topic first.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# Optional: Don't log access to assets
access_log off;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap|mp4|webm)$ {
try_files $uri /index.php$request_uri;
# Optional: Don't log access to other assets
access_log off;
}
}
# Make a regex exception for `/.well-known` so that clients can still
# access it despite the existence of the regex rule
# `location ~ /(\.|autotest|...)` which would otherwise handle requests
# for `/.well-known`.
location ^~ /.well-known {
# The rules in this block are an adaptation of the rules
# in `.htaccess` that concern `/.well-known`.
location = /.well-known/carddav { return 301 /remote.php/dav/; }
location = /.well-known/caldav { return 301 /remote.php/dav/; }
location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
location /.well-known/pki-validation { try_files $uri $uri/ =404; }
# Let Nextcloud's API for `/.well-known` URIs handle all other
# requests by passing them to the front-end controller.
return 301 /index.php$request_uri;
}
# Rules borrowed from `.htaccess` to hide certain paths from clients
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }
# Ensure this block, which passes PHP files to the PHP process, is above the blocks
# which handle static assets (as seen below). If this block is not declared first,
# then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
# to the URI, resulting in a HTTP 500 error response.
location ~ \.php(?:$|/) {
# Required for legacy support
rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
fastcgi_param front_controller_active true; # Enable pretty urls
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_max_temp_file_size 0;
}
# Serve static files
location ~ \.(?:css|js|mjs|svg|gif|ico|jpg|png|webp|wasm|tflite|map|ogg|flac)$ {
try_files $uri /index.php$request_uri;
# HTTP response headers borrowed from Nextcloud `.htaccess`
add_header Cache-Control "public, max-age=15778463$asset_immutable";
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "noindex, nofollow" always;
add_header X-XSS-Protection "1; mode=block" always;
access_log off; # Optional: Don't log access to assets
}
location ~ \.(otf|woff2?)$ {
try_files $uri /index.php$request_uri;
expires 7d; # Cache-Control policy borrowed from `.htaccess`
access_log off; # Optional: Don't log access to assets
}
# Rule borrowed from `.htaccess`
location /remote {
return 301 /remote.php$request_uri;
}
location / {
try_files $uri $uri/ /index.php$request_uri;
}
##
## location / {
## rewrite ^ /index.php;
## }
##
## location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
## deny all;
## }
## location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
## deny all;
## }
##
## location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
## fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
## set $path_info $fastcgi_path_info;
## try_files $fastcgi_script_name =404;
## include fastcgi_params;
## fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
## fastcgi_param PATH_INFO $path_info;
## # fastcgi_param HTTPS on;
##
## # Avoid sending the security headers twice
## fastcgi_param modHeadersAvailable true;
##
## # Enable pretty urls
## fastcgi_param front_controller_active true;
## fastcgi_pass php-handler;
## fastcgi_intercept_errors on;
## fastcgi_request_buffering off;
## }
##
## location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
## try_files $uri/ =404;
## index index.php;
## }
##
## # Adding the cache control header for js, css and map files
## # Make sure it is BELOW the PHP block
## location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
## try_files $uri /index.php$request_uri;
## add_header Cache-Control "public, max-age=15778463";
## # Add headers to serve security related headers (It is intended to
## # have those duplicated to the ones above)
## # Before enabling Strict-Transport-Security headers please read into
## # this topic first.
## #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
## #
## # WARNING: Only add the preload option once you read about
## # the consequences in https://hstspreload.org/. This option
## # will add the domain to a hardcoded list that is shipped
## # in all major browsers and getting removed from this list
## # could take several months.
## add_header Referrer-Policy "no-referrer" always;
## add_header X-Content-Type-Options "nosniff" always;
## add_header X-Download-Options "noopen" always;
## add_header X-Frame-Options "SAMEORIGIN" always;
## add_header X-Permitted-Cross-Domain-Policies "none" always;
## add_header X-Robots-Tag "none" always;
## add_header X-XSS-Protection "1; mode=block" always;
##
## # Optional: Don't log access to assets
## access_log off;
## }
##
## location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap|mp4|webm)$ {
## try_files $uri /index.php$request_uri;
## # Optional: Don't log access to other assets
## access_log off;
## }
}