76 Commits

Author SHA1 Message Date
Nick Stokoe
4fddb06060 templates/docker-compose/upmpdcli/ - bump alpine to 3.21 2025-05-28 12:22:32 +01:00
Nick Stokoe
c0b289d2bc templates/docker-compose/proxy/Dockerfile - bump to alpine-0.7 2025-05-28 12:22:32 +01:00
Nick Stokoe
ce18785ccb templates/docker-compose/navidrome/Dockerfile - bump to 55.2 2025-05-28 12:22:32 +01:00
Nick Stokoe
ca24f7dae7 templates/docker-compose/ - update Borgmatics image
It's necessary to update to the `latest` image because the
`latest-msmtp` image is now unmaintained and doesn't support postgres
17.

However, this means we need to adjust things (the backup script and
the environment variable config) because we need to use the latest
notification mechanism, `apprise` instead of msmtp.

Tested, seems to be working.
2025-05-28 12:22:32 +01:00
Nick Stokoe
5ca9ecfe2a templates/docker-compose/borgmatic/ - avoid losing STDERR lines
use stdbuf to change the buffering mode to be linewise
2025-05-28 12:22:32 +01:00
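The fix above relies on coreutils' `stdbuf` forcing line buffering on a command whose output goes into a pipe; a minimal sketch of the effect (`tr` stands in here for the real command):

```shell
# When stdout is a pipe, the C library normally switches to block
# buffering, so lines can sit in a buffer and be lost if the writer
# dies early. stdbuf -oL -eL forces line buffering on stdout/stderr.
out=$(printf 'alpha\nbeta\n' | stdbuf -oL -eL tr 'a-z' 'A-Z')
printf '%s\n' "$out"   # prints ALPHA then BETA, one line at a time
```

This only works for commands that use the C stdio buffering conventions, which is why the borgmatic image needs coreutils installed.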
Nick Stokoe
4fecd838ae templates/docker-compose/web/ - fix nginx.conf for latest NC 2025-05-28 12:18:33 +01:00
Nick Stokoe
f35dd620aa UPGRADING-NC.md - draft version 2025-05-26 13:09:28 +01:00
Nick Stokoe
66b472dab2 templates/docker-compose/docker-compose.yml - drop "version" attribute
Docker compose doesn't seem to like it and complains now.
2025-05-26 13:09:28 +01:00
Nick Stokoe
7061ef37f8 templates/docker-compose/docker-compose.yml - bump nextcloud to v31
Actual upgrade done one major version at a time.
2025-05-26 13:09:28 +01:00
Nick Stokoe
5c12a6f053 templates/docker-compose/postgres/Dockerfile.j2 - bump to v17 2025-05-26 13:09:28 +01:00
Nick Stokoe
7c4f1091b4 * - use "docker compose" not "docker-compose"
The latter is now obsolete
2025-05-26 13:09:19 +01:00
Nick Stokoe
b26ac645bd snackpot.yml - comment out nonworking notifies 2025-05-25 18:34:38 +01:00
Nick Stokoe
8ad6e1c81c snackpot.yml,requirements.yml - update for ansible 2.18.6 2025-05-25 18:34:38 +01:00
Nick Stokoe
4a759b3ff1 templates/docker-compose/navidrome/Dockerfile - bump navidrome to v54.4 2025-05-25 18:29:56 +01:00
Nick Stokoe
da90c4713c templates/docker-compose/upmpdcli/upmpdcli.conf - connect to subsonic
i.e. Navidrome.  Using the global URL for now - not yet worked out how
to connect to it over the Docker network.
2024-01-03 14:20:43 +00:00
Nick Stokoe
deaf0407b8 remove mopidy submodule and uses thereof 2024-01-03 14:20:24 +00:00
Nick Stokoe
2669b6f466 snackpot.yml templates/* - add navidrome and bonob containers 2024-01-03 13:29:51 +00:00
Nick Stokoe
b40cb39327 templates/bin/backup - better protect against failures
I experienced a problem which stopped this backup from running, so
let's allow it to fail more gracefully and not leave the backup in a
state it can't resume from, if possible.
2024-01-02 14:13:10 +00:00
Nick Stokoe
8ae5a1aa60 templates/docker-compose/docker-compose.yml - remove mopidy 2024-01-02 14:12:35 +00:00
Nick Stokoe
598c215a5a templates/docker-compose/upmpdcli/Dockerfile - upgrade alpine image 2024-01-02 14:12:07 +00:00
Nick Stokoe
5462cb9073 templates/docker-compose/docker-compose.yml - upgrade nextcloud 2024-01-02 14:11:51 +00:00
Nick Stokoe
e19f124bb6 templates/borg.service - use templated paths/docker command 2023-04-18 08:58:02 +01:00
Nick Stokoe
a83123377f templates/bin/backup - use templated paths/docker command 2023-04-18 08:57:59 +01:00
Nick Stokoe
56bedda69e templates/bin/{borg,borgmatic} - helper shims for maintenance 2023-04-18 08:29:37 +01:00
Nick Stokoe
f241e98998 templates/bin/backup - put nc into maint mode before backing up
Hoping this will avoid problems with NC restarting broken
2023-04-18 08:29:29 +01:00
Nick Stokoe
86653e5f79 borgmatic/backup.sh - fix printf bug
Interpolation can and does insert % placeholders into the printf
format string - although not valid ones, as they're intended for
Python.

So be more careful!  Put all interpolated text into the parameters to
printf, or use echo.

Also, keep some of the alterations used whilst diagnosing this.
2023-04-15 22:36:57 +01:00
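The fix described above can be illustrated with a minimal sketch (the message text is invented for illustration):

```shell
# A borg/python-style placeholder leaking into printf's format string:
msg='pruning kept {hostname} at 50% capacity'
# Unsafe: printf "$msg\n" would try to parse "% c" as a conversion
# specifier and mangle the output.
# Safe: keep all interpolated text in the arguments, never the format.
printf '%s\n' "$msg"   # prints: pruning kept {hostname} at 50% capacity
# ...or simply:
echo "$msg"
```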
Nick Stokoe
bd3ad70af4 backup/backup.sh - only create backup whilst services down; check after 2023-04-13 08:02:08 +01:00
Nick Stokoe
88d875d638 backup.sh - log with timestamps 2023-04-12 20:20:34 +01:00
Nick Stokoe
c8b1d00230 templates/docker-compose/borgmatic/backup.sh - refinements
Trap failures, ensure cleanup.

Send an email even if we fail.

Break borg locks in cleanup.

Implement testing smtp emails.
2023-04-12 20:01:11 +01:00
Nick Stokoe
68b73990b4 borgmatic config.yaml - set the archive label meaningfully
Currently {hostname} expands to an anonymous number. Set this part of
the archive name to something we can recognise.
2023-04-12 20:01:11 +01:00
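A sketch of how that can look in the borgmatic config.yaml (`archive_name_format` is borgmatic's option for this; the `snackpot-` label is illustrative, and in older borgmatic versions the option lives under the `location:` section):

```yaml
# borgmatic config.yaml fragment: replace the anonymous container
# hostname with a fixed, recognisable label in archive names.
archive_name_format: 'snackpot-{now:%Y-%m-%dT%H:%M:%S}'
```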
Nick Stokoe
fe9f3d9bdd templates/docker-compose/mopidy 2023-04-06 15:14:46 +01:00
Nick Stokoe
ae0ddaea20 templates/docker-compose/docker-compose.yml - add link to doveadm trick 2023-04-04 12:51:25 +01:00
Nick Stokoe
7541bde9c6 snackpot.yml - fixup, enable services 2023-04-04 12:51:00 +01:00
Nick Stokoe
2d3d9217e8 snackpot.yml etc. - add borgmatic backup container 2023-04-04 11:52:08 +01:00
Nick Stokoe
a6872077a9 roles/docker_compose/tasks/main.yml - enable docker buildkit
So that COPY --chmod works, which is useful for an ansible copy which
doesn't preserve permissions.
2023-04-03 22:09:25 +01:00
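BuildKit's `COPY --chmod`, which the commit above enables, sets permissions at copy time, so the mode of the source file on the build host doesn't matter. A minimal sketch (base image and script name are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.21
# --chmod requires BuildKit. It makes the file executable in the image
# even though Ansible's copy/template modules may have deployed
# backup.sh without the executable bit.
COPY --chmod=755 backup.sh /backup.sh
```

On engines where BuildKit isn't the default, build with `DOCKER_BUILDKIT=1 docker build .`; the role achieves the same by setting `"features": {"buildkit": true}` in daemon.json.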
Nick Stokoe
8d7163e7e6 requirements.yml,SETUP.md - prerequisites 2023-04-03 16:51:24 +01:00
Nick Stokoe
c526c6e9c0 INSTALL.md 2023-04-03 16:39:43 +01:00
Nick Stokoe
018b7ec9af templates/docker-compose/docker-compose.yml - upgrade to NC 26 2023-04-01 18:58:09 +01:00
Nick Stokoe
d088d95f1d templates/docker-compose/docker-compose.yml - bump nextcloud to v25 2022-12-12 12:29:48 +00:00
Nick Stokoe
ea8c22f1ae templates/docker-compose/docker-compose.yml - fix disappearing ext share links
See
https://github.com/nextcloud/server/issues/25852#issuecomment-997964401

External mount share links are disappearing after a few minutes.
2022-08-18 10:44:30 +01:00
Nick Stokoe
c58eed2657 docker-compose.yml - update nextcloud to v24
The upgrade was actually executed in steps, v22 -> v23 -> v24, and the
net result committed.
2022-07-01 06:53:37 +01:00
Nick Stokoe
181a1967f9 name MiniDLNA and UpMpdCli servers distinctly
So we can tell which one we are seeing in listings
2021-12-04 15:59:26 +00:00
Nick Stokoe
31ec4b2d2e fixup hardwired paths 2021-12-04 15:58:52 +00:00
Nick Stokoe
32cabdd1f4 docker-compose.yml etc. - proxy jellyfin on virtual host
nominally working, although some hard-wired values to remove
2021-12-03 17:59:35 +00:00
Nick Stokoe
12f3fcbaaf docker-compose.yml - reverse proxy the jellyfin container as virtual host
we need to take it off host network mode, add it to the proxy-tier and
default networks, then enable proxying and lets-encrypt.

Tested, works.
2021-12-03 17:59:35 +00:00
Nick Stokoe
15ce90e098 docker-compose/docker-compose.yml - upgrade nextcloud to v22 2021-12-03 17:59:35 +00:00
Nick Stokoe
bb20922852 templates/docker-compose/mopidy - update tracked commit 2021-12-03 17:59:35 +00:00
Nick Stokoe
3d09f9d1e9 docker-compose/docker-compose.yml - upgrade nextcloud and others 2021-12-03 17:59:35 +00:00
Nick Stokoe
2512d2ef31 docker-compose/docker-compose.yml - add extra_hosts snackpot:host-gateway for mopidy 2021-12-03 17:59:35 +00:00
Nick Stokoe
a6290fe82d docker-compose/upmpdcli/upmpdcli.conf - don't check the content format
As this will disallow things that should be allowed.
2021-12-03 17:59:35 +00:00
Nick Stokoe
a4f0664663 docker-compose/upmpdcli/Dockerfile - explicitly specify the config file
In  the command parameters for upmpdcli - otherwise it seems not to be
picked up.
2021-12-03 17:59:35 +00:00
Nick Stokoe
5b3440457f add jellyfin 2021-12-03 17:59:35 +00:00
Nick Stokoe
73821733cf docker-compose/upmpdcli/Dockerfile - use python3 not 2 2021-12-03 17:59:35 +00:00
Nick Stokoe
32aaf0fe6b docker-compose.yml - set mopidy to restart: always 2021-12-03 17:59:35 +00:00
Nick Stokoe
17a04fc559 docker-compose.yml - set upmpdcli to restart: always 2021-12-03 17:59:35 +00:00
Nick Stokoe
18ec2c5320 docker-compose/upmpdcli/Dockerfile - add openssl to the package list
upmpdcli seems to use this, optionally
2021-12-03 17:59:35 +00:00
Nick Stokoe
d3fa11cf90 docker-compose.yml - add local audio access to mopidy 2021-12-03 17:59:35 +00:00
Nick Stokoe
caca059da0 snackpot.yml, docker-compose.yml - add mopidy and upmpdcli services
mopidy includes icecast
2021-12-03 17:59:35 +00:00
Nick Stokoe
625b2a656a docker-compose.yml - add MINIDLNA_INOTIFY=yes to minidlna
We want it to spot file changes
2021-12-03 17:59:35 +00:00
Nick Stokoe
71d8edab0a snackpot.yml, docker-compose.yml - these nextcloud paths can be fixed 2021-12-03 17:59:35 +00:00
Nick Stokoe
c160ba5193 snackpot.yml etc. - refine docker-compose config deploy
The main job of this commit:
- Be explicit about templates: expect the .j2 extension
- Copy all other files, so that they can be binary
- Don't deploy dotfiles or dotdirectories.

This snuck in:
- Remove `test` tag
- Refine some descriptions
2021-12-03 17:59:35 +00:00
Nick Stokoe
3aaa6deb34 templates/docker-compose/docker-compose.yml - remove some comment cruft 2021-12-03 17:59:35 +00:00
Nick Stokoe
3acc92043c snackpot.yml - add minidlna container
set all ports with firewall_ports
2021-12-03 17:59:35 +00:00
Nick Stokoe
375172e34e roles/ufw/tasks/main.yml - allow more flexible port config
specifically, allow specifying protocol
2021-12-03 17:59:35 +00:00
Nick Stokoe
fcad61a6c4 roles/docker_compose/tasks/main.yml - set docker data-root dir 2021-12-03 17:59:35 +00:00
Nick Stokoe
02b37f5680 docker-compose/docker-compose.yml - add nextcloud_cron
For running the cron job
2021-12-03 17:59:35 +00:00
Nick Stokoe
8df232dd91 templates/docker-compose/docker-compose.yml - bump nextcloud to v18.0.13 2021-12-03 17:59:35 +00:00
Nick Stokoe
108193a007 snackpot.yml - tag role invocations with docker-config
Else tagging doesn't work correctly
2021-12-03 17:59:35 +00:00
Nick Stokoe
3e69a85426 templates/bin/ncadmin - remove crufty comments 2021-12-03 17:59:35 +00:00
Nick Stokoe
b72b413755 templates/docker-compose/docker-compose.yml - share /srv with nextcloud
For ease of imports
2021-12-03 17:59:35 +00:00
Nick Stokoe
b602592ea4 snackpot.yml - set up networking on server 2021-12-03 17:59:35 +00:00
Nick Stokoe
32f6767cd4 snackpot.yml - add docker compose config
Nominally working and tested on a remote VM
2021-12-03 17:59:35 +00:00
Nick Stokoe
e887ad1898 snackpot.yml - adapt from server.playbook.yml 2021-12-03 17:59:35 +00:00
Nick Stokoe
cc89a3f437 roles/docker_compose/handlers/main.yml - add 'listen' clause
So we can notify from outside the role
2021-12-03 17:59:35 +00:00
Nick Stokoe
3866f6a0f2 roles/docker_compose_install/ -> roles/docker_compose 2021-12-03 17:59:35 +00:00
Nick Stokoe
38c2667d2d docker_compose_install - corrections from original copy
Use python 3, don't hardwire docker version, etc.
2021-12-03 17:58:01 +00:00
48 changed files with 2033 additions and 50 deletions

INSTALL.md Normal file

@@ -0,0 +1,16 @@
run ansible script to deploy basic docker compose + config
run restore script to deploy config, database and files
run backup script to create a copy of config, database and files
# todo
# fix playbook to
# set up redis password?
# schedule backup
# script restore

SETUP.md Normal file

@@ -0,0 +1,45 @@
## To set up

These subdirectories need to be cloned, as they are not part of the repo.

The first is Ansible. The exact version is not always important, but
it is wise to keep roughly the same version, because Ansible has
changed a lot. I like to be able to use a version which works with my
playbooks... some of my older playbooks contained a lot of
workarounds for old versions of Ansible. Newer playbooks use newer
versions of Ansible. At the time of writing, I'm using v2.9.26.

    git clone git@github.com:ansible/ansible.git .ansible-src
    (cd .ansible-src && git co v2.9.26)

The second subdirectory contains the passwords and other secrets this
repo needs access to. It is a Password Store GPG2-encrypted
repository, accessible with the `pass` command. Ansible has a plugin
which can use that.

    git clone gitolite:password-store .password-store

You should also make sure that the hosts in the inventory are
accessible - sometimes this requires adding `~/.ssh/config` settings
like this example:

    Host mixian mixian.noodlefactory.co.uk
      Hostname 142.132.227.118
      User root

## Before deploying

This script initialises the environment so that `pass` and
`ansible-playbook` will work as if they were installed in the standard
places (although they are not):

    ./env-setup

## Dependencies

Ansible role and collection dependencies that need to be installed:

    ansible-galaxy install -r requirements.yml
    ansible-galaxy collection install -r requirements.yml

UPGRADING-NC.md Normal file

@@ -0,0 +1,19 @@
DRAFT!

- Upgrade one major version at a time.
- Check that the version of PostgreSQL is adequate for the target
  version before upgrading. If it isn't, upgrade it:
  - dump the data
  - move the volume aside
  - recreate the volume
  - upgrade
  - start
  - re-import
  - delete the old volume
  - copy over the pg_hba.conf, otherwise the auth credentials won't be
    used correctly (need: `host all all all md5`)
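The `host all all all md5` line referred to above, in context (a sketch: the column layout follows PostgreSQL's pg_hba.conf format, and the path is the image's default data directory):

```
# /var/lib/postgresql/data/pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS  METHOD
host    all       all   all      md5
```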

requirements.yml Normal file

@@ -0,0 +1,9 @@
---
roles:
# From Galaxy
- name: mrlesmithjr.netplan
version: v0.3.0
collections:
- name: community.general

@@ -1,15 +0,0 @@
---
## Installs docker-CE
# Following guide from here:
# https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-up-the-repository
# The docker apt repo key uri
docker_compose_install_apt_key_uri: https://download.docker.com/linux/ubuntu/gpg
# The docker apt repo config line
docker_compose_install_apt_repo: deb https://download.docker.com/linux/ubuntu bionic stable
# Get this version from https://github.com/docker/compose/releases/
# Check compatibility with docker.
docker_compose_install_compose_verion: 1.22.0

@@ -0,0 +1,23 @@
---
## Installs docker-CE
# Following guide from here:
# https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-up-the-repository
# The docker apt repo key uri
docker_compose_apt_key_uri: https://download.docker.com/linux/ubuntu/gpg
# The docker apt repo config line
docker_compose_apt_repo: deb https://download.docker.com/linux/ubuntu {{ansible_lsb.codename}} stable
# Get this version from https://github.com/docker/compose/releases/
# Check compatibility with docker.
# This needs to be supplied externally.
docker_compose_install_version: Change me!
# Define where the docker project source directory is
# This needs to be supplied externally
docker_compose_base_dir: /opt/docker-compose
# Where the docker-compose binary is put (assumed executable)
docker_compose_exe: /usr/local/bin/docker-compose

@@ -0,0 +1,13 @@
---
- name: restart docker daemon
systemd:
name: docker
state: restarted
listen: restart docker daemon
- name: restart docker compose services
docker_compose:
restarted: yes
project_src: "{{ docker_compose_base_dir }}"
build: no
listen: restart docker compose services

@@ -7,19 +7,19 @@
- apt-transport-https
- ca-certificates
- software-properties-common
- python-pip
- python3-pip
- virtualenv
- python-setuptools
- python-docker
- python3-setuptools
- python3-docker
- name: add docker repository key
apt_key:
url: "{{ docker_compose_install_apt_key_uri }}"
url: "{{ docker_compose_apt_key_uri }}"
state: present
- name: add docker repository
apt_repository:
repo: "{{ docker_compose_install_apt_repo }}"
repo: "{{ docker_compose_apt_repo }}"
filename: docker-ce
state: present
update_cache: true
@@ -38,6 +38,23 @@
name:
- docker-compose
- name: docker daemon configuration
copy:
dest: /etc/docker/daemon.json
content: |-
{
"data-root": "/srv/docker",
"log-driver": "json-file",
"log-opts": {
"max-size": "30m"
},
"features": {
"buildkit": true
}
}
notify:
- restart docker daemon
- name: enable docker
service:
name: docker

@@ -0,0 +1,5 @@
---
# Enables a sudoer group
# (Debianoid specific)
root_sudoers_group: sudo

@@ -0,0 +1,15 @@
---
- name: Install sudo on debian
apt:
name: sudo
update_cache: yes
- name: configure sudo to allow root access for {{root_sudoers_group}} members
template:
dest: '/etc/sudoers.d/allow-sudoing'
src: 'sudoers.d/allow-sudoing.j2'
owner: root
group: root
mode: 0440
backup: no

@@ -0,0 +1,2 @@
## Allows people in this group to run all commands
%{{ root_sudoers_group }} ALL=(ALL) ALL

@@ -0,0 +1,4 @@
---
# A list of ports to allow incoming connections on
ufw_allow_in: [22]

roles/ufw/tasks/main.yml Normal file

@@ -0,0 +1,30 @@
---
- name: install base packages
apt:
name: ufw
state: present
update_cache: true
- name: deny all incoming traffic
ufw:
policy: deny
direction: incoming
- name: allow all outgoing traffic
ufw:
policy: allow
direction: outgoing
- name: incoming rules
ufw:
rule: allow
direction: in
to_port: "{{ item.port if 'port' in item else item }}"
proto: "{{ item.proto if 'proto' in item else 'tcp' }}"
loop: "{{ ufw_allow }}"
- name: enable ufw
ufw:
state: enabled

@@ -1,30 +0,0 @@
---
- name: social.coop | server
hosts: all
become: yes
vars_files:
- secrets.vars.yml
vars:
s3_access_key_id: "{{lookup('passwordstore', 'deployment/backupninja/s3access')}}"
s3_secret_access_key: "{{lookup('passwordstore', 'deployment/backupninja/s3sec')}}"
roles:
- role: server
- role: social-coop
- role: logcheck-custom
tags: logcheck-custom
# Installs a script to dump the mastodon-live PgSQL database, and
# copy the GPG encrypted archive to our S3 space with rclone. This
# is invoked daily using a systemd timer. Encryption is done with
# the public key in the password store
# deployment/backupninja/pub. To decrypt, you need to use the
# associated private key
- role: pg-dump-to-s3
tags: pg-dump-to-s3
pg_dump_to_s3_systemd_timer_section: OnCalendar=00:40:00
pg_dump_to_s3_desturl: "spaces:social-coop-media/backups/{{inventory_hostname_short}}/"
pg_dump_to_s3_pgdump_opts: -h localhost -U root -d mastodon-live -Fc
pg_dump_to_s3_pubkey: "{{lookup('passwordstore', 'deployment/backupninja/pub returnall=true')}}"
pg_dump_to_s3_rclone_config: "{{lookup('template', 'templates/rclone-conf.j2')}}"

snackpot.yml Normal file

@@ -0,0 +1,166 @@
---
- name: snackpot | server
hosts: all
vars:
nextcloud_db_password: "{{lookup('passwordstore', 'servers/snackpot/nextcloud_db.password')}}"
postgres_password: "{{lookup('passwordstore', 'servers/snackpot/postgres_db.password')}}"
postgres_db_user: postgres
nextcloud_hostname: nc.noodlefactory.co.uk
nextcloud_db_user: nextcloud
nextcloud_db: nextcloud
jellyfin_hostname: jf.noodlefactory.co.uk
navidrome_hostname: nd.noodlefactory.co.uk
letsencrypt_email: webmaster@noodlefactory.co.uk
docker_compose_base_dir: /opt/docker-compose
docker_compose_cmd: docker compose
borg_passphrase: "{{lookup('passwordstore', 'servers/snackpot/borg.passphrase')}}"
smtp_password: "{{lookup('passwordstore', 'servers/snackpot/smtp.password')}}"
borg_ssh_key: "{{lookup('passwordstore', 'servers/snackpot/borg.id_rsa')}}"
borg_ssh_key_pub: "{{lookup('passwordstore', 'servers/snackpot/borg.id_rsa.pub')}}"
borg_repo_key: "{{lookup('passwordstore', 'servers/snackpot/borg_repo.key')}}"
firewall_ports:
- "22"
- "80"
- "443"
# jellyfin
- "8096"
#- "8920" https
- "7359"
# minidlna
- "8200"
# upmpdcli
- port: "49152"
# upnp (jellyfin, minidlna and upmpdcli)
- proto: udp
port: "1900"
tasks:
- hostname:
name: "{{ nextcloud_hostname }}"
tags: network
- name: install packages
apt:
update_cache: true
name:
- emacs
- strace
- nmap
- git
- include_role:
name: root_sudoers
apply: { tags: root_sudoers }
tags: root_sudoers
- include_role:
name: ufw
apply: { tags: ufw }
tags: ufw
vars:
ufw_allow: "{{ firewall_ports }}"
- include_role:
name: mrlesmithjr.netplan
apply: { become: true, tags: [netplan, network] }
tags: netplan, network
vars:
netplan_enabled: true
netplan_configuration:
network:
version: 2
ethernets:
enp3s0:
addresses: [192.168.0.55/24]
gateway4: 192.168.0.1
nameservers:
addresses: [192.168.0.1]
- include_role:
name: docker_compose
apply: { tags: docker_compose }
tags: docker_compose
vars:
docker_compose_version: 1.27.4
- name: ensure directory exists
file:
path: "{{ docker_compose_base_dir }}/{{ item.path }}"
state: directory
with_community.general.filetree: templates/docker-compose
when: item.state == "directory" and item.path.count("/.") == 0
tags: docker-config
- name: deploy docker compose templates
template:
dest: "{{ docker_compose_base_dir }}/{{ item.path | splitext | first }}"
src: "docker-compose/{{ item.path }}"
owner: root
group: root
mode: 0660
backup: yes
# notify: restart docker compose services
with_community.general.filetree: templates/docker-compose
when: item.state == "file" and item.path.endswith(".j2")
tags: docker-config
- name: deploy docker compose files
copy:
dest: "{{ docker_compose_base_dir }}/{{ item.path }}"
src: "templates/docker-compose/{{ item.path }}"
owner: root
group: root
mode: 0660
backup: yes
# notify: restart docker compose services
with_community.general.filetree: templates/docker-compose
when: |-
item.state == "file" and not (
item.path.endswith("~") or item.path.endswith(".j2")
or item.path.count("/.") > 0
)
tags: docker-config
- name: ensure directory exists
file:
path: "{{ docker_compose_base_dir }}/bin"
state: directory
tags: docker-config
- name: install executables
template:
dest: "{{ docker_compose_base_dir }}/bin/{{ item.path }}"
src: "bin/{{ item.path }}"
owner: root
group: root
mode: 0550
with_community.general.filetree: templates/bin
when: item.state == "file" and not item.path.endswith("~")
tags: docker-config
- name: install appserver and borg backup services
template:
dest: "/etc/systemd/system/{{ item }}"
src: "{{ item }}.j2"
owner: root
group: root
mode: 0550
with_items:
- appserver.service
- borg.service
- borg.timer
tags: docker-config
- name: enable backup service
service:
name: "{{ item }}"
state: started
enabled: yes
with_items:
- borg.service
- borg.timer
- appserver.service
# config nextcloud
# hide pg password

@@ -0,0 +1,14 @@
[Unit]
Description=appserver
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
WorkingDirectory={{docker_compose_base_dir}}
ExecStart={{docker_compose_cmd}} up -d --remove-orphans main-services
ExecStop={{docker_compose_cmd}} down
RemainAfterExit=true
[Install]
WantedBy=multi-user.target

templates/bin/backup Executable file

@@ -0,0 +1,29 @@
#!/bin/bash
# Borg Backup runner
set -o pipefail
set -o errexit
cd "/opt/docker-compose"
set -vx
docker compose exec -T -u www-data nextcloud ./occ maintenance:mode --on ||
echo "WARNING: Couldn't stop nextcloud container, proceeding anyway"
docker compose down --remove-orphans || {
echo "ERROR: Couldn't stop docker compose, restarting and aborting"
docker network prune --force
docker compose up -d
exit 1
}
docker network prune --force # remove dangling networks
docker compose run --name borgmatic -T --rm borgmatic /backup.sh run ||
echo "ERROR: Couldn't run borgmatic"
docker compose up -d main-services || {
echo "ERROR: couldn't restart docker compose services, aborting with no services!"
exit 1
}
docker compose exec -T -u www-data nextcloud ./occ maintenance:mode --off ||
echo "Couldn't turn off nextcloud's maintenance mode"
docker compose run --name borgmatic -T --rm borgmatic /backup.sh check ||
echo "Couldn't run the borgmatic backup check"

templates/bin/borg Executable file

@@ -0,0 +1,2 @@
#!/bin/sh
exec docker compose run -- borgmatic borg "$@"

templates/bin/borgmatic Executable file

@@ -0,0 +1,2 @@
#!/bin/sh
exec docker compose run -- borgmatic borgmatic "$@"

templates/bin/ncadmin Executable file

@@ -0,0 +1,106 @@
#!/bin/sh
dc_dir={{ docker_compose_base_dir }}
nextcloud_base_dir=/var/www/html
nextcloud_data_dir=/var/www/data
postgres_db_user={{ postgres_db_user }}
nextcloud_db_user={{ nextcloud_db_user }}
nextcloud_db={{ nextcloud_db }}
DOCKER_EXE() {
( cd $dc_dir; docker compose exec "$@" )
}
ON_POSTGRES() {
DOCKER_EXE -T -u postgres postgres "$@"
}
ON_POSTGRESi() {
DOCKER_EXE -u postgres postgres "$@"
}
ON_NEXTCLOUD() {
DOCKER_EXE -T -u www-data nextcloud "$@"
}
ON_NEXTCLOUDi() {
DOCKER_EXE -u www-data nextcloud "$@"
}
PSQL() {
ON_POSTGRES /usr/local/bin/psql "$@"
}
PGDUMP() {
ON_POSTGRES /usr/local/bin/pg_dump "$@"
}
PSQLi() {
ON_POSTGRESi /usr/local/bin/psql "$@"
}
PHP() {
ON_NEXTCLOUD /usr/local/bin/php "$@"
}
TEE() {
ON_NEXTCLOUD /usr/bin/tee "$1"
}
CAT() {
ON_NEXTCLOUD /bin/cat "$1"
}
DUMP() {
ON_NEXTCLOUD /bin/sh -c "for d in $*; do /usr/bin/tar -C \$d -c . ; done"
}
UNDUMP() {
ON_NEXTCLOUD /bin/sh -c "for d in $*; do /usr/bin/tar -C \$d -x ; done"
}
_gen_config() {
local config=$nextcloud_base_dir/config/config.php
script=$( cat <<EOF )
require("$config");
\$CONFIG["password"] = "password";
// FIXME more here
file_put_contents("$config.2", "<?php\\n\\\$CONFIG = ". var_export(\$CONFIG, true) .";\\n");
EOF
PHP -r "$script"
}
unpack_db() {
tar t >/dev/null && tar t >/dev/null && cat
}
# FIXME override selected config settings
restore() {
( UNDUMP $nextcloud_base_dir $nextcloud_data_dir
#FIXME [ -n "$config" ] && gen_config <<<'$config' | WRITE $nextcloud_base_dir/config/config.php
PSQL -U $postgres_db_user < $dc_dir/postgres/init.sql
cat | PSQL -U $postgres_db_user -d $nextcloud_db )
}
backup() {
( DUMP $nextcloud_base_dir $nextcloud_data_dir
PGDUMP -U $postgres_db_user $nextcloud_db )
}
prune() {
docker system prune -a --volumes
}
OCC() {
ON_NEXTCLOUD ./occ "$@"
}
NSH() {
ON_NEXTCLOUDi sh "$@"
}
set -vx
set -e
"$@"

@@ -0,0 +1,6 @@
[Unit]
Description=Borg backups
[Service]
Type=oneshot
ExecStart={{docker_compose_base_dir}}/bin/backup

templates/borg.timer.j2 Normal file

@@ -0,0 +1,10 @@
[Unit]
Description=Run Borg backups nightly
[Timer]
OnCalendar=01:40:00
Persistent=true
[Install]
WantedBy=timers.target
WantedBy=borg.target

@@ -0,0 +1,3 @@
POSTGRES_PASSWORD={{ nextcloud_db_password }}
BORG_PASSPHRASE={{ borg_passphrase }}
SMTP_PASSWORD={{ smtp_password }}

@@ -0,0 +1,11 @@
FROM b3vis/borgmatic:latest
# Install stdbuf, used by backup.sh
RUN \
echo "* Upgrading existing packages" \
&& apk upgrade --no-cache \
&& echo "* Installing Runtime Packages" \
&& apk add -U --no-cache \
coreutils
COPY --chmod=755 backup.sh /backup.sh

@@ -0,0 +1,97 @@
#!/bin/sh
# Run the backup and mail the logs:
# Depending on parameter 1:
# - test-smtp: just send a test email using $APPRISE_URI
# - run: create the backup, no checks
# - check: prune, compact and check the backup
# Anything else is an error.
set -o pipefail
# Set up environment
RUN_COMMAND="borgmatic --stats -v 2 create"
CHECK_COMMAND="borgmatic --stats -v 1 prune compact check"
LOGFILE="/tmp/backup_run_$(date +%s).log"
SUCCESS_PREFIX="=?utf-8?Q? =E2=9C=85 SUCCESS?="
FAILED_PREFIX="=?utf-8?Q? =E2=9D=8C FAILED?="
PARAM="$1"
# Helper function to prepend a timestamp and the first parameter to every line of STDIN
indent() {
while IFS='' read -rs line; do
echo "$(date -Iminutes)${1:- }$line"
done
}
# This function prepends timestamps to stderr and stdout of the
# command supplied as parameters to this.
log() {
# Adapted from https://stackoverflow.com/a/31151808
{
stdbuf -oL -eL "$@" 2>&1 1>&3 3>&- | indent " ! "
} 3>&1 1>&2 | indent " | " | tee -a "$LOGFILE"
}
report() {
if [ "$RESULT" = "0" ]; then
log echo "SUCCESS!"
PREFIX="$SUCCESS_PREFIX"
else
log echo "FAILED: $RESULT"
PREFIX="$FAILED_PREFIX"
fi
apprise -vv -t "$PREFIX: '$PARAM'" -b "$(cat $LOGFILE)" "$APPRISE_URI&pass=$SMTP_PASSWORD"
log echo "Report sent."
}
testmail() {
apprise -vv -t "TESTING!" -b "test mail, please ignore." "$APPRISE_URI&pass=$SMTP_PASSWORD"
}
failed() {
log echo "Exited abnormally!"
report
rm -f "$LOGFILE"
}
cleanup() {
borgmatic break-lock
echo "Removing $LOGFILE"
rm -f "$LOGFILE"
echo "Exiting."
}
# Handle various kinds of exit
trap failed INT QUIT TERM
trap cleanup EXIT
case "$PARAM" in
test-smtp)
echo "Testing mail via Apprise ($APPRISE_URI)"
testmail
echo "Done."
;;
check)
log echo STARTED: $CHECK_COMMAND
log $CHECK_COMMAND
RESULT=$?
report
;;
run)
log echo STARTED: $RUN_COMMAND
log $RUN_COMMAND
RESULT=$?
report
;;
dummy-run)
log echo STARTED: dummy-run
borgmatic nonesuch
RESULT=$?
report
;;
*)
log echo "UNKNOWN COMMAND: '$PARAM'"
report
;;
esac

@@ -0,0 +1,279 @@
---
# Adapted from:
# https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/postgres/fpm/docker-compose.yml
volumes:
postgres:
nextcloud_src:
nextcloud_data:
certs:
vhost.d:
html:
redis:
jellyfin_config:
jellyfin_cache:
minidlna_state:
minidlna_data:
navidrome_cache:
navidrome_data:
borgmatic-cache:
networks:
# This is for proxied containers
proxy-tier:
# This is for containers which need to be host mode
lan:
name: lan
driver: macvlan
driver_opts:
parent: enp3s0 # our ethernet interface
ipam:
config:
- gateway: 192.168.0.1
subnet: 192.168.0.0/24
ip_range: 192.168.0.240/29 # addresses 240-248 (6 usable)
services:
postgres:
build: ./postgres
restart: always
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
volumes:
- postgres:/var/lib/postgresql/data
env_file:
- postgres.env
redis:
restart: always
image: redis:6-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
volumes:
- redis:/data
nextcloud:
image: nextcloud:31-fpm-alpine
restart: always
volumes:
- nextcloud_src:/var/www/html
- nextcloud_data:/var/www/data
- minidlna_data:/var/www/ext/media
- /srv:/srv
links:
- postgres
- redis
env_file:
- nextcloud.env
environment:
- POSTGRES_HOST=postgres
- REDIS_HOST=redis
- POSTGRES_USER=nextcloud
# healthcheck:
# test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:3000/health || exit 1"]
nextcloud_cron:
image: nextcloud:31-fpm-alpine
restart: always
volumes:
- nextcloud_src:/var/www/html
- nextcloud_data:/var/www/data
- minidlna_data:/var/www/ext/media
- /srv:/srv
entrypoint: /cron.sh
depends_on:
- postgres
- redis
web:
build: ./web
restart: always
    volumes:
      - nextcloud_src:/var/www/html:ro
    env_file:
      - web.env
    depends_on:
      - nextcloud
      - letsencrypt-companion
    networks:
      - proxy-tier
      - default

  proxy:
    build: ./proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion:v1.13.1
    restart: always
    volumes:
      - certs:/etc/nginx/certs
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy
    env_file:
      - letsencrypt-companion.env

  navidrome:
    build: ./navidrome
    ports:
      - "4533:4533"
    volumes:
      - navidrome_data:/data
      - navidrome_cache:/cache
      - minidlna_data:/music
    networks:
      proxy-tier:
      default:
    group_add:
      # audio group ID (gid) on host system
      - "29"
    devices:
      - "/dev/snd:/dev/snd"
    depends_on:
      - letsencrypt-companion
    env_file:
      - navidrome.env

  bonob:
    image: simojenki/bonob:latest
    ports:
      - "4534:4534"
    networks:
      lan: # Static ip for the container on the macvlan net
        ipv4_address: 192.168.0.244
    restart: unless-stopped
    environment:
      BNB_PORT: 4534
      # ip address of your machine running bonob
      BNB_URL: http://192.168.0.244:4534
      BNB_SONOS_AUTO_REGISTER: "true"
      BNB_SONOS_DEVICE_DISCOVERY: "true"
      BNB_SUBSONIC_URL: http://navidrome:4533
    depends_on:
      - navidrome

  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: always
    user: daemon:daemon
    volumes:
      - jellyfin_config:/config
      - jellyfin_cache:/cache
      - minidlna_data:/media
    networks:
      proxy-tier:
      default:
      lan: # Static ip for the container on the macvlan net
        ipv4_address: 192.168.0.241
    env_file:
      - jellyfin.env

  minidlna:
    image: vladgh/minidlna:latest
    restart: always
    volumes:
      - minidlna_state:/minidlna
      - minidlna_data:/media:ro
    networks:
      default:
      lan: # Static ip for the container on the macvlan net
        ipv4_address: 192.168.0.242
    environment:
      # UID/GID are assumed to both be 2000 in other containers, to allow access
      - UPID=2000
      - UGID=2000
      - MINIDLNA_INOTIFY=yes
      - MINIDLNA_MEDIA_DIR_1=A,/media/audio
      - MINIDLNA_MEDIA_DIR_2=V,/media/video
      - MINIDLNA_FRIENDLY_NAME=MiniDLNA@Snackpot

  upmpdcli:
    build: ./upmpdcli
    networks:
      default:
      lan: # Static ip for the container on the macvlan net
        ipv4_address: 192.168.0.243
    restart: always

  # A dummy container to start the main services as deps.
  # This allows the borgmatic image to be excluded when run as:
  #   docker compose up main-services
  main-services:
    image: alpine:latest # a small dummy image
    command: sh -c "sleep infinity"
    depends_on:
      - bonob
      - nextcloud
      - nextcloud_cron
      - web
      - jellyfin
      - minidlna
      - navidrome
      - upmpdcli

  borgmatic:
    build: ./borgmatic
    restart: 'no' # This container is only run when required
    depends_on: # These containers need to be up for dumps
      - postgres
    networks:
      # Networks for DB access for backups
      - default
    volumes:
      # Backup mount
      - /mnt/c/backup/nick:/mnt/borg-repository
      # Volumes to back up
      - certs:/mnt/source/certs:ro
      - nextcloud_data:/mnt/source/nextcloud_data:ro
      - vhost.d:/mnt/source/vhost.d:ro
      - html:/mnt/source/html:ro
      - jellyfin_config:/mnt/source/jellyfin_config:ro
      - minidlna_state:/mnt/source/minidlna_state:ro
      - minidlna_data:/mnt/source/minidlna_data:ro
      - navidrome_data:/mnt/source/navidrome_data:ro
      # System volumes
      - /etc/timezone:/etc/timezone:ro # timezone
      - /etc/localtime:/etc/localtime:ro # localtime
      - borgmatic-cache:/root/.cache/borg # non-volatile borg chunk cache
      # Config volumes
      - ./volumes/borgmatic-config:/etc/borgmatic.d/:ro # config.yaml, crontab.txt, msmtp.env
      - ./volumes/borg-config:/root/.config/borg/ # borg encryption keys, other config written here
      - ./volumes/borg-ssh-config:/root/.ssh/ # ssh keys; the ssh client writes known_hosts etc. here
    environment:
      # Work around the use of a fancy init system s6:
      # https://github.com/borgmatic-collective/docker-borgmatic/issues/320#issuecomment-2089003361
      S6_KEEP_ENV: 1
      POSTGRES_USER: nextcloud
      POSTGRES_DB: nextcloud
      POSTGRES_HOST: postgres
      BORG_ARCHIVE: nick
      BORG_ARCHIVE_LABEL: snackpot
      APPRISE_URI: "mailtos://mail.noodlefactory.co.uk:25?user=nc.noodlefactory.co.uk&from=borgmatic@snackpot.noodlefactory.co.uk&to=nick@noodlefactory.co.uk"
      # SMTP_PASSWORD is set via borgmatic.env, created via ansible,
      # and appended to APPRISE_URI by the borgmatic/backup.sh script.
      # Test SMTP auth on the server: https://doc.dovecot.org/admin_manual/debugging/debugging_authentication/
    env_file:
      - ./borgmatic.env
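The `APPRISE_URI` above deliberately omits the SMTP password; as the comments note, `backup.sh` appends it from `borgmatic.env` at runtime. A minimal sketch of that append step — the `pass` query-parameter name follows apprise's `mailtos://` URL syntax, and the surrounding values are placeholders, not the actual script:

```python
# Hypothetical sketch of the append done by backup.sh: take the
# password-less APPRISE_URI from the compose environment and add the
# secret read from borgmatic.env as a "pass" query parameter.
apprise_uri = "mailtos://mail.example.org:25?user=sender@example.org"
smtp_password = "s3cret"  # would come from SMTP_PASSWORD in borgmatic.env
full_uri = f"{apprise_uri}&pass={smtp_password}"
print(full_uri)
```

Keeping the secret out of docker-compose.yml means only borgmatic.env (written by ansible) needs restrictive permissions.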

View File

@@ -0,0 +1,4 @@
VIRTUAL_HOST={{ jellyfin_hostname }}
JELLYFIN_PublishedServerUrl=https://{{ jellyfin_hostname }}/
LETSENCRYPT_HOST={{ jellyfin_hostname }}
LETSENCRYPT_EMAIL={{ letsencrypt_email }}

View File

@@ -0,0 +1 @@
DEFAULT_EMAIL={{ letsencrypt_email }}

View File

@@ -0,0 +1,9 @@
ND_SCANSCHEDULE=1h
ND_LOGLEVEL=info
ND_CACHEFOLDER="/cache"
ND_JUKEBOX_ENABLED="true"
ND_BASEURL="https://{{ navidrome_hostname }}"
VIRTUAL_HOST="{{ navidrome_hostname }}"
VIRTUAL_PORT=4533
LETSENCRYPT_HOST="{{ navidrome_hostname }}"
LETSENCRYPT_EMAIL="{{ letsencrypt_email }}"

View File

@@ -0,0 +1,5 @@
FROM deluan/navidrome:0.55.2
RUN apk add --no-cache mpv
# Ensure that navidrome has access to these directories
RUN mkdir -p /data /cache && chown -R 1000:1000 /data /cache

View File

@@ -0,0 +1 @@
POSTGRES_PASSWORD={{ nextcloud_db_password }}

View File

@@ -0,0 +1 @@
POSTGRES_PASSWORD={{ postgres_password }}

View File

@@ -0,0 +1,2 @@
FROM postgres:17-alpine
COPY --chown={{ postgres_db_user }}:{{ postgres_db_user }} init.sql /docker-entrypoint-initdb.d/

View File

@@ -0,0 +1,6 @@
CREATE USER {{ nextcloud_db_user }};
ALTER USER {{ nextcloud_db_user }} WITH ENCRYPTED PASSWORD 'md5{{ (nextcloud_db_password + nextcloud_db_user) | hash("md5") }}';
DROP DATABASE IF EXISTS {{ nextcloud_db }};
CREATE DATABASE {{ nextcloud_db }} TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE {{ nextcloud_db }} OWNER TO {{ nextcloud_db_user }};
GRANT ALL PRIVILEGES ON DATABASE {{ nextcloud_db }} TO {{ nextcloud_db_user }};
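The `ALTER USER` line precomputes PostgreSQL's legacy md5 password hash rather than embedding the plaintext: the stored value is the literal string `md5` followed by `md5(password || username)`, which is what the Jinja2 expression `'md5' + (password + user) | hash("md5")` produces. A quick cross-check in Python (example credentials only):

```python
import hashlib

def pg_md5_password(password: str, username: str) -> str:
    # PostgreSQL's md5 auth scheme: "md5" + md5(password concatenated with username)
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

# Example values only; the template substitutes the real credentials.
print(pg_md5_password("secret", "nextcloud"))
```

Salting with the username means two roles with the same password still get distinct hashes.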

View File

@@ -0,0 +1,3 @@
FROM jwilder/nginx-proxy:1.7-alpine
COPY uploadsize.conf /etc/nginx/conf.d/uploadsize.conf

View File

@@ -0,0 +1,2 @@
client_max_body_size 10G;
proxy_request_buffering off;

View File

@@ -0,0 +1,18 @@
FROM alpine:3.21
RUN apk update \
 && apk upgrade \
 && apk add --no-cache \
    --repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
    --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing \
    python3 \
    openssl \
    upmpdcli
COPY upmpdcli.conf /etc/upmpdcli.conf
#RUN adduser -S upmpdcli
EXPOSE 1900/udp
EXPOSE 49152
ENTRYPOINT ["upmpdcli", "-c", "/etc/upmpdcli.conf"]

View File

@@ -0,0 +1,35 @@
# upmpdcli general parameters
#logfilename = /var/log/upmpdcli.log
# loglevel = 0
#pkgdatadir=/usr/share/upmpdcli
#pidfile = /var/run/upmpdcli.pid
# upnp network parameters
#upnpiface =
#upnpip =
# upnpport =
upnpip = 192.168.0.243
# media renderer parameters
friendlyname = UpMpdCli@Snackpot
#upnpav = 1
#openhome = 1
#lumincompat = 0
#saveohcredentials = 1
checkcontentformat = 0
#iconpath = /usr/share/upmpdcli/icon.png
#cachedir = /var/cache/upmpdcli
#presentationhtml = /usr/share/upmpdcli/presentation.html
# mpd parameters
#mpdhost = 127.0.0.1
#mpdport = 6600
#mpdpassword =
#ownqueue = 1
#mpdhost = mopidy
subsonicbaseurl = https://nd.noodlefactory.co.uk
subsonicport = 443
subsonicuser = sonos
subsonicpassword = ContactExerciseCharges

View File

@@ -0,0 +1,3 @@
# these files get written here by Bundlewrap
/*
!/.gitignore

View File

@@ -0,0 +1,5 @@
# SSH key files get written here by Bundlewrap
/*
!/.gitignore
!/config

View File

@@ -0,0 +1 @@
StrictHostKeyChecking accept-new

View File

@@ -0,0 +1,3 @@
# these files get written here by Bundlewrap
/msmtp.env

View File

@@ -0,0 +1,727 @@
# Where to look for files to backup, and where to store those backups.
# See https://borgbackup.readthedocs.io/en/stable/quickstart.html and
# https://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details.
location:
# List of source directories to backup. Globs and tildes are
# expanded. Do not backslash spaces in path names.
    source_directories:
        - /mnt/source/
# Paths to local or remote repositories (required). Tildes are
# expanded. Multiple repositories are backed up to in
# sequence. Borg placeholders can be used. See the output of
# "borg help placeholders" for details. See ssh_command for
# SSH options like identity file or port. If systemd service
# is used, then add local repository paths in the systemd
# service file to the ReadWritePaths list.
    repositories:
        - /mnt/borg-repository
        # - ssh://${BORG_REPO_USER}@${BORG_REPO_HOST}:${BORG_REPO_PORT}/./${BORG_ARCHIVE}
# Working directory for the "borg create" command. Tildes are
# expanded. Useful for backing up using relative paths. See
# http://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details. Defaults to not set.
# working_directory: /path/to/working/directory
# Stay in same file system: do not cross mount points beyond
# the given source directories. Defaults to false. But when a
# database hook is used, the setting here is ignored and
# one_file_system is considered true.
# one_file_system: true
# Only store/extract numeric user and group identifiers.
# Defaults to false.
# numeric_ids: true
# Store atime into archive. Defaults to true in Borg < 1.2,
# false in Borg 1.2+.
# atime: false
# Store ctime into archive. Defaults to true.
# ctime: false
# Store birthtime (creation date) into archive. Defaults to
# true.
# birthtime: false
# Use Borg's --read-special flag to allow backup of block and
# other special devices. Use with caution, as it will lead to
# problems if used when backing up special devices such as
# /dev/zero. Defaults to false. But when a database hook is
# used, the setting here is ignored and read_special is
# considered true.
# read_special: false
# Record filesystem flags (e.g. NODUMP, IMMUTABLE) in archive.
# Defaults to true.
# flags: true
# Mode in which to operate the files cache. See
# http://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details. Defaults to "ctime,size,inode".
# files_cache: ctime,size,inode
# Alternate Borg local executable. Defaults to "borg".
# local_path: borg1
# Alternate Borg remote executable. Defaults to "borg".
# remote_path: borg1
# Any paths matching these patterns are included/excluded from
# backups. Globs are expanded. (Tildes are not.) See the
# output of "borg help patterns" for more details. Quote any
# value if it contains leading punctuation, so it parses
# correctly. Note that only one of "patterns" and
# "source_directories" may be used.
# patterns:
# - R /
# - '- /home/*/.cache'
# - + /home/susan
# - '- /home/*'
# Read include/exclude patterns from one or more separate
# named files, one pattern per line. Note that Borg considers
# this option experimental. See the output of "borg help
# patterns" for more details.
# patterns_from:
# - /etc/borgmatic/patterns
# Any paths matching these patterns are excluded from backups.
# Globs and tildes are expanded. Note that a glob pattern must
# either start with a glob or be an absolute path. Do not
# backslash spaces in path names. See the output of "borg help
# patterns" for more details.
    exclude_patterns:
        # - '*.pyc'
        # - /home/*/.cache
        # - '*/.vim*.tmp'
        # - /etc/ssl
        # - /home/user/path with spaces
        - '*~'
        - '*#'
        - '.cache'
        - 'cache'
        - 'files_trashbin'
# Read exclude patterns from one or more separate named files,
# one pattern per line. See the output of "borg help patterns"
# for more details.
# exclude_from:
# - /etc/borgmatic/excludes
# Exclude directories that contain a CACHEDIR.TAG file. See
# http://www.brynosaurus.com/cachedir/spec.html for details.
# Defaults to false.
# exclude_caches: true
# Exclude directories that contain a file with the given
# filenames. Defaults to not set.
# exclude_if_present:
# - .nobackup
# If true, the exclude_if_present filename is included in
# backups. Defaults to false, meaning that the
# exclude_if_present filename is omitted from backups.
# keep_exclude_tags: true
# Exclude files with the NODUMP flag. Defaults to false.
# exclude_nodump: true
# Path for additional source files used for temporary internal
# state like borgmatic database dumps. Note that changing this
# path prevents "borgmatic restore" from finding any database
# dumps created before the change. Defaults to ~/.borgmatic
# borgmatic_source_directory: /tmp/borgmatic
# Repository storage options. See
# https://borgbackup.readthedocs.io/en/stable/usage/create.html and
# https://borgbackup.readthedocs.io/en/stable/usage/general.html for
# details.
storage:
# The standard output of this command is used to unlock the
# encryption key. Only use on repositories that were
# initialized with passcommand/repokey/keyfile encryption.
# Note that if both encryption_passcommand and
# encryption_passphrase are set, then encryption_passphrase
# takes precedence. Defaults to not set.
# encryption_passcommand: secret-tool lookup borg-repository repo-name
# Passphrase to unlock the encryption key with. Only use on
# repositories that were initialized with
# passphrase/repokey/keyfile encryption. Quote the value if it
# contains punctuation, so it parses correctly. And backslash
# any quote or backslash literals as well. Defaults to not
# set.
# encryption_passphrase: "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
# Number of seconds between each checkpoint during a
# long-running backup. See
# https://borgbackup.readthedocs.io/en/stable/faq.html
# for details. Defaults to checkpoints every 1800 seconds (30
# minutes).
# checkpoint_interval: 1800
# Specify the parameters passed to the chunker
# (CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS,
# HASH_WINDOW_SIZE). See
# https://borgbackup.readthedocs.io/en/stable/internals.html
# for details. Defaults to "19,23,21,4095".
# chunker_params: 19,23,21,4095
# Type of compression to use when creating archives. See
# http://borgbackup.readthedocs.io/en/stable/usage/create.html
# for details. Defaults to "lz4".
# compression: lz4
# Remote network upload rate limit in kiBytes/second. Defaults
# to unlimited.
# upload_rate_limit: 100
# Number of times to retry a failing backup before giving up.
# Defaults to 0 (i.e., does not attempt retry).
# retries: 3
# Wait time between retries (in seconds) to allow transient
# issues to pass. Increases after each retry as a form of
# backoff. Defaults to 0 (no wait).
# retry_wait: 10
# Directory where temporary files are stored. Defaults to
# $TMPDIR
# temporary_directory: /path/to/tmpdir
# Command to use instead of "ssh". This can be used to specify
# ssh options. Defaults to not set.
# ssh_command: ssh -i /path/to/private/key
# Base path used for various Borg directories. Defaults to
# $HOME, ~$USER, or ~.
# borg_base_directory: /path/to/base
# Path for Borg configuration files. Defaults to
# $borg_base_directory/.config/borg
# borg_config_directory: /path/to/base/config
# Path for Borg cache files. Defaults to
# $borg_base_directory/.cache/borg
# borg_cache_directory: /path/to/base/cache
# Path for Borg security and encryption nonce files. Defaults
# to $borg_base_directory/.config/borg/security
# borg_security_directory: /path/to/base/config/security
# Path for Borg encryption key files. Defaults to
# $borg_base_directory/.config/borg/keys
# borg_keys_directory: /path/to/base/config/keys
# Umask to be used for borg create. Defaults to 0077.
# umask: 0077
# Maximum seconds to wait for acquiring a repository/cache
# lock. Defaults to 1.
# lock_wait: 5
# Name of the archive. Borg placeholders can be used. See the
# output of "borg help placeholders" for details. Defaults to
# "{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}". If you specify this
# option, consider also specifying a prefix in the retention
# and consistency sections to avoid accidental
# pruning/checking of archives with different archive name
# formats.
# archive_name_format: '{hostname}-documents-{now}'
    archive_name_format: "${BORG_ARCHIVE_LABEL}-{now:%Y-%m-%dT%H:%M:%S.%f}"
# Bypass Borg error about a repository that has been moved.
# Defaults to false.
# relocated_repo_access_is_ok: true
# Bypass Borg error about a previously unknown unencrypted
# repository. Defaults to false.
# unknown_unencrypted_repo_access_is_ok: true
# Additional options to pass directly to particular Borg
# commands, handy for Borg options that borgmatic does not yet
# support natively. Note that borgmatic does not perform any
# validation on these options. Running borgmatic with
# "--verbosity 2" shows the exact Borg command-line
# invocation.
# extra_borg_options:
# Extra command-line options to pass to "borg init".
# init: --extra-option
# Extra command-line options to pass to "borg prune".
# prune: --extra-option
# Extra command-line options to pass to "borg compact".
# compact: --extra-option
# Extra command-line options to pass to "borg create".
# create: --extra-option
# Extra command-line options to pass to "borg check".
# check: --extra-option
# Retention policy for how many backups to keep in each category. See
# https://borgbackup.readthedocs.io/en/stable/usage/prune.html for
# details. At least one of the "keep" options is required for pruning
# to work. To skip pruning entirely, run "borgmatic create" or "check"
# without the "prune" action. See borgmatic documentation for details.
retention:
    # Keep all archives within this time interval.
    # keep_within: 3H
    # Number of secondly archives to keep.
    # keep_secondly: 60
    # Number of minutely archives to keep.
    # keep_minutely: 60
    # Number of hourly archives to keep.
    # keep_hourly: 24
    # Number of daily archives to keep.
    keep_daily: 7
    # Number of weekly archives to keep.
    keep_weekly: 4
    # Number of monthly archives to keep.
    keep_monthly: 6
    # Number of yearly archives to keep.
    keep_yearly: 5
# When pruning, only consider archive names starting with this
# prefix. Borg placeholders can be used. See the output of
# "borg help placeholders" for details. Defaults to
# "{hostname}-". Use an empty value to disable the default.
# prefix: sourcehostname
# Consistency checks to run after backups. See
# https://borgbackup.readthedocs.io/en/stable/usage/check.html and
# https://borgbackup.readthedocs.io/en/stable/usage/extract.html for
# details.
# consistency:
# List of one or more consistency checks to run on a periodic
# basis (if "frequency" is set) or every time borgmatic runs
# checks (if "frequency" is omitted).
# checks:
# Name of consistency check to run: "repository",
# "archives", "data", and/or "extract". Set to
# "disabled" to disable all consistency checks.
# "repository" checks the consistency of the
# repository, "archives" checks all of the
# archives, "data" verifies the integrity of the
# data within the archives, and "extract" does an
# extraction dry-run of the most recent archive.
# Note that "data" implies "archives".
# - name: repository
# How frequently to run this type of consistency
# check (as a best effort). The value is a number
# followed by a unit of time. E.g., "2 weeks" to
# run this consistency check no more than every
# two weeks for a given repository or "1 month" to
# run it no more than monthly. Defaults to
# "always": running this check every time checks
# are run.
# frequency: 2 weeks
# Paths to a subset of the repositories in the location
# section on which to run consistency checks. Handy in case
# some of your repositories are very large, and so running
# consistency checks on them would take too long. Defaults to
# running consistency checks on all repositories configured in
# the location section.
# check_repositories:
# - user@backupserver:sourcehostname.borg
# Restrict the number of checked archives to the last n.
# Applies only to the "archives" check. Defaults to checking
# all archives.
# check_last: 3
# When performing the "archives" check, only consider archive
# names starting with this prefix. Borg placeholders can be
# used. See the output of "borg help placeholders" for
# details. Defaults to "{hostname}-". Use an empty value to
# disable the default.
# prefix: sourcehostname
# Options for customizing borgmatic's own output and logging.
output:
    # Apply color to console output. Can be overridden with
    # --no-color command-line flag. Defaults to true.
    color: false
# Shell commands, scripts, or integrations to execute at various
# points during a borgmatic run. IMPORTANT: All provided commands and
# scripts are executed with user permissions of borgmatic. Do not
# forget to set secure permissions on this configuration file (chmod
# 0600) as well as on any script called from a hook (chmod 0700) to
# prevent potential shell injection or privilege escalation.
hooks:
# List of one or more shell commands or scripts to execute
# before all the actions for each repository.
# before_actions:
# - echo "Starting actions."
# List of one or more shell commands or scripts to execute
# before creating a backup, run once per repository.
# before_backup:
# - echo "Starting a backup."
# List of one or more shell commands or scripts to execute
# before pruning, run once per repository.
# before_prune:
# - echo "Starting pruning."
# List of one or more shell commands or scripts to execute
# before compaction, run once per repository.
# before_compact:
# - echo "Starting compaction."
# List of one or more shell commands or scripts to execute
# before consistency checks, run once per repository.
# before_check:
# - echo "Starting checks."
# List of one or more shell commands or scripts to execute
# before extracting a backup, run once per repository.
# before_extract:
# - echo "Starting extracting."
# List of one or more shell commands or scripts to execute
# after creating a backup, run once per repository.
# after_backup:
# - echo "Finished a backup."
# List of one or more shell commands or scripts to execute
# after compaction, run once per repository.
# after_compact:
# - echo "Finished compaction."
# List of one or more shell commands or scripts to execute
# after pruning, run once per repository.
# after_prune:
# - echo "Finished pruning."
# List of one or more shell commands or scripts to execute
# after consistency checks, run once per repository.
# after_check:
# - echo "Finished checks."
# List of one or more shell commands or scripts to execute
# after extracting a backup, run once per repository.
# after_extract:
# - echo "Finished extracting."
# List of one or more shell commands or scripts to execute
# after all actions for each repository.
# after_actions:
# - echo "Finished actions."
# List of one or more shell commands or scripts to execute
# when an exception occurs during a "prune", "compact",
# "create", or "check" action or an associated before/after
# hook.
# on_error:
# - echo "Error during prune/compact/create/check."
# List of one or more shell commands or scripts to execute
# before running all actions (if one of them is "create").
# These are collected from all configuration files and then
# run once before all of them (prior to all actions).
# before_everything:
# - echo "Starting actions."
# List of one or more shell commands or scripts to execute
# after running all actions (if one of them is "create").
# These are collected from all configuration files and then
# run once after all of them (after any action).
# after_everything:
# - echo "Completed actions."
# List of one or more PostgreSQL databases to dump before
# creating a backup, run once per configuration file. The
# database dumps are added to your source directories at
# runtime, backed up, and removed afterwards. Requires
# pg_dump/pg_dumpall/pg_restore commands. See
# https://www.postgresql.org/docs/current/app-pgdump.html and
# https://www.postgresql.org/docs/current/libpq-ssl.html for
# details.
    postgresql_databases:
        # Database name (required if using this hook). Or
        # "all" to dump all databases on the host. Note
        # that using this database hook implicitly enables
        # both read_special and one_file_system (see
        # above) to support dump and restore streaming.
        # - name: users
        - name: ${POSTGRES_DB}
          # Database hostname to connect to. Defaults to
          # connecting via local Unix socket.
          # hostname: database.example.org
          hostname: ${POSTGRES_HOST}
          # Port to connect to. Defaults to 5432.
          # port: 5433
          # Username with which to connect to the database.
          # Defaults to the username of the current user.
          # You probably want to specify the "postgres"
          # superuser here when the database name is "all".
          # username: dbuser
          username: ${POSTGRES_USER}
          # Password with which to connect to the database.
          # Omitting a password will only work if PostgreSQL
          # is configured to trust the configured username
          # without a password or you create a ~/.pgpass
          # file.
          # password: trustsome1
          password: ${POSTGRES_PASSWORD}
# Database dump output format. One of "plain",
# "custom", "directory", or "tar". Defaults to
# "custom" (unlike raw pg_dump). See pg_dump
# documentation for details. Note that format is
# ignored when the database name is "all".
# format: directory
# SSL mode to use to connect to the database
# server. One of "disable", "allow", "prefer",
# "require", "verify-ca" or "verify-full".
# Defaults to "disable".
# ssl_mode: require
# Path to a client certificate.
# ssl_cert: /root/.postgresql/postgresql.crt
# Path to a private client key.
# ssl_key: /root/.postgresql/postgresql.key
# Path to a root certificate containing a list of
# trusted certificate authorities.
# ssl_root_cert: /root/.postgresql/root.crt
# Path to a certificate revocation list.
# ssl_crl: /root/.postgresql/root.crl
# Additional pg_dump/pg_dumpall options to pass
# directly to the dump command, without performing
# any validation on them. See pg_dump
# documentation for details.
# options: --role=someone
# List of one or more MySQL/MariaDB databases to dump before
# creating a backup, run once per configuration file. The
# database dumps are added to your source directories at
# runtime, backed up, and removed afterwards. Requires
# mysqldump/mysql commands (from either MySQL or MariaDB). See
# https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html or
# https://mariadb.com/kb/en/library/mysqldump/ for details.
# mysql_databases:
# Database name (required if using this hook). Or
# "all" to dump all databases on the host. Note
# that using this database hook implicitly enables
# both read_special and one_file_system (see
# above) to support dump and restore streaming.
# - name: ${POSTGRES_DB}
# Database hostname to connect to. Defaults to
# connecting via local Unix socket.
# hostname: ${POSTGRES_HOST}
# Port to connect to. Defaults to 3306.
# port: 3307
# Username with which to connect to the database.
# Defaults to the username of the current user.
# username: ${POSTGRES_USER}
# Password with which to connect to the database.
# Omitting a password will only work if MySQL is
# configured to trust the configured username
# without a password.
# password: ${POSTGRES_PASSWORD}
# Additional mysql options to pass directly to
# the mysql command that lists available
# databases, without performing any validation on
# them. See mysql documentation for details.
# list_options: --defaults-extra-file=my.cnf
# Additional mysqldump options to pass directly to
# the dump command, without performing any
# validation on them. See mysqldump documentation
# for details.
# options: --skip-comments
# List of one or more MongoDB databases to dump before
# creating a backup, run once per configuration file. The
# database dumps are added to your source directories at
# runtime, backed up, and removed afterwards. Requires
# mongodump/mongorestore commands. See
# https://docs.mongodb.com/database-tools/mongodump/ and
# https://docs.mongodb.com/database-tools/mongorestore/ for
# details.
# mongodb_databases:
# Database name (required if using this hook). Or
# "all" to dump all databases on the host. Note
# that using this database hook implicitly enables
# both read_special and one_file_system (see
# above) to support dump and restore streaming.
# - name: users
# Database hostname to connect to. Defaults to
# connecting to localhost.
# hostname: database.example.org
# Port to connect to. Defaults to 27017.
# port: 27018
# Username with which to connect to the database.
# Skip it if no authentication is needed.
# username: dbuser
# Password with which to connect to the database.
# Skip it if no authentication is needed.
# password: trustsome1
# Authentication database where the specified
# username exists. If no authentication database
# is specified, the database provided in "name"
# is used. If "name" is "all", the "admin"
# database is used.
# authentication_database: admin
# Database dump output format. One of "archive",
# or "directory". Defaults to "archive". See
# mongodump documentation for details. Note that
# format is ignored when the database name is
# "all".
# format: directory
# Additional mongodump options to pass
# directly to the dump command, without performing
# any validation on them. See mongodump
# documentation for details.
# options: --role=someone
# ntfy:
# The topic to publish to.
# (https://ntfy.sh/docs/publish/)
# topic: topic
# The address of your self-hosted ntfy.sh instance.
# server: https://ntfy.your-domain.com
# start:
# The title of the message
# title: Ping!
# The message body to publish.
# message: Your backups have started.
# The priority to set.
# priority: urgent
# Tags to attach to the message.
# tags: incoming_envelope
# finish:
# The title of the message.
# title: Ping!
# The message body to publish.
# message: Your backups have finished.
# The priority to set.
# priority: urgent
# Tags to attach to the message.
# tags: incoming_envelope
# fail:
# The title of the message.
# title: Ping!
# The message body to publish.
# message: Your backups have failed.
# The priority to set.
# priority: urgent
# Tags to attach to the message.
# tags: incoming_envelope
# List of one or more monitoring states to ping for:
# "start", "finish", and/or "fail". Defaults to
# pinging for failure only.
# states:
# - start
# - finish
# Configuration for a monitoring integration with
# Healthchecks. Create an account at https://healthchecks.io
# (or self-host Healthchecks) if you'd like to use this
# service. See borgmatic monitoring documentation for details.
# healthchecks:
# Healthchecks ping URL or UUID to notify when a
# backup begins, ends, or errors.
# ping_url: https://hc-ping.com/your-uuid-here
# Verify the TLS certificate of the ping URL host.
# Defaults to true.
# verify_tls: false
# Send borgmatic logs to Healthchecks as part of the
# "finish" state. Defaults to true.
# send_logs: false
# Number of bytes of borgmatic logs to send to
# Healthchecks, ideally the same as PING_BODY_LIMIT
# configured on the Healthchecks server. Set to 0 to
# send all logs and disable this truncation. Defaults
# to 100000.
# ping_body_limit: 200000
# List of one or more monitoring states to ping for:
# "start", "finish", and/or "fail". Defaults to
# pinging for all states.
# states:
# - finish
# Configuration for a monitoring integration with Cronitor.
# Create an account at https://cronitor.io if you'd
# like to use this service. See borgmatic monitoring
# documentation for details.
# cronitor:
# Cronitor ping URL to notify when a backup begins,
# ends, or errors.
# ping_url: https://cronitor.link/d3x0c1
# Configuration for a monitoring integration with PagerDuty.
# Create an account at https://www.pagerduty.com/ if you'd
# like to use this service. See borgmatic monitoring
# documentation for details.
# pagerduty:
# PagerDuty integration key used to notify PagerDuty
# when a backup errors.
# integration_key: a177cad45bd374409f78906a810a3074
# Configuration for a monitoring integration with Cronhub.
# Create an account at https://cronhub.io if you'd like to
# use this service. See borgmatic monitoring documentation
# for details.
# cronhub:
# Cronhub ping URL to notify when a backup begins,
# ends, or errors.
# ping_url: https://cronhub.io/ping/1f5e3410-254c-5587
# Umask used when executing hooks. Defaults to the umask that
# borgmatic is run with.
# umask: 0077
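The retention section above is turned by borgmatic into flags for `borg prune` (keeping 7 daily, 4 weekly, 6 monthly, and 5 yearly archives). A simplified sketch of that option-to-flag mapping — an illustration, not borgmatic's actual code:

```python
def prune_flags(retention: dict) -> list:
    # Map borgmatic-style retention options (e.g. keep_daily: 7) onto
    # the equivalent "borg prune" flags (--keep-daily 7).
    flags = []
    for option, count in retention.items():
        flags += ["--" + option.replace("_", "-"), str(count)]
    return flags

# The values configured above:
print(prune_flags({"keep_daily": 7, "keep_weekly": 4,
                   "keep_monthly": 6, "keep_yearly": 5}))
```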

View File

@@ -0,0 +1 @@
0 1 * * * PATH=$PATH:/usr/bin /usr/local/bin/borgmatic --stats -v 0 2>&1
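The crontab entry above (`0 1 * * *`) fires once per day at 01:00. A tiny check of how the first two cron fields match a timestamp:

```python
from datetime import datetime

def matches_nightly(dt: datetime) -> bool:
    # "0 1 * * *": minute 0, hour 1, any day of month, month, and weekday
    return dt.minute == 0 and dt.hour == 1

print(matches_nightly(datetime(2025, 5, 28, 1, 0)))   # the nightly borgmatic run -> True
print(matches_nightly(datetime(2025, 5, 28, 13, 0)))  # not mid-afternoon -> False
```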

View File

@@ -0,0 +1 @@
MAIL_PASSWORD={{ smtp_password }}

View File

@@ -0,0 +1,3 @@
VIRTUAL_HOST={{ nextcloud_hostname }}
LETSENCRYPT_HOST={{ nextcloud_hostname }}
LETSENCRYPT_EMAIL={{ letsencrypt_email }}

View File

@@ -0,0 +1,3 @@
FROM nginx:1.28-alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf

View File

@@ -0,0 +1,271 @@
# Adapted from https://docs.nextcloud.com/server/31/admin_manual/installation/nginx.html
# Insert as /etc/nginx/conf.d/default.conf
# Set .mjs and .wasm MIME types
# Either include it in the default mime.types list
# and include that list explicitly or add the file extension
# only for Nextcloud like below:
types {
text/javascript mjs;
application/wasm wasm;
}
upstream php-handler {
server nextcloud:9000;
}
# Set the `immutable` cache control options only for assets with a cache busting `v` argument
map $arg_v $asset_immutable {
"" "";
default ", immutable";
}
server {
listen 80;
server_name nc.noodlefactory.co.uk;
# Prevent nginx HTTP Server Detection
server_tokens off;
# HSTS settings
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "noindex, nofollow" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Path to the root of your installation
    root /var/www/html;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # The following 2 rules are only needed for the user_webfinger app.
    # Uncomment them if you're planning to use this app.
    #rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    #rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

    # The following rule is only needed for the Social app.
    # Uncomment it if you're planning to use this app.
    #rewrite ^/.well-known/webfinger /public.php?service=webfinger last;

    # Set the max upload size and increase the upload timeout
    client_max_body_size 10G;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

    # This setting allows you to optimize HTTP/2 bandwidth.
    # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
    # for tuning hints
    client_body_buffer_size 512k;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;
    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that Nginx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in an HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
        fastcgi_param front_controller_active true; # Enable pretty urls

        fastcgi_pass php-handler;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;

        fastcgi_max_temp_file_size 0;
    }

    # Serve static files
    location ~ \.(?:css|js|mjs|svg|gif|ico|jpg|png|webp|wasm|tflite|map|ogg|flac)$ {
        try_files $uri /index.php$request_uri;

        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Cache-Control "public, max-age=15778463$asset_immutable";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "noindex, nofollow" always;
        add_header X-XSS-Protection "1; mode=block" always;

        access_log off; # Optional: Don't log access to assets
    }

    location ~ \.(otf|woff2?)$ {
        try_files $uri /index.php$request_uri;
        expires 7d; # Cache-Control policy borrowed from `.htaccess`
        access_log off; # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
##
## location / {
## rewrite ^ /index.php;
## }
##
## location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
## deny all;
## }
## location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
## deny all;
## }
##
## location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
## fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
## set $path_info $fastcgi_path_info;
## try_files $fastcgi_script_name =404;
## include fastcgi_params;
## fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
## fastcgi_param PATH_INFO $path_info;
## # fastcgi_param HTTPS on;
##
## # Avoid sending the security headers twice
## fastcgi_param modHeadersAvailable true;
##
## # Enable pretty urls
## fastcgi_param front_controller_active true;
## fastcgi_pass php-handler;
## fastcgi_intercept_errors on;
## fastcgi_request_buffering off;
## }
##
## location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
## try_files $uri/ =404;
## index index.php;
## }
##
## # Adding the cache control header for js, css and map files
## # Make sure it is BELOW the PHP block
## location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
## try_files $uri /index.php$request_uri;
## add_header Cache-Control "public, max-age=15778463";
## # Add headers to serve security related headers (It is intended to
## # have those duplicated to the ones above)
## # Before enabling Strict-Transport-Security headers please read into
## # this topic first.
## #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
## #
## # WARNING: Only add the preload option once you read about
## # the consequences in https://hstspreload.org/. This option
## # will add the domain to a hardcoded list that is shipped
## # in all major browsers and getting removed from this list
## # could take several months.
## add_header Referrer-Policy "no-referrer" always;
## add_header X-Content-Type-Options "nosniff" always;
## add_header X-Download-Options "noopen" always;
## add_header X-Frame-Options "SAMEORIGIN" always;
## add_header X-Permitted-Cross-Domain-Policies "none" always;
## add_header X-Robots-Tag "none" always;
## add_header X-XSS-Protection "1; mode=block" always;
##
## # Optional: Don't log access to assets
## access_log off;
## }
##
## location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap|mp4|webm)$ {
## try_files $uri /index.php$request_uri;
## # Optional: Don't log access to other assets
## access_log off;
## }
}
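The `map $arg_v $asset_immutable` block near the top of the config appends `, immutable` to the static-asset Cache-Control header only when the URL carries a cache-busting `?v=` query argument. A minimal Python sketch of the resulting header value (the function name is illustrative, not part of the config):

```python
def asset_cache_control(arg_v: str) -> str:
    # Mirrors the nginx map: empty $arg_v yields "", anything else ", immutable".
    immutable = ", immutable" if arg_v else ""
    # 15778463 seconds is roughly six months, matching the static-asset location block.
    return "public, max-age=15778463" + immutable

print(asset_cache_control(""))        # → public, max-age=15778463
print(asset_cache_control("abc123"))  # → public, max-age=15778463, immutable
```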