mirror of https://github.com/chylex/My-Server-Docker-Setup.git synced 2025-04-10 16:15:42 +02:00

Add db-backup image

This commit is contained in:
chylex 2022-05-10 19:19:55 +02:00
parent 24f5e0d0b8
commit 04323576c3
Signed by: chylex
GPG Key ID: 4DE42C8F19A80548
11 changed files with 503 additions and 1 deletions


@@ -8,4 +8,9 @@ if [[ "$1" == "" ]] || [[ "$1" == "nginx-proxy" ]]; then
docker build --pull -t local/nginx-proxy "$BASE/nginx-proxy"
fi
if [[ "$1" == "" ]] || [[ "$1" == "db-backup" ]]; then
echo "Building local/db-backup..."
docker build --pull -t local/db-backup "$BASE/db-backup"
fi
echo "Done!"


@@ -0,0 +1,35 @@
FROM alpine AS cron
ENV SUPERCRONIC_VERSION="v0.1.12"
ENV SUPERCRONIC_PACKAGE="supercronic-linux-amd64"
ENV SUPERCRONIC_SHA1SUM="048b95b48b708983effb2e5c935a1ef8483d9e3e"
ENV SUPERCRONIC_URL="https://github.com/aptible/supercronic/releases/download/$SUPERCRONIC_VERSION/$SUPERCRONIC_PACKAGE"
RUN apk add --update --no-cache ca-certificates curl && \
curl --fail --silent --show-error --location --output /supercronic "${SUPERCRONIC_URL}" && \
echo "${SUPERCRONIC_SHA1SUM} /supercronic" | sha1sum -c - && \
chmod +x /supercronic
FROM alpine
ENV CONTAINER_LOG_LEVEL=NOTICE
COPY --from=cron /supercronic /bin/supercronic
RUN apk --update --no-cache add \
bash \
postgresql14-client \
tzdata \
zstd
COPY ["scripts/*", "/scripts/"]
RUN touch /crontab && \
mkdir /tmp/backups && \
chmod 755 /scripts/* && \
chmod 777 /tmp/backups && \
chmod 666 /crontab
ENTRYPOINT ["/scripts/entrypoint.sh"]

.images/db-backup/LICENSE

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2022 Dave Conroy, chylex
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

.images/db-backup/README.md

@@ -0,0 +1,103 @@
This image contains scripts that periodically back up an SQL database server (currently, only PostgreSQL is supported). It is a heavily modified version of [tiredofit/docker-db-backup](https://github.com/tiredofit/docker-db-backup), with these improvements and changes:
- Ability to run as a non-root user
- Scheduling based on `cron` (via [supercronic](https://github.com/aptible/supercronic))
- Fully automated [backup restoration](#restoring-a-backup)
- Removal of features not necessary for this server
# Environment Variables
- `DB_TYPE` is the type of database server.
  - Allowed: `postgres`
- `DB_HOST` is the hostname of the database server. The hostname is the name of the service container in `docker-compose.yml`.
  - Example: `postgres`
- `DB_PORT` is the port of the database server.
  - Default: `5432` for `postgres`
- `DB_USER` is the database user that has permissions to create or restore backups.
- `DB_PASS` is the database user's password.
- `COMPRESSION` sets the compression algorithm for backup files.
  - Default: `zstd`
  - Allowed: `none`, `gzip`, `zstd`
- `COMPRESSION_LEVEL` is the strength of compression. Higher levels take longer but usually produce smaller files.
  - Default: `9` when using `gzip`
  - Default: `10` when using `zstd`
  - Allowed: `1` to `9` when using `gzip`
  - Allowed: `1` to `19` when using `zstd`
- `BACKUP_RETENTION_MINUTES` is the number of minutes backups are kept for. Older backups are deleted when a new backup is created.
  - Default: `1440` (24 hours)
  - Example: `10080` (7 days)
  - Example: `43200` (30 days)
  - Example: `525600` (365 days)
  - Example: `""` (disables automatic cleanup)
- `CRON` is the cron expression that determines how often backups are made. The format is based on [cronexpr](https://github.com/aptible/supercronic/tree/master/cronexpr).
  - Default: `0 */2 * * *` (every 2 hours)
  - Example: `0 */4 * * *` (every 4 hours)
  - Example: `0 * * * *` (every hour)
  - Example: `0 0 * * *` (every day at midnight)
- `TZ` is the server's timezone.
  - Default: `UTC`
  - Example: `Europe/Prague`
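For reference, here is a minimal sketch of how these variables might be wired up in a service's `docker-compose.yml`. The service name, volume path, and user below are illustrative placeholders, not taken from any actual service in this repository:

```yaml
services:
  backup:
    image: local/db-backup
    environment:
      DB_TYPE: "postgres"
      DB_HOST: "postgres"        # name of the database service container
      DB_USER: "app_example_db"  # hypothetical database user
      DB_PASS: "changeme"        # placeholder; use a secret in practice
      COMPRESSION: "zstd"
      COMPRESSION_LEVEL: "10"
      CRON: "0 */2 * * *"
      TZ: "UTC"
    volumes:
      - "/srv/example/postgres.backup:/backup"
```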
# Locations
Database files and backups are stored in folders that follow this pattern:
- `/srv/<service>/postgres` for the database files
- `/srv/<service>/postgres.backup` for the backups
# Retention
By default, backups are made every two hours and are **only kept for one day**.
The rationale behind these defaults is that I expect you to have daily backups of your whole server. With daily server backups, there is no need to keep these database backups for more than a day, since anything older can be restored from the server backups.
You can change both the frequency and retention of backups using environment variables.
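The retention setting is a minute count (the cleanup job in `backup.sh` compares file age in minutes), so common retention periods are simple multiples:

```shell
# Common retention periods expressed in minutes.
echo $((60 * 24))       # 1 day   -> 1440
echo $((60 * 24 * 7))   # 7 days  -> 10080
echo $((60 * 24 * 30))  # 30 days -> 43200
echo $((60 * 24 * 365)) # 1 year  -> 525600
```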
# Compression
The default compression settings use `zstd` with compression level 10 and a 24-bit (`16 MiB`) sliding window (the window size is hardcoded in the scripts). I found this to be a good balance of speed, size, and memory usage.
You can experiment with different settings, but keep in mind that increasing either the compression level or the sliding window size increases memory requirements. All services that use this image impose a hard limit of `128 MB` RAM on the backup container, which is already very close to the amount this image actually uses with the default compression settings.
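To make the window size concrete: the scripts pass `--long=24` to `zstd`, which requests a 2^24-byte matching window, and that is where the `16 MiB` figure comes from:

```shell
# zstd's --long=24 flag sets a 2^24-byte sliding window.
window_bytes=$((1 << 24))
echo "$window_bytes"                       # 16777216
echo "$((window_bytes / 1024 / 1024)) MiB" # 16 MiB
```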
# Server Downtime
If the database server is not available, the backup script waits for a minute and then checks the server's availability again. If the server is still unavailable when the next scheduled backup is due, the scheduler ([supercronic](https://github.com/aptible/supercronic)) detects that the previous backup has not finished yet, and keeps postponing the next backup until the previous one finishes.
# Restoring a Backup
Every service folder that uses this image includes a `restorebackup.sh` script. To restore the backup, `cd` into the service folder and run `./restorebackup.sh` as `root`.
The script will show you a numbered list of available backups (see [Locations](#locations)), and ask you which backup you want to restore. Type the number next to the file name and press `Enter` to proceed with the restoration, or press `Ctrl + C` to exit the script.
You may see the following two errors during the restoration; both are expected:
- `ERROR: current user cannot be dropped`
- `ERROR: role "<role>" already exists`
The restoration process should finish automatically. In case something goes wrong, here is a step-by-step description to help you troubleshoot:
1. The script checks whether the database container is running.
   - If the database container is not running, the script shows how to start it, and exits. This gives you a chance to ensure the database container is working.
   - If the database container isn't working, you can try wiping the folder with the actual database files (see [Locations](#locations)) and starting the database container again to create a fresh database.
2. If the backup container is still running, it is stopped.
3. If the service's server container is running, it is stopped and will be restarted when the script exits (whether it finishes successfully or not).
4. In the folder where backups are stored (see [Locations](#locations)), a file named `restore` is created. The file contains the name of the selected backup.
5. The backup container is started. It finds the `restore` file and initiates restoration.
   - All active connections to the database server are terminated (PostgreSQL does not allow dropping databases with active connections).
   - The SQL statements in the backup file are executed.
   - The restored databases are vacuumed and analyzed.
6. If the restoration succeeded, the `restore` file is deleted and the backup container is started again to resume scheduled backups. Otherwise, the backup container stays stopped so that you can fix the issue.
You can start the backup container manually using `docker compose up -d backup`, and see its logs using `docker compose logs -f backup`.
If the backup container is started while the `restore` file still exists, it will print an error and wait for the file to be deleted before it starts scheduling backups again.
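The `restore`-file handshake in steps 4 to 6 can be sketched with plain files; the temp directory below is a stand-in for the real backup folder:

```shell
backup_dir="$(mktemp -d)" # stand-in for /srv/<service>/postgres.backup

# Step 4: the restoration script records the chosen backup's name.
echo "20220510-191955.sql.zst" > "$backup_dir/restore"

# Step 5: on startup, the backup container reads the marker to find the file.
selected="$(cat "$backup_dir/restore")"
echo "would restore: $selected"

# Step 6: on success, the marker is removed so scheduled backups can resume.
rm "$backup_dir/restore"
[ ! -e "$backup_dir/restore" ] && echo "marker cleared"
```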
## Restoring from an Alternative Location
If you need to restore a backup from an alternative location (e.g. from a daily system backup, as described in the [Retention](#retention) section), do the following:
1. Stop the backup container using `docker compose stop backup`
2. Copy the backup file into the folder where current backups are stored (see [Locations](#locations))
3. Run the restoration script
You must ensure the file is readable and writable by the service's designated database user. If the file in the alternative location already has correct ownership information, use `cp -a` when copying the file to preserve ownership.
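A sketch of the copy step, showing that `cp -a` preserves the file's mode (ownership is preserved too when run as `root`); the paths here are illustrative stand-ins:

```shell
src="$(mktemp -d)" # stand-in for the alternative location
dst="$(mktemp -d)" # stand-in for /srv/<service>/postgres.backup
echo "dummy" > "$src/20220101-000000.sql.zst"
chmod 640 "$src/20220101-000000.sql.zst"

# Plain `cp` applies the umask; `cp -a` keeps mode, timestamps, and ownership.
cp -a "$src/20220101-000000.sql.zst" "$dst/"
stat -c '%a' "$dst/20220101-000000.sql.zst" # prints: 640
```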


@@ -0,0 +1,73 @@
#!/bin/bash
set -e
function restore() {
BACKUP_FOLDER="/srv/$1/postgres.backup"
RESTORE_FILE="$BACKUP_FOLDER/restore"
cd "/app/$1"
if ! docker compose ps --services --status running | grep -q -x postgres; then
echo "The PostgreSQL container is not running!"
echo "You can start it using:"
echo " docker compose up -d postgres"
exit 1
fi
if [ ! -d "$BACKUP_FOLDER" ]; then
echo "The backup folder is missing: $BACKUP_FOLDER"
exit 1
fi
readarray -t BACKUP_FILES < <(find "$BACKUP_FOLDER"/ -mindepth 1 -type f -name '*.sql*' -printf '%P\n' | sort --reverse)
if [[ ${#BACKUP_FILES[@]} == 0 ]]; then
echo "The backup folder contains no backups: $BACKUP_FOLDER"
exit 1
fi
for ((i = 0; i < ${#BACKUP_FILES[@]}; i++)); do
path="$BACKUP_FOLDER/${BACKUP_FILES[$i]}"
item="$((i + 1))) ${BACKUP_FILES[$i]}"
echo -n "$item "
printf "%$((28-${#item}))s" " "
echo -n "| "
du -h "$path" | awk '{ print $1 }'
done
filename=""
read -rp "Select file to restore: " option
if [[ "$option" =~ ^[1-9][0-9]*$ ]]; then
filename=${BACKUP_FILES[$option-1]}
fi
if [ -z "$filename" ]; then
echo "Invalid option, exiting..."
exit 1
fi
if docker compose ps --services --status running | grep -q -x "backup"; then
docker compose stop backup
fi
if docker compose ps --services --status running | grep -q -x "$2"; then
docker compose stop "$2"
trap 'echo "Restarting server container..." && docker compose up -d "'"$2"'"' EXIT
fi
echo "Marking file for restoration: $filename"
echo "$filename" > "$RESTORE_FILE"
chmod 600 "$RESTORE_FILE"
chown "app_$1_db:app_$1_db" "$RESTORE_FILE"
echo "Starting backup restoration..."
docker compose run --rm --entrypoint=/scripts/restore.sh backup
echo "Starting backup container to resume scheduled backups..."
docker compose up -d backup
echo "Backup restored!"
}


@@ -0,0 +1,63 @@
#!/bin/bash
set -e
source /scripts/database.sh
backupdir=/backup
tmpdir=/tmp/backups
COMPRESSION=${COMPRESSION:-zstd}
# No colon in the expansion: an explicitly empty value disables cleanup.
BACKUP_RETENTION_MINUTES=${BACKUP_RETENTION_MINUTES-1440}
target="$(date +%Y%m%d-%H%M%S).sql"
### Functions
backup_postgresql() {
pg_dumpall --clean --if-exists --quote-all-identifiers | $dumpoutput > "$tmpdir/$target"
}
compression() {
case "${COMPRESSION,,}" in
"gzip")
target="$target.gz"
level="${COMPRESSION_LEVEL:-"9"}"
dumpoutput="gzip -$level "
print_notice "Compressing backup with gzip (level $level)"
;;
"zstd")
target="$target.zst"
level="${COMPRESSION_LEVEL:-"10"}"
dumpoutput="zstd -$level --long=24 "
print_notice "Compressing backup with zstd (level $level)"
;;
"none")
dumpoutput="cat "
;;
esac
}
move_backup() {
SIZE_BYTES=$(stat -c%s "$tmpdir/$target")
SIZE_HUMAN=$(du -h "$tmpdir/$target" | awk '{ print $1 }')
print_notice "Backup ${target} created with the size of ${SIZE_BYTES} bytes (${SIZE_HUMAN})"
mkdir -p "$backupdir"
mv "$tmpdir/$target" "$backupdir/$target"
}
### Commence Backup
mkdir -p "$tmpdir"
print_notice "Starting backup at $(date)"
### Take a Dump
check_db_availability 1m
compression
backup_"${DB_TYPE}"
move_backup
### Automatic Cleanup
if [[ -n "$BACKUP_RETENTION_MINUTES" ]]; then
print_notice "Cleaning up old backups"
find "$backupdir"/ -type f -mmin +"${BACKUP_RETENTION_MINUTES}" -delete
fi


@@ -0,0 +1,46 @@
#!/bin/bash
source /scripts/utils.sh
sanity_var DB_TYPE "Database Type"
sanity_var DB_HOST "Database Host"
sanity_var DB_USER "Database User"
# Resolve DB_PASS from DB_PASS_FILE (Docker secrets) before checking for it.
if [ -z "${DB_PASS:-}" ] && [ -n "${DB_PASS_FILE:-}" ]; then
file_env 'DB_PASS'
fi
sanity_var DB_PASS "Database Password"
case "${DB_TYPE,,}" in
"postgres" | "postgresql")
DB_TYPE=postgresql
DB_PORT="${DB_PORT:-5432}"
export PGHOST="${DB_HOST}"
export PGPORT="${DB_PORT}"
export PGUSER="${DB_USER}"
export PGPASSWORD="${DB_PASS}"
;;
*)
echo "Unknown database type: ${DB_TYPE}"
exit 1
;;
esac
COUNTER=0
report_db_unavailable() {
print_warn "Database server '${DB_HOST}' is not accessible, retrying... waited $COUNTER${1: -1} so far"
sleep "$1"
(( COUNTER+="${1::-1}" ))
}
check_db_availability() {
case "${DB_TYPE}" in
"postgresql")
until pg_isready -q; do
report_db_unavailable "$1"
done
;;
esac
COUNTER=0
}
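A note on the retry helper above: it splits a duration argument like `1m` into its numeric part and unit suffix using bash substring expansion, e.g.:

```shell
#!/bin/bash
duration="1m"
echo "${duration::-1}" # everything but the last character: 1
echo "${duration: -1}" # the last character only: m
```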


@@ -0,0 +1,12 @@
#!/bin/bash
set -e
if [ -f "/backup/restore" ]; then
echo "A backup restore file is present; it is not safe to resume the backup schedule. Waiting for the file to be removed..."
while [ -f "/backup/restore" ]; do
sleep 5s
done
fi
echo "${CRON:-"0 */2 * * *"} /bin/bash /scripts/backup.sh" > /crontab
/bin/supercronic /crontab


@@ -0,0 +1,49 @@
#!/bin/bash
set -e
source /scripts/database.sh
RESTORE_FILENAME="$(cat /backup/restore)"
RESTORE_PATH="/backup/$RESTORE_FILENAME"
if [ ! -f "$RESTORE_PATH" ]; then
echo "Backup file missing: $RESTORE_FILENAME"
exit 1
fi
# Setup Decompression
COMPRESSED_EXTENSION="${RESTORE_FILENAME##*.sql}"
case "$COMPRESSED_EXTENSION" in
".gz")
dumpoutput="zcat"
echo "Decompressing backup with gzip"
;;
".zst")
dumpoutput="zstdcat"
echo "Decompressing backup with zstd"
;;
"")
dumpoutput="cat"
;;
*)
echo "Unknown extension: $COMPRESSED_EXTENSION"
exit 1
;;
esac
# Functions
restore_postgresql() {
echo "Restoring PostgreSQL..."
echo "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid != pg_backend_pid() AND datname IS NOT NULL" | psql --echo-errors -d postgres >/dev/null
$dumpoutput "$RESTORE_PATH" | psql --echo-errors -d postgres >/dev/null
vacuumdb --all --analyze
}
# Restore Backup
check_db_availability 10s
restore_"${DB_TYPE}"
### Cleanup
rm "/backup/restore"
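The extension detection above strips the longest prefix ending in `.sql` from the file name, leaving only the compression suffix (or an empty string for an uncompressed `.sql` dump):

```shell
#!/bin/bash
for name in "20220510-191955.sql.zst" "20220510-191955.sql.gz" "20220510-191955.sql"; do
	echo "ext: '${name##*.sql}'" # prints '.zst', '.gz', then ''
done
```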


@@ -0,0 +1,95 @@
#!/bin/bash
## Docker Secrets Support
## usage: file_env VAR [DEFAULT]
## ie: file_env 'XYZ_DB_PASSWORD' 'example'
## (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
local var="$1"
local fileVar="${var}_FILE"
local def="${2:-}"
local val="$def"
if [ "${!fileVar:-}" ]; then
val="$(cat "${!fileVar}")"
elif [ "${!var:-}" ]; then
val="${!var}"
fi
if [ -z "${val}" ]; then
print_error "error: neither $var nor $fileVar are set but are required"
exit 1
fi
export "$var"="$val"
unset "$fileVar"
}
## Suppress `set -x` tracing around echo statements to reduce noise in the log files
output_off() {
if [ "${DEBUG_MODE,,}" = "true" ] ; then
set +x
fi
}
output_on() {
if [ "${DEBUG_MODE,,}" = "true" ] ; then
set -x
fi
}
print_info() {
output_off
echo -e "[INFO] $1"
output_on
}
print_debug() {
output_off
case "$CONTAINER_LOG_LEVEL" in
"DEBUG" )
echo -e "[DEBUG] $1"
;;
esac
output_on
}
print_notice() {
output_off
case "$CONTAINER_LOG_LEVEL" in
"DEBUG" | "NOTICE" )
echo -e "[NOTICE] $1"
;;
esac
output_on
}
print_warn() {
output_off
case "$CONTAINER_LOG_LEVEL" in
"DEBUG" | "NOTICE" | "WARN")
echo -e "[WARN] $1"
;;
esac
output_on
}
print_error() {
output_off
case "$CONTAINER_LOG_LEVEL" in
"DEBUG" | "NOTICE" | "WARN" | "ERROR")
echo -e "[ERROR] $1"
;;
esac
output_on
}
## Check if a Variable is Defined
## Usage: sanity_var varname "Description"
sanity_var() {
print_debug "Looking for existence of $1 environment variable"
if [ ! -v "$1" ]; then
print_error "No '$2' Entered! - Set '\$$1'"
exit 1
fi
}
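A self-contained demonstration of the `file_env` pattern above, with `print_error` stubbed out since `utils.sh` itself is not sourced here:

```shell
#!/bin/bash
print_error() { echo "[ERROR] $1"; }

file_env() {
	local var="$1"
	local fileVar="${var}_FILE"
	local val="${2:-}"
	# Prefer the *_FILE indirection (Docker secrets), then the plain variable.
	if [ "${!fileVar:-}" ]; then
		val="$(cat "${!fileVar}")"
	elif [ "${!var:-}" ]; then
		val="${!var}"
	fi
	if [ -z "${val}" ]; then
		print_error "error: neither $var nor $fileVar are set but are required"
		exit 1
	fi
	export "$var"="$val"
	unset "$fileVar"
}

secret_file="$(mktemp)"
echo "s3cret" > "$secret_file"
export DB_PASS_FILE="$secret_file"

file_env 'DB_PASS'
echo "$DB_PASS" # prints: s3cret

rm "$secret_file"
```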


@@ -14,7 +14,7 @@ This repository contains configuration and setup guides for services that run on
| Image | Description | License (*) |
|------------------------------------|-------------------------------------------------------------------------------|------------------------------------------|
| [nginx-proxy](.images/nginx-proxy) | Reverse proxy that provides HTTP / HTTPS access to web servers in containers. | [Unlicense](.images/nginx-proxy/LICENSE) |
| [db-backup](.images/db-backup) | Periodic SQL database server backup. | [MIT](.images/db-backup/LICENSE) |
# 1. Requirements