
docker-db-backup's Introduction

github.com/tiredofit/docker-db-backup



About

This will build a container for backing up multiple types of DB servers.

Backs up CouchDB, InfluxDB, MySQL/MariaDB, Microsoft SQL, MongoDB, Postgres, Redis servers.

  • dump to the local filesystem, or back up to S3-compatible services and Azure
  • multiple backup job support
    • selectable when to start the first dump, whether time of day or relative to container start time
    • selectable interval
    • selectable blackout periods during which no backups occur
    • selectable database user and password
    • selectable cleanup and archive capabilities
    • selectable database name support - all databases, single, or multiple databases
    • backup all to separate files or one singular file
  • checksum support - choose to have an MD5 or SHA1 hash generated after backup for verification
  • compression support (none, gz, bz, xz, zstd)
  • encryption support (passphrase and public key)
  • notify upon job failure to email, matrix, mattermost, rocketchat, custom script
  • zabbix metrics support
  • hooks to execute pre and post backup job for customization purposes
  • companion script to aid in restores


Prerequisites and Assumptions

  • You must have a working connection to one of the supported DB Servers and appropriate credentials

Installation

Build from Source

Clone this repository and build the image with docker build <arguments> (imagename) .
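
For example (a sketch; the image name and tag are arbitrary):

docker build -t tiredofit/db-backup:custom .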

Prebuilt Images

Builds of the image are available on Docker Hub

Builds of the image are also available on the Github Container Registry

docker pull ghcr.io/tiredofit/docker-db-backup:(imagetag)

The following image tags are available along with their tagged release based on what's written in the Changelog:

Alpine Base Tag
latest :latest

docker pull docker.io/tiredofit/db-backup:(imagetag)

Multi Architecture

Images are built primarily for the amd64 architecture, and may also include builds for arm/v7, arm64 and others. These variants are all unsupported. Consider sponsoring my work so that I can work with various hardware. To see if this image supports multiple architectures, type docker manifest inspect (image):(tag)

Configuration

Quick Start

  • The quickest way to get started is using docker-compose. See the examples folder for a series of example compose.yml files that can be modified for development or production use; a minimal docker run sketch follows this list.

  • Set various environment variables to understand the capabilities of this image.

  • Map persistent storage for access to configuration and data files for backup.
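
As a minimal sketch of the same setup using plain docker run (hostnames, credentials, and paths are placeholders to adapt):

docker run -d --name db-backup \
  -v ./backups:/backup \
  -e DB01_TYPE=mysql \
  -e DB01_HOST=mariadb \
  -e DB01_NAME=mydatabase \
  -e DB01_USER=backup \
  -e DB01_PASS=password \
  docker.io/tiredofit/db-backup:latest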

Persistent Storage

The following directories are used for configuration and can be mapped for persistent storage.

Directory Description
/backup Backups
/assets/scripts/pre Optional Put custom scripts in this directory to execute before backup operations
/assets/scripts/post Optional Put custom scripts in this directory to execute after backup operations
/logs Optional Logfiles for backup jobs

Environment Variables

Base Images used

This image relies on a customized Alpine Linux base image that includes an init system for added capabilities. Outgoing SMTP capabilities are handled via msmtp. Individual container performance monitoring is performed by zabbix-agent. Additional tools include: bash, curl, less, logrotate, nano.

Be sure to view the following repositories to understand all the customizable options:

Image Description
OS Base Customized Image based on Alpine Linux

Container Options

Parameter Description Default
MODE AUTO to use the internal scheduling routines, or MANUAL to run backups only when executed by your own means AUTO
USER_DBBACKUP The uid that the image should read and write files as (username is dbbackup) 10000
GROUP_DBBACKUP The gid that the image should read and write files as (groupname is dbbackup) 10000
LOG_PATH Path to log files /logs
TEMP_PATH Perform Backups and Compression in this temporary directory /tmp/backups/
MANUAL_RUN_FOREVER TRUE to keep the container running after manual backups, or FALSE if you wish the container to exit after the backup TRUE
DEBUG_MODE If set to true, print copious shell script messages to the container log. Otherwise only basic messages are printed. FALSE
BACKUP_JOB_CONCURRENCY How many backup jobs to run concurrently 1

Job Defaults

If these are set and the corresponding per-job variables are not set explicitly, these values will apply to all backup jobs.

Variable Description Default
DEFAULT_BACKUP_LOCATION Backup to FILESYSTEM, blobxfer or S3 compatible services like S3, Minio, Wasabi FILESYSTEM
DEFAULT_CHECKSUM Either MD5 or SHA1 or NONE MD5
DEFAULT_LOG_LEVEL Log output on screen and in files INFO NOTICE ERROR WARN DEBUG notice
DEFAULT_RESOURCE_OPTIMIZED Perform operations at a lower priority to the CPU and IO scheduler FALSE
DEFAULT_SKIP_AVAILABILITY_CHECK Before backing up - skip connectivity check FALSE
Compression Options
Variable Description Default
DEFAULT_COMPRESSION Use either Gzip GZ, Bzip2 BZ, XZip XZ, ZSTD ZSTD or none NONE ZSTD
DEFAULT_COMPRESSION_LEVEL Numerical value of what level of compression to use; most allow 1 to 9, except ZSTD which allows 1 to 19 3
DEFAULT_GZ_RSYNCABLE Use --rsyncable (gzip only) for faster rsync transfers and incremental backup deduplication. FALSE
DEFAULT_ENABLE_PARALLEL_COMPRESSION Use multiple cores when compressing backups TRUE or FALSE TRUE
DEFAULT_PARALLEL_COMPRESSION_THREADS Maximum amount of threads to use when compressing - Integer value e.g. 8 autodetected
Encryption Options

Encryption occurs after compression and the encrypted filename will have a .gpg suffix. A decryption sketch follows the table below.

Variable Description Default _FILE
DEFAULT_ENCRYPT Encrypt file after backing up with GPG FALSE
DEFAULT_ENCRYPT_PASSPHRASE Passphrase to encrypt file with GPG x
or
DEFAULT_ENCRYPT_PUBLIC_KEY Path of public key to encrypt file with GPG x
DEFAULT_ENCRYPT_PRIVATE_KEY Path of private key to encrypt file with GPG x
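
To restore an encrypted backup you will need to decrypt it before decompressing. A minimal sketch using standard GnuPG tooling, assuming passphrase encryption (the filename is a placeholder; depending on your GnuPG version you may also need --pinentry-mode loopback):

gpg --batch --passphrase "mypassphrase" --output backup.sql.zst --decrypt backup.sql.zst.gpg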
Scheduling Options
Variable Description Default
DEFAULT_BACKUP_INTERVAL How often to do a backup, in minutes after the first backup. Defaults to 1440 minutes, or once per day. 1440
DEFAULT_BACKUP_BEGIN What time to do the initial backup. Defaults to immediate (+0). +0
Must be in one of four formats:
Absolute HHMM, e.g. 2330 or 0415
Relative +MM, i.e. how many minutes after starting the container, e.g. +0 (immediate), +10 (in 10 minutes), or +90 (in an hour and a half)
Full datestamp, e.g. 2023-12-21 23:30:00
Cron expression, e.g. 30 23 * * * (understand the format; when used, BACKUP_INTERVAL is ignored)
DEFAULT_CLEANUP_TIME Value in minutes to delete old backups (only fired when the backup interval executes). 1440 would delete anything over 1 day old. You don't need to set this variable if you want to hold onto everything. FALSE
DEFAULT_ARCHIVE_TIME Value in minutes to move all files older than (x) from DEFAULT_FILESYSTEM_PATH to DEFAULT_FILESYSTEM_ARCHIVE_PATH - useful when pairing with an external backup system.
DEFAULT_BACKUP_BLACKOUT_BEGIN Use HHMM notation to start a blackout period where no backups occur, e.g. 0420
DEFAULT_BACKUP_BLACKOUT_END Use HHMM notation to set the end of the period where no backups occur, e.g. 0430

You may need to wrap your DEFAULT_BACKUP_BEGIN value in quotes for it to parse properly. There have been reports of values starting with a 0 being converted into a different format, which prevents the timer from starting at the correct time.

Default Database Options
CouchDB
Variable Description Default _FILE
DEFAULT_PORT CouchDB Port 5984 x
InfluxDB
Variable Description Default _FILE
DEFAULT_PORT InfluxDB Port (Version 1.x: 8088, Version 2.x: 8086) x
DEFAULT_INFLUX_VERSION Which major version of InfluxDB you are backing up from, 1 (1.x series) or 2 (2.x series; amd64 and aarch64/armv8 only for 2) 2
MariaDB/MySQL
Variable Description Default _FILE
DEFAULT_PORT MySQL / MariaDB Port 3306 x
DEFAULT_EXTRA_BACKUP_OPTS Pass extra arguments to the backup command only, add them here e.g. --extra-command
DEFAULT_EXTRA_ENUMERATION_OPTS Pass extra arguments to the database enumeration command only, add them here e.g. --extra-command
DEFAULT_EXTRA_OPTS Pass extra arguments to the backup and database enumeration command, add them here e.g. --extra-command
DEFAULT_MYSQL_CLIENT Choose between mariadb or mysql client to perform dump operations for compatibility purposes mariadb
DEFAULT_MYSQL_EVENTS Backup Events TRUE
DEFAULT_MYSQL_MAX_ALLOWED_PACKET Max allowed packet 512M
DEFAULT_MYSQL_SINGLE_TRANSACTION Backup in a single transaction TRUE
DEFAULT_MYSQL_STORED_PROCEDURES Backup stored procedures TRUE
DEFAULT_MYSQL_ENABLE_TLS Enable TLS functionality FALSE
DEFAULT_MYSQL_TLS_VERIFY (optional) If using TLS (by means of MYSQL_TLS_* variables) verify remote host FALSE
DEFAULT_MYSQL_TLS_VERSION Which TLS versions (v1.1, v1.2, v1.3) to utilize TLSv1.1,TLSv1.2,TLSv1.3
DEFAULT_MYSQL_TLS_CA_FILE Filename to load custom CA certificate for connecting via TLS /etc/ssl/cert.pem x
DEFAULT_MYSQL_TLS_CERT_FILE Filename to load client certificate for connecting via TLS x
DEFAULT_MYSQL_TLS_KEY_FILE Filename to load client key for connecting via TLS x
Microsoft SQL
Variable Description Default _FILE
DEFAULT_PORT Microsoft SQL Port 1433 x
DEFAULT_MSSQL_MODE Backup DATABASE or TRANSACTION logs DATABASE
MongoDB
Variable Description Default _FILE
DEFAULT_AUTH (Optional) Authentication Database x
DEFAULT_PORT MongoDB Port 27017 x
DEFAULT_MONGO_CUSTOM_URI If you wish to override the MongoDB connection string enter it here, e.g. mongodb+srv://username:[email protected] x
This environment variable will be parsed to populate the DB_NAME and DB_HOST variables to properly build your backup filenames.
You can override them by making your own entries
Postgresql
Variable Description Default _FILE
DEFAULT_AUTH (Optional) Authentication Database x
DEFAULT_BACKUP_GLOBALS Backup Globals as part of the backup procedure FALSE
DEFAULT_EXTRA_BACKUP_OPTS Pass extra arguments to the backup command only, add them here e.g. --extra-command
DEFAULT_EXTRA_ENUMERATION_OPTS Pass extra arguments to the database enumeration command only, add them here e.g. --extra-command
DEFAULT_EXTRA_OPTS Pass extra arguments to the backup and database enumeration command, add them here e.g. --extra-command
DEFAULT_PORT PostgreSQL Port 5432 x
Redis
Variable Description Default _FILE
DEFAULT_PORT Default Redis Port 6379 x
DEFAULT_EXTRA_ENUMERATION_OPTS Pass extra arguments to the database enumeration command only, add them here e.g. --extra-command
Default Storage Options

Options that are related to the value of DEFAULT_BACKUP_LOCATION

Filesystem

If DEFAULT_BACKUP_LOCATION = FILESYSTEM then the following options are used:

Variable Description Default
DEFAULT_CREATE_LATEST_SYMLINK Create a symbolic link pointing to last backup in this format: latest-(DB_TYPE)_(DB_NAME)_(DB_HOST) TRUE
DEFAULT_FILESYSTEM_PATH Directory where the database dumps are kept. /backup
DEFAULT_FILESYSTEM_PATH_PERMISSION Permissions to apply to backup directory 700
DEFAULT_FILESYSTEM_ARCHIVE_PATH Optional Directory where the database dumps archives are kept ${DEFAULT_FILESYSTEM_PATH}/archive/
DEFAULT_FILESYSTEM_PERMISSION Permissions to apply to files. 600
S3

If DEFAULT_BACKUP_LOCATION = S3 then the following options are used:

Parameter Description Default _FILE
DEFAULT_S3_BUCKET S3 Bucket name e.g. mybucket x
DEFAULT_S3_KEY_ID S3 Key ID (Optional) x
DEFAULT_S3_KEY_SECRET S3 Key Secret (Optional) x
DEFAULT_S3_PATH S3 Pathname to save to (must NOT end in a trailing slash e.g. 'backup') x
DEFAULT_S3_REGION Define region in which bucket is defined. Example: ap-northeast-2 x
DEFAULT_S3_HOST Hostname (and port) of S3-compatible service, e.g. minio:8080. Defaults to AWS. x
DEFAULT_S3_PROTOCOL Protocol to connect to DEFAULT_S3_HOST. Either http or https. Defaults to https. https x
DEFAULT_S3_EXTRA_OPTS Add any extra options to the end of the aws-cli process execution x
DEFAULT_S3_CERT_CA_FILE Map a volume and point to your custom CA Bundle for verification e.g. /certs/bundle.pem x
OR
DEFAULT_S3_CERT_SKIP_VERIFY Skip verifying self signed certificates when connecting TRUE
  • When DEFAULT_S3_KEY_ID and/or DEFAULT_S3_KEY_SECRET is not set, the container will try to use an assigned IAM role (if any) for uploading the backup files to the S3 bucket.
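
A sketch pointing the default storage at a self-hosted MinIO instance (hostnames, bucket, and credentials are placeholders; the DB01_* job variables are still required):

docker run -d --name db-backup \
  -e DEFAULT_BACKUP_LOCATION=S3 \
  -e DEFAULT_S3_BUCKET=mybucket \
  -e DEFAULT_S3_PATH=backup \
  -e DEFAULT_S3_HOST=minio:9000 \
  -e DEFAULT_S3_PROTOCOL=http \
  -e DEFAULT_S3_REGION=us-east-1 \
  -e DEFAULT_S3_KEY_ID=minioadmin \
  -e DEFAULT_S3_KEY_SECRET=minioadmin \
  docker.io/tiredofit/db-backup:latest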
Azure

If DEFAULT_BACKUP_LOCATION = blobxfer then the following options are used:

Parameter Description Default _FILE
DEFAULT_BLOBXFER_STORAGE_ACCOUNT Microsoft Azure Cloud storage account name. x
DEFAULT_BLOBXFER_STORAGE_ACCOUNT_KEY Microsoft Azure Cloud storage account key. x
DEFAULT_BLOBXFER_REMOTE_PATH Remote Azure path /docker-db-backup x
DEFAULT_BLOBXFER_MODE Azure Storage mode e.g. auto, file, append, block or page auto x
  • When DEFAULT_BLOBXFER_MODE is set to auto it will use blob containers by default. If the DEFAULT_BLOBXFER_REMOTE_PATH path does not exist a blob container with that name will be created.

This service uploads files from the backup target directory DEFAULT_FILESYSTEM_PATH. If a cleanup configuration is defined in DEFAULT_CLEANUP_TIME, the remote directory on Azure storage will also be cleaned automatically.

Hooks
Path Options
Parameter Description Default
DEFAULT_SCRIPT_LOCATION_PRE Location on filesystem inside container to execute bash scripts pre backup /assets/scripts/pre/
DEFAULT_SCRIPT_LOCATION_POST Location on filesystem inside container to execute bash scripts post backup /assets/scripts/post/
DEFAULT_PRE_SCRIPT Fill this variable with a command to execute before backing up
DEFAULT_POST_SCRIPT Fill this variable with a command to execute after backing up
Pre Backup

If you want to execute a custom script before a backup starts, you can drop bash scripts with the extension .sh in the location defined in DEFAULT_SCRIPT_LOCATION_PRE. See the following example:

$ cat pre-script.sh
#!/bin/bash

# #### Example Pre Script
# #### $1=DBXX_TYPE (Type of Backup)
# #### $2=DBXX_HOST (Backup Host)
# #### $3=DBXX_NAME (Name of Database backed up)
# #### $4=BACKUP START TIME (Seconds since Epoch)
# #### $5=BACKUP FILENAME (Filename)

echo "${1} Backup Starting on ${2} for ${3} at ${4}. Filename: ${5}"
## script DBXX_TYPE DBXX_HOST DBXX_NAME STARTEPOCH BACKUP_FILENAME
${f} "${backup_job_db_type}" "${backup_job_db_host}" "${backup_job_db_name}" "${backup_routines_start_time}" "${backup_job_file}"

Outputs the following on the console:

mysql Backup Starting on example-db for example at 1647370800. Filename: mysql_example_example-db_20220315-000000.sql.bz2

Post backup

If you want to execute a custom script at the end of a backup, you can drop bash scripts with the extension .sh in the location defined in DEFAULT_SCRIPT_LOCATION_POST. To support legacy users, /assets/custom-scripts is also scanned and executed. See the following example:

$ cat post-script.sh
#!/bin/bash

# #### Example Post Script
# #### $1=EXIT_CODE (After running backup routine)
# #### $2=DBXX_TYPE (Type of Backup)
# #### $3=DBXX_HOST (Backup Host)
# #### $4=DBXX_NAME (Name of Database backed up)
# #### $5=BACKUP START TIME (Seconds since Epoch)
# #### $6=BACKUP FINISH TIME (Seconds since Epoch)
# #### $7=BACKUP TOTAL TIME (Seconds between Start and Finish)
# #### $8=BACKUP FILENAME (Filename)
# #### $9=BACKUP FILESIZE
# #### $10=HASH (If CHECKSUM enabled)
# #### $11=MOVE_EXIT_CODE

echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ending ${6} for a duration of ${7} seconds. Filename: ${8} Size: ${9} bytes MD5: ${10}"
  ## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE MOVE_EXIT_CODE
  ${f} "${exit_code}" "${dbtype}" "${backup_job_db_host}" "${backup_job_db_name}" "${backup_routines_start_time}" "${backup_routines_finish_time}" "${backup_routines_total_time}" "${backup_job_file}" "${filesize}" "${checksum_value}" "${move_exit_code}"

Outputs the following on the console:

0 mysql Backup Completed on example-db for example on 1647370800 ending 1647370920 for a duration of 120 seconds. Filename: mysql_example_example-db_20220315-000000.sql.bz2 Size: 7795 bytes Hash: 952fbaafa30437494fdf3989a662cd40 0

If you wish to change the size value from bytes to megabytes, set the environment variable DB01_SIZE_VALUE=megabytes

You must make your scripts executable; an internal check will otherwise skip running them. If for some reason your filesystem or host is not detecting the executable bit correctly, use the environment variable DB01_POST_SCRIPT_SKIP_X_VERIFY=TRUE to bypass the check.
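
For reference, marking a script executable on the host before mounting it is a standard shell step (the path is a placeholder):

chmod +x ./scripts/post/post-script.sh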

Job Backup Options

If DEFAULT_ variables are set and you do not wish for those settings to carry over into a given job, you can set the corresponding job variable to the value unset. Otherwise, override them per backup job. Additional backup jobs can be scheduled by using the DB02_, DB03_, DB04_ ... prefixes. See Specific Database Options, which may overrule this list.
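
A sketch of two jobs in one container, with a default overridden back to unset for the second job (hosts and credentials are placeholders):

docker run -d --name db-backup \
  -v ./backups:/backup \
  -e DEFAULT_CHECKSUM=SHA1 \
  -e DB01_TYPE=mysql -e DB01_HOST=mariadb -e DB01_NAME=app -e DB01_USER=root -e DB01_PASS=password \
  -e DB02_TYPE=pgsql -e DB02_HOST=postgres -e DB02_NAME=app -e DB02_USER=postgres -e DB02_PASS=password \
  -e DB02_CHECKSUM=unset \
  docker.io/tiredofit/db-backup:latest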

Parameter Description Default _FILE
DB01_TYPE Type of DB Server to backup couch influx mysql mssql pgsql mongo redis sqlite3
DB01_HOST Server Hostname e.g. mariadb. For sqlite3, full path to DB file e.g. /backup/db.sqlite3 x
DB01_NAME Schema Name e.g. database x
DB01_USER username for the database(s) - Can use root for MySQL x
DB01_PASS (optional if DB doesn't require it) password for the database x
Variable Description Default
DB01_BACKUP_LOCATION Backup to FILESYSTEM, blobxfer or S3 compatible services like S3, Minio, Wasabi FILESYSTEM
DB01_CHECKSUM Either MD5 or SHA1 or NONE MD5
DB01_EXTRA_BACKUP_OPTS Pass extra arguments to the backup command only, add them here e.g. --extra-command
DB01_EXTRA_ENUMERATION_OPTS Pass extra arguments to the database enumeration command only, add them here e.g. --extra-command
DB01_EXTRA_OPTS Pass extra arguments to the backup and database enumeration command, add them here e.g. --extra-command
DB01_LOG_LEVEL Log output on screen and in files INFO NOTICE ERROR WARN DEBUG debug
DB01_RESOURCE_OPTIMIZED Perform operations at a lower priority to the CPU and IO scheduler FALSE
DB01_SKIP_AVAILABILITY_CHECK Before backing up - skip connectivity check FALSE
Compression Options
Variable Description Default
DB01_COMPRESSION Use either Gzip GZ, Bzip2 BZ, XZip XZ, ZSTD ZSTD or none NONE ZSTD
DB01_COMPRESSION_LEVEL Numerical value of what level of compression to use; most allow 1 to 9, except ZSTD which allows 1 to 19 3
DB01_GZ_RSYNCABLE Use --rsyncable (gzip only) for faster rsync transfers and incremental backup deduplication. FALSE
DB01_ENABLE_PARALLEL_COMPRESSION Use multiple cores when compressing backups TRUE or FALSE TRUE
DB01_PARALLEL_COMPRESSION_THREADS Maximum amount of threads to use when compressing - Integer value e.g. 8 autodetected
Encryption Options

Encryption will occur after compression and the resulting filename will have a .gpg suffix

Variable Description Default _FILE
DB01_ENCRYPT Encrypt file after backing up with GPG FALSE
DB01_ENCRYPT_PASSPHRASE Passphrase to encrypt file with GPG x
or
DB01_ENCRYPT_PUBLIC_KEY Path of public key to encrypt file with GPG x
DB01_ENCRYPT_PRIVATE_KEY Path of private key to encrypt file with GPG x
Scheduling Options
Variable Description Default
DB01_BACKUP_INTERVAL How often to do a backup, in minutes after the first backup. Defaults to 1440 minutes, or once per day. 1440
DB01_BACKUP_BEGIN What time to do the initial backup. Defaults to immediate (+0). +0
Must be in one of four formats:
Absolute HHMM, e.g. 2330 or 0415
Relative +MM, i.e. how many minutes after starting the container, e.g. +0 (immediate), +10 (in 10 minutes), or +90 (in an hour and a half)
Full datestamp, e.g. 2023-12-21 23:30:00
Cron expression, e.g. 30 23 * * * (understand the format; when used, BACKUP_INTERVAL is ignored)
DB01_CLEANUP_TIME Value in minutes to delete old backups (only fired when the backup interval executes). 1440 would delete anything over 1 day old. You don't need to set this variable if you want to hold onto everything. FALSE
DB01_ARCHIVE_TIME Value in minutes to move all files older than (x) from DB01_BACKUP_FILESYSTEM_PATH to DB01_BACKUP_FILESYSTEM_ARCHIVE_PATH - useful when pairing against an external backup system.
DB01_BACKUP_BLACKOUT_BEGIN Use HHMM notation to start a blackout period where no backups occur, e.g. 0420
DB01_BACKUP_BLACKOUT_END Use HHMM notation to set the end of the period where no backups occur, e.g. 0430
Specific Database Options
CouchDB
Variable Description Default _FILE
DB01_PORT CouchDB Port 5984 x
InfluxDB
Variable Description Default _FILE
DB01_PORT InfluxDB Port (Version 1.x: 8088, Version 2.x: 8086) x
DB01_INFLUX_VERSION Which major version of InfluxDB you are backing up from, 1 (1.x series) or 2 (2.x series; amd64 and aarch64/armv8 only for 2) 2

Your Organization will be mapped to DB_USER and your root token will need to be mapped to DB_PASS. You may use DB_NAME=ALL to back up the entire set of databases. For DB_HOST, use the syntax http(s)://db-name (see the sketch below).
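
A sketch for an InfluxDB 2.x job following the mapping above (organization, token, and hostname are placeholders):

docker run -d --name db-backup \
  -e DB01_TYPE=influx \
  -e DB01_INFLUX_VERSION=2 \
  -e DB01_HOST=http://influxdb \
  -e DB01_USER=my-org \
  -e DB01_PASS=my-root-token \
  -e DB01_NAME=ALL \
  docker.io/tiredofit/db-backup:latest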

MariaDB/MySQL
Variable Description Default _FILE
DB01_EXTRA_OPTS Pass extra arguments to the backup and database enumeration command, add them here e.g. --extra-command
DB01_EXTRA_BACKUP_OPTS Pass extra arguments to the backup command only, add them here e.g. --extra-command
DB01_EXTRA_ENUMERATION_OPTS Pass extra arguments to the database enumeration command only, add them here e.g. --extra-command
DB01_NAME Schema Name e.g. database, or ALL to backup all databases the user has access to. Backup multiple by separating with commas, e.g. db1,db2 x
DB01_NAME_EXCLUDE If using ALL - databases to exclude from the backup, separated by commas x
DB01_SPLIT_DB If using ALL - use this to split each database into its own file as opposed to one singular file FALSE
DB01_PORT MySQL / MariaDB Port 3306 x
DB01_MYSQL_EVENTS Backup Events TRUE
DB01_MYSQL_MAX_ALLOWED_PACKET Max allowed packet 512M
DB01_MYSQL_SINGLE_TRANSACTION Backup in a single transaction TRUE
DB01_MYSQL_STORED_PROCEDURES Backup stored procedures TRUE
DB01_MYSQL_ENABLE_TLS Enable TLS functionality FALSE
DB01_MYSQL_TLS_VERIFY (optional) If using TLS (by means of MYSQL_TLS_* variables) verify remote host FALSE
DB01_MYSQL_TLS_VERSION Which TLS versions (v1.1, v1.2, v1.3) to utilize TLSv1.1,TLSv1.2,TLSv1.3
DB01_MYSQL_TLS_CA_FILE Filename to load custom CA certificate for connecting via TLS /etc/ssl/cert.pem x
DB01_MYSQL_TLS_CERT_FILE Filename to load client certificate for connecting via TLS x
DB01_MYSQL_TLS_KEY_FILE Filename to load client key for connecting via TLS x
Microsoft SQL
Variable Description Default _FILE
DB01_PORT Microsoft SQL Port 1433 x
DB01_MSSQL_MODE Backup DATABASE or TRANSACTION logs DATABASE
MongoDB
Variable Description Default _FILE
DB01_AUTH (Optional) Authentication Database
DB01_PORT MongoDB Port 27017 x
DB01_MONGO_CUSTOM_URI If you wish to override the MongoDB connection string enter it here, e.g. mongodb+srv://username:[email protected] x
This environment variable will be parsed to populate the DB_NAME and DB_HOST variables to properly build your backup filenames.
You can override them by making your own entries
Postgresql
Variable Description Default _FILE
DB01_AUTH (Optional) Authentication Database
DB01_BACKUP_GLOBALS Backup Globals after backing up database (forces TRUE if _NAME=ALL) FALSE
DB01_EXTRA_OPTS Pass extra arguments to the backup and database enumeration command, add them here e.g. --extra-command
DB01_EXTRA_BACKUP_OPTS Pass extra arguments to the backup command only, add them here e.g. --extra-command
DB01_EXTRA_ENUMERATION_OPTS Pass extra arguments to the database enumeration command only, add them here e.g. --extra-command
DB01_NAME Schema Name e.g. database, or ALL to backup all databases the user has access to. Backup multiple by separating with commas, e.g. db1,db2 x
DB01_SPLIT_DB If using ALL - use this to split each database into its own file as opposed to one singular file FALSE
DB01_PORT PostgreSQL Port 5432 x
Redis
Variable Description Default _FILE
DB01_EXTRA_OPTS Pass extra arguments to the backup and database enumeration command, add them here e.g. --extra-command
DB01_EXTRA_BACKUP_OPTS Pass extra arguments to the backup command only, add them here e.g. --extra-command
DB01_PORT Redis Port 6379 x
SQLite
Variable Description Default _FILE
DB01_HOST Enter the full path to DB file e.g. /backup/db.sqlite3 x
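
Since SQLite is file based, the database must be mounted into the container. A sketch, assuming no credentials are required (paths are placeholders):

docker run -d --name db-backup \
  -v ./data/db.sqlite3:/backup/db.sqlite3 \
  -e DB01_TYPE=sqlite3 \
  -e DB01_HOST=/backup/db.sqlite3 \
  docker.io/tiredofit/db-backup:latest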
Specific Storage Options

Options that are related to the value of DB01_BACKUP_LOCATION

Filesystem

If DB01_BACKUP_LOCATION = FILESYSTEM then the following options are used:

Variable Description Default
DB01_CREATE_LATEST_SYMLINK Create a symbolic link pointing to last backup in this format: latest-(DB_TYPE)-(DB_NAME)-(DB_HOST) TRUE
DB01_FILESYSTEM_PATH Directory where the database dumps are kept. /backup
DB01_FILESYSTEM_PATH_PERMISSION Permissions to apply to backup directory 700
DB01_FILESYSTEM_ARCHIVE_PATH Optional Directory where the database dumps archives are kept ${DB01_FILESYSTEM_PATH}/archive/
DB01_FILESYSTEM_PERMISSION Directory and File permissions to apply to files. 600
S3

If DB01_BACKUP_LOCATION = S3 then the following options are used:

Parameter Description Default _FILE
DB01_S3_BUCKET S3 Bucket name e.g. mybucket x
DB01_S3_KEY_ID S3 Key ID (Optional) x
DB01_S3_KEY_SECRET S3 Key Secret (Optional) x
DB01_S3_PATH S3 Pathname to save to (must NOT end in a trailing slash e.g. 'backup') x
DB01_S3_REGION Define region in which bucket is defined. Example: ap-northeast-2 x
DB01_S3_HOST Hostname (and port) of S3-compatible service, e.g. minio:8080. Defaults to AWS. x
DB01_S3_PROTOCOL Protocol to connect to DB01_S3_HOST. Either http or https. Defaults to https. https x
DB01_S3_EXTRA_OPTS Add any extra options to the end of the aws-cli process execution x
DB01_S3_CERT_CA_FILE Map a volume and point to your custom CA Bundle for verification e.g. /certs/bundle.pem x
OR
DB01_S3_CERT_SKIP_VERIFY Skip verifying self signed certificates when connecting TRUE

When DB01_S3_KEY_ID and/or DB01_S3_KEY_SECRET is not set, the container will try to use an assigned IAM role (if any) for uploading the backup files to the S3 bucket.

Azure

If DB01_BACKUP_LOCATION = blobxfer then the following options are used:

Parameter Description Default _FILE
DB01_BLOBXFER_STORAGE_ACCOUNT Microsoft Azure Cloud storage account name. x
DB01_BLOBXFER_STORAGE_ACCOUNT_KEY Microsoft Azure Cloud storage account key. x
DB01_BLOBXFER_REMOTE_PATH Remote Azure path /docker-db-backup x
DB01_BLOBXFER_REMOTE_MODE Azure Storage mode e.g. auto, file, append, block or page auto x
  • When DB01_BLOBXFER_REMOTE_MODE is set to auto it will use blob containers by default. If the DB01_BLOBXFER_REMOTE_PATH path does not exist, a blob container with that name will be created.

This service uploads files from the backup directory DB01_BACKUP_FILESYSTEM_PATH. If a cleanup configuration is defined in DB01_CLEANUP_TIME, the remote directory on Azure storage will also be cleaned automatically.

Hooks
Path Options
Parameter Description Default
DB01_SCRIPT_LOCATION_PRE Location on filesystem inside container to execute bash scripts pre backup /assets/scripts/pre/
DB01_SCRIPT_LOCATION_POST Location on filesystem inside container to execute bash scripts post backup /assets/scripts/post/
DB01_PRE_SCRIPT Fill this variable with a command to execute before backing up
DB01_POST_SCRIPT Fill this variable with a command to execute after backing up
Pre Backup

If you want to execute a custom script before a backup starts, you can drop bash scripts with the extension .sh in the location defined in DB01_SCRIPT_LOCATION_PRE. See the following example:

$ cat pre-script.sh
#!/bin/bash

# #### Example Pre Script
# #### $1=DB01_TYPE (Type of Backup)
# #### $2=DB01_HOST (Backup Host)
# #### $3=DB01_NAME (Name of Database backed up)
# #### $4=BACKUP START TIME (Seconds since Epoch)
# #### $5=BACKUP FILENAME (Filename)

echo "${1} Backup Starting on ${2} for ${3} at ${4}. Filename: ${5}"
## script DB01_TYPE DB01_HOST DB01_NAME STARTEPOCH BACKUP_FILENAME
${f} "${backup_job_db_type}" "${backup_job_db_host}" "${backup_job_db_name}" "${backup_routines_start_time}" "${backup_job_filename}"

Outputs the following on the console:

mysql Backup Starting on example-db for example at 1647370800. Filename: mysql_example_example-db_20220315-000000.sql.bz2

Post backup

If you want to execute a custom script at the end of a backup, you can drop bash scripts with the extension .sh in the location defined in DB01_SCRIPT_LOCATION_POST. To support legacy users, /assets/custom-scripts is also scanned and executed. See the following example:

$ cat post-script.sh
#!/bin/bash

# #### Example Post Script
# #### $1=EXIT_CODE (After running backup routine)
# #### $2=DB_TYPE (Type of Backup)
# #### $3=DB_HOST (Backup Host)
# #### $4=DB_NAME (Name of Database backed up)
# #### $5=BACKUP START TIME (Seconds since Epoch)
# #### $6=BACKUP FINISH TIME (Seconds since Epoch)
# #### $7=BACKUP TOTAL TIME (Seconds between Start and Finish)
# #### $8=BACKUP FILENAME (Filename)
# #### $9=BACKUP FILESIZE
# #### $10=HASH (If CHECKSUM enabled)
# #### $11=MOVE_EXIT_CODE

echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ending ${6} for a duration of ${7} seconds. Filename: ${8} Size: ${9} bytes MD5: ${10}"
  ## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE MOVE_EXIT_CODE
  ${f} "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${backup_routines_start_time}" "${backup_routines_finish_time}" "${backup_routines_total_time}" "${backup_job_filename}" "${filesize}" "${checksum_value}" "${move_exit_code}"

Outputs the following on the console:

0 mysql Backup Completed on example-db for example on 1647370800 ending 1647370920 for a duration of 120 seconds. Filename: mysql_example_example-db_20220315-000000.sql.bz2 Size: 7795 bytes Hash: 952fbaafa30437494fdf3989a662cd40 0

If you wish to change the size value from bytes to megabytes, set the environment variable DB01_SIZE_VALUE=megabytes

You must make your scripts executable; an internal check will otherwise skip running them. If for some reason your filesystem or host is not detecting the executable bit correctly, use the environment variable DB01_POST_SCRIPT_SKIP_X_VERIFY=TRUE to bypass the check.

Notifications

This image can send notifications via a handful of services when a backup job fails. This is a global option and cannot be set individually per backup job.

Parameter Description Default
ENABLE_NOTIFICATIONS Enable Notifications FALSE
NOTIFICATION_TYPE CUSTOM EMAIL MATRIX MATTERMOST ROCKETCHAT - Separate multiple by commas
Custom Notifications

The following positional arguments are sent to the custom script; use them how you wish (a sketch follows the table below):

$1 unix timestamp
$2 logfile
$3 errorcode
$4 subject
$5 body/error message
Parameter Description Default
NOTIFICATION_CUSTOM_SCRIPT Path and name of the custom script to execute for notifications.
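
A sketch of a custom notification script that forwards failures to a webhook (the URL is a placeholder; the positional arguments are those listed above):

#!/bin/bash
# $1=timestamp $2=logfile $3=errorcode $4=subject $5=body/error message
curl -X POST -H "Content-Type: application/json" \
     -d "{\"text\": \"Backup failure ${3}: ${4} - ${5}\"}" \
     https://hooks.example.com/backup-alerts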
Email Notifications

See the base image listed above for more mail environment variables.

Parameter Description Default _FILE
MAIL_FROM What email address to send mail from for errors
MAIL_TO What email address to send mail to for errors. Send to multiple by separating with commas.
SMTP_HOST What SMTP server to use for sending mail x
SMTP_PORT What SMTP port to use for sending mail x
Matrix Notifications

Fetch a MATRIX_ACCESS_TOKEN:

curl -XPOST -d '{"type":"m.login.password", "user":"myuserid", "password":"mypass"}' "https://matrix.org/_matrix/client/r0/login"

Copy the access_token from the JSON response, which will look something like this:

{"access_token":"MDAxO...blahblah","refresh_token":"MDAxO...blahblah","home_server":"matrix.org","user_id":"@myuserid:matrix.org"}
Parameter Description Default _FILE
MATRIX_HOST URL (https://matrix.example.com) of Matrix Homeserver x
MATRIX_ROOM Room ID eg !abcdef:example.com to send to. Send to multiple by separating with commas. x
MATRIX_ACCESS_TOKEN Access token of user authorized to send to room x
Mattermost Notifications
Parameter Description Default _FILE
MATTERMOST_WEBHOOK_URL Full URL to send webhook notifications to x
MATTERMOST_RECIPIENT Channel or User to send Webhook notifications to. Send to multiple by separating with commas. x
MATTERMOST_USERNAME Username to send as eg tiredofit x
Rocketchat Notifications
Parameter Description Default _FILE
ROCKETCHAT_WEBHOOK_URL Full URL to send webhook notifications to x
ROCKETCHAT_RECIPIENT Channel or User to send Webhook notifications to. Send to multiple by separating with commas. x
ROCKETCHAT_USERNAME Username to send as eg tiredofit x

Maintenance

Shell Access

For debugging and maintenance purposes you may want to access the container's shell.

docker exec -it (whatever your container name is) bash

Manual Backups

Manual backups can be performed by entering the container and typing backup-now. This will execute all the backup tasks that are scheduled by means of the DBXX_ variables. Alternatively, if you want to execute a single job on its own, you can type backup01-now (or whatever your job number is). There is no concurrency; jobs will be executed sequentially.

  • Recently there was a request to have the container work with Kubernetes cron scheduling. This can theoretically be accomplished by setting the container to MODE=MANUAL and MANUAL_RUN_FOREVER=FALSE. You would also want to disable a few features from the upstream base images, specifically CONTAINER_ENABLE_SCHEDULING and CONTAINER_ENABLE_MONITORING. This should allow the container to start, execute a backup, and then exit cleanly (see the sketch below). An alternative way of running the script is to execute /etc/services.available/10-db-backup/run.
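
A sketch of such a one-shot invocation (the job variables are placeholders):

docker run --rm \
  -v ./backups:/backup \
  -e MODE=MANUAL \
  -e MANUAL_RUN_FOREVER=FALSE \
  -e CONTAINER_ENABLE_SCHEDULING=FALSE \
  -e CONTAINER_ENABLE_MONITORING=FALSE \
  -e DB01_TYPE=mysql -e DB01_HOST=mariadb -e DB01_NAME=app -e DB01_USER=root -e DB01_PASS=password \
  docker.io/tiredofit/db-backup:latest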

Restoring Databases

Entering the container and executing restore will launch a menu-based script to restore your backups - MariaDB, Postgres, and Mongo are supported.

You will be presented with a series of menus allowing you to choose:

  • What file to restore
  • What type of DB Backup
  • What Host to restore to
  • What Database Name to restore to
  • What Database User to use
  • What Database Password to use
  • What Database Port to use

The image will try to auto-detect the backup type, hostname, and database name based on the filename. The image will also allow you to use the environment variables or Docker secrets that were used to back up the databases.

The script can also be executed skipping the interactive mode by using the following syntax:

`restore <filename> <db_type> <db_hostname> <db_name> <db_user> <db_pass> <db_port>`

If you only enter some of the arguments you will be prompted to fill them in.
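
For example, restoring the backup produced in the hook examples above (hostname, credentials, and port are placeholders):

restore /backup/mysql_example_example-db_20220315-000000.sql.bz2 mysql example-db example root password 3306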

Support

These images were built to serve a specific need in a production environment and have gradually had more functionality added based on requests from the community.

Usage

  • The Discussions board is a great place for working with the community on tips and tricks of using this image.
  • Sponsor me for personalized support

Bugfixes

  • Please submit a Bug Report if something isn't working as expected. I'll do my best to issue a fix in short order.

Feature Requests

  • Feel free to submit a feature request; however, there is no guarantee that it will be added or on what timeline.
  • Sponsor me regarding development of features.

Updates

  • Best effort to track upstream changes; higher priority if I am actively using the image in a production environment.
  • Sponsor me for up to date releases.

License

MIT. See LICENSE for more details.


docker-db-backup's Issues

Script is inconsistent for mysql and pgsql

The following two functions work differently and I think we should make sure they work the same:

backup_mysql() {
  if var_true "$SPLIT_DB" ; then
    DATABASES=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema)
    for db in $DATABASES; do
      if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
        print_notice "Dumping MariaDB database: $db"
        target=mysql_${db}_${dbhost}_${now}.sql
        compression
        mysqldump --max-allowed-packet=512M -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} --databases $db | $dumpoutput > ${tmpdir}/${target}
        exit_code=$?
        generate_md5
        move_backup
      fi
    done
  else
    compression
    mysqldump --max-allowed-packet=512M -A -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
    exit_code=$?
    generate_md5
    move_backup
  fi
}

backup_pgsql() {
  if var_true $SPLIT_DB ; then
    export PGPASSWORD=${dbpass}
    authdb=${DB_USER}
    [ -n "${DB_NAME}" ] && authdb=${DB_NAME}
    DATABASES=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;')
    for db in $DATABASES; do
      print_info "Dumping database: $db"
      target=pgsql_${db}_${dbhost}_${now}.sql
      compression
      pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
      exit_code=$?
      generate_md5
      move_backup
    done
  else
    export PGPASSWORD=${dbpass}
    compression
    pg_dump -h ${dbhost} -U ${dbuser} -p ${dbport} ${dbname} ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
    exit_code=$?
    generate_md5
    move_backup
  fi
}

Case 1:

  • In mysql setting DB_NAME has no effect
  • In pgsql you need to set DB_NAME to make the export work (*)

Case 2:

  • In mysql you can export a file with all DBs when setting SPLIT_DB: FALSE
  • In pgsql it will only export the DB set in DB_NAME when setting SPLIT_DB: FALSE

Case 3:

  • In mysql you can export each DB in its own file when setting SPLIT_DB: TRUE
  • In pgsql you can export each DB in its own file when setting SPLIT_DB: TRUE - and DB_NAME set to the auth DB (*)

(*) In the case of SPLIT_DB: FALSE, or if POSTGRES_DB was manually set in the postgres image and is not equal to the user name POSTGRES_USER. When POSTGRES_DB is not specified, the value of POSTGRES_USER will be used.


My proposal is to introduce 3 backup options so that it works consistently:

  • Export single database
  • Export all DBs in single file
  • Export all DBs in multiple files

Alternatively, we should add a section to the README explaining the difference in functionality so that it is predictable.

Can't get this to work

Hi, I have the following docker-compose:

  mariadb_backup:
    image: tiredofit/db-backup
    restart: unless-stopped
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./mariadb/backups:/backup
    links:
      - mariadb
    environment:
      - DB_TYPE=mariadb
      - DB_HOST=mariadb
      - DB_NAME=homeassistant
      - DB_USER=backup
      - DB_PASS="secret"
      - DB_DUMP_FREQ=1440
      - DB_DUMP_BEGIN=1700
      - DB_CLEANUP_TIME=8640
      - MD5=TRUE
      - COMPRESSION=XZ
      - SPLIT_DB=TRUE

But I can't get it to work; when it runs I get the following error:

** [db-backup] Dumping database: homeassistant,
./run: line 206: =xz: command not found,
mv: can't rename '/tmp/backups/mysql_homeassistant_mariadb_20181005-171600.sql.xz': No such file or directory,

it creates an md5 file but no actual backup file!

Way to Notify on Backup Failure?

Hey there, I'm looking to use this container, but I would like a way to run a simple curl command to notify my webhook when a backup fails. Is there a way to detect a backup failure in the post-script?

Edit: I see that you have SMTP setup in the base image. Is there a way to just have it send an Email when the backup fails? That would also work, while the webhook would still be preferable.

--Feature Request-- support docker secrets

I use the mariadb root user to back up (as I always want all my databases backed up). It would be really nice if you could provide the password inside a docker secret to add some security =). I know you have a lot of amazing containers on github =). So if you need some help, I'll do what I can to support you =). Thanks for all this amazing contribution to the community =). Your container is the ONLY solution for docker swarm in combination with rancheros to back up multiple databases! really cool!

Backup compression

The backup and compression process seems not to be optimal.

We have the following problem:

  • Original MongoDB database size is 9+ GB
  • 50+ GB of .bson files are created
  • A 50+ GB .tar is created
  • A Gzip / Bzip / Xzip file of 15+ GB is generated
  • Once completed, the process deletes the temporary files

In our case, files taking up more than 115 GB (50 + 50 + 15 GB) are created temporarily.

Is there a way to optimize this? For example:

  • Skip the intermediate step of generating the .tar file (directly generating .tar.gz)
  • Directly generate the compressed backup with mongodump --gzip or mongodump | gzip (see the sketch below)
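
For reference, mongodump can compress on the fly with standard flags (a stand-alone sketch; host and paths are placeholders):

mongodump --host mongodb --gzip --archive=/backup/mongo_20220315.archive.gz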

Cannot backup PostgreSQL 13

[INFO] ** [db-backup] Dumping database: postgres
[NOTICE] ** [db-backup] Compressing backup with gzip
pg_dump: error: server version: 13.0 (Debian 13.0-1.pgdg100+1); pg_dump version: 12.4
pg_dump: error: aborting because of server version mismatch
[NOTICE] ** [db-backup] Generating MD5 for pgsql_postgres_10.0.0.10_20201008-113031.sql.gz
[NOTICE] ** [db-backup] Backup of pgsql_postgres_10.0.0.10_20201008-113031.sql.gz created with the size of 20 bytes

Segfault

Does this still work? I'm seeing this in my logs:

TARGET=db_phpipam_mariadb_20200303-064449.sql
+ '[' FALSE = TRUE ']'
+ mysqldump --max-allowed-packet=512M -A -h mariadb -uroot -p
./run: line 290: 1281 Segmentation fault mysqldump --max-allowed-packet=512M -A -h $DBSERVER -u$DBUSER -p$DBPASS > ${TMPDIR}/${TARGET}
+ '[' FALSE = TRUE ']'

neither DB_USER nor DB_USER_FILE are set but are required

On version 1.21.3 I got this error when I ran a backup of a MongoDB instance without authentication.

bash-5.0# backup-now
** Performing Manual Backup
[ERROR] ** [db-backup] error: neither DB_USER nor DB_USER_FILE are set but are required

DB_DUMP_BEGIN not working for Absolute or Relative time.

DB_DUMP_BEGIN not working for Absolute or Relative time. However, it is working when it is set to DB_DUMP_BEGIN: +0. Is there any way to troubleshoot this?

version: '3.1'

services:

Restore options:

Log into mysql container and change to backup directory

gunzip < db-backup.sql.gz | mysql -p -u cicgate_main

Relative +MM, i.e. how many minutes after starting the container or Absolute HHMM, e.g. 2330 or 0415

  my-mysql-backup:
    image: tiredofit/db-backup
    hostname: my-mysql-backup
    volumes:
      - ./mysql/backups:/usr/src/backups
    environment:
      DB_TYPE: mysql
      DB_HOST: my-mysql
      DB_NAME: cicgate_ctmp
      DB_USER: cicgate_main
      DB_PASS: xxxxx
      DB_DUMP_FREQ: 1440
      DB_DUMP_BEGIN: 1615
      DB_CLEANUP_TIME: 1440
      DB_DUMP_TARGET: /usr/src/backups
      DB_DUMP_DEBUG: "true"
      MD5: "false"
    depends_on:
      - my-mysql
    deploy:
      replicas: 1

  my-mysql:
    image: mysql:5.7.22
    hostname: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=myxxx
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=myrootxxx
    volumes:
      - my_mysql_data:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/conf.d/mysite.cnf
      - ./mysql/data:/docker-entrypoint-initdb.d
      - ./mysql/backups:/usr/src/backups
    secrets:
      - mysql_config
    deploy:
      replicas: 1

Azure support

Good day

I would just like to find out if support for Azure is on the horizon?

Cheers

s3 glacier

First of all, thanks for such a great tool. Local backups are working great!

I configured a backup job to backup to s3 glacier. The backups are created in the /tmp/backups folder as they should be, but the upload to s3 fails with error:

{"code":"MissingParameterValueException","message":"Required parameter missing: API version","type":"Client"}

According to: https://docs.aws.amazon.com/amazonglacier/latest/dev/api-common-request-headers.html

We are missing header 'x-amz-glacier-version'. The current api version is: 2012-06-01

Timezone-setting missing

Hi =) I've discovered that the container is not able to set a timezone, so you can't plan backups properly. A TZ variable would be really nice (as this would also work in swarm) =)

s6-maximumtime: warning: child process crashed

It seems that if the database is bigger and consumes more time, it crashes due to the s6 maximumtime limit.

When I manually run backup-now everything is fine, but the automatic cron runs won't work. In my logs I find something like:

[INFO] ** [db-backup] Backup routines Initialized on Thu Aug 27 03:09:30 CEST 2020
[NOTICE] ** [db-backup] Compressing backup with gzip
[cont-finish.d] executing container finish scripts...
[cont-finish.d] 10-db-backup: executing...
s6-maximumtime: warning: child process crashed
[cont-finish.d] 10-db-backup: exited 111.
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

I was reading something about this on https://github.com/just-containers/s6-overlay:
S6_KILL_FINISH_MAXTIME (default = 5000): The maximum time (in milliseconds) a script in /etc/cont-finish.d could take before sending a KILL signal to it. Take into account that this parameter will be used per each script execution, it's not a max time for the whole set of scripts.

Not sure if it's related, but either way it shouldn't crash, right?

Unable to run backup

Using the unRaid Docker container (both the CA app store one and a custom one created with the latest release from Docker Hub), I get the following error when attempting to run 'backup-now':

/usr/local/bin/backup-now: line 4: /etc/s6/services/10-db-backup/run: No such file or directory

Is this a configuration error on my part? I am attempting to back up a mongo database from an external host.

Postgres Backup fails with user "root"

I have this docker-compose file:

version: '3.1'

services:
  # ...SOME OTHER SERVICES...

  pg:
    image: postgres:11
    container_name: pg-my-service-prod
    restart: always
    volumes:
      - $HOME/data/postgres-my-service-prod:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=myservicePROD
    ports:
      - 127.0.0.1:5432:5432
    networks:
      - my-service-prod

  db-backup:
    image: tiredofit/db-backup
    restart: always
    container_name: db-backup
    depends_on:
      - pg
    volumes:
      - $HOME/data/backups:/backup
    environment:
      - DB_TYPE=pgsql
      - DB_HOST=pg
      - DB_NAME=myservicePROD
      - DB_PORT=5432
      - DB_USER="root"
      - DB_PASS="root"
      # - DB_DUMP_BEGIN=0415
      - DB_DUMP_BEGIN=+01
      - MD5=TRUE
      - SPLIT_DB=TRUE
      - DEBUG_MODE=TRUE
    networks:
      - my-service-prod

  # ...SOME OTHER SERVICES...

networks:
    my-service-prod:

The backup fails with this message:

db-backup     | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
db-backup     | [s6-init] ensuring user provided files have correct perms...exited 0.
db-backup     | [fix-attrs.d] applying ownership & permissions fixes...
db-backup     | [fix-attrs.d] 01-s6: applying...
db-backup     | [fix-attrs.d] 01-s6: exited 0.
db-backup     | [fix-attrs.d] 02-zabbix: applying...
db-backup     | [fix-attrs.d] 02-zabbix: exited 0.
db-backup     | [fix-attrs.d] 03-logrotate: applying...
db-backup     | [fix-attrs.d] 03-logrotate: exited 0.
db-backup     | [fix-attrs.d] done.
db-backup     | [cont-init.d] executing container initialization scripts...
db-backup     | [cont-init.d] 01-permissions: executing...
db-backup     | + DEBUG_PERMISSIONS=FALSE
db-backup     | + ENABLE_PERMISSIONS=TRUE
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + varenvusername=(`env | grep USER_ | awk -F= '{print tolower($1)}' | awk -F_ '{print $2}'`)
db-backup     | ++ grep USER_
db-backup     | ++ awk -F_ '{print $2}'
db-backup     | ++ awk -F= '{print tolower($1)}'
db-backup     | ++ env
db-backup     | + varenvuid=(`env | grep USER_ | awk -F= '{print tolower($2)}'`)
db-backup     | ++ env
db-backup     | ++ awk -F= '{print tolower($2)}'
db-backup     | ++ grep USER_
db-backup     | ++ echo ''
db-backup     | ++ sed 's/ /\\|/g'
db-backup     | + strusers=
db-backup     | + [[ ! -z '' ]]
db-backup     | + '[' FALSE = TRUE ']'
db-backup     | + '[' FALSE = true ']'
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + echo '**** [permissions] [debug] Users (varenvusername) from Docker env are: '
db-backup     | + echo '**** [permissions] [debug] UIDs (varenvuid) from Docker env are: '
db-backup     | + echo '**** [permissions] [debug] The string (strusers) used to grep the users is: '
db-backup     | + echo '**** [permissions] [debug] Users (varpassuser) from /etc/passwd are: '
db-backup     | + echo '**** [permissions] [debug] UIDs (varpassuserid) from /etc/passwd are: '
db-backup     | + counter=0
db-backup     | + '[' 0 -gt 0 ']'
db-backup     | + counter=0
db-backup     | + varenvgroupname=(`env | grep ^GROUP_ | grep -v GROUP_ADD_  | awk -F= '{print tolower($1)}' | awk -F_ '{print $2}'`)
db-backup     | **** [permissions] [debug] Users (varenvusername) from Docker env are:
db-backup     | **** [permissions] [debug] UIDs (varenvuid) from Docker env are:
db-backup     | **** [permissions] [debug] The string (strusers) used to grep the users is:
db-backup     | **** [permissions] [debug] Users (varpassuser) from /etc/passwd are:
db-backup     | **** [permissions] [debug] UIDs (varpassuserid) from /etc/passwd are:
db-backup     | ++ grep '^GROUP_'
db-backup     | ++ awk -F= '{print tolower($1)}'
db-backup     | ++ grep -v GROUP_ADD_
db-backup     | ++ env
db-backup     | ++ awk -F_ '{print $2}'
db-backup     | + varenvgid=(`env | grep ^GROUP_ | grep -v GROUP_ADD_ | awk -F= '{print tolower($2)}'`)
db-backup     | ++ env
db-backup     | ++ ++ ++ grep '^GROUP_'
db-backup     | grep -v GROUP_ADD_
db-backup     | awk -F= '{print tolower($2)}'
db-backup     | ++ sed 's/ /\\|/g'
db-backup     | ++ echo ''
db-backup     | + strgroups=
db-backup     | + [[ ! -z '' ]]
db-backup     | + '[' FALSE = TRUE ']'
db-backup     | + '[' FALSE = true ']'
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + echo '**** [permissions] [debug] Group names (varenvgroupname) from Docker environment settings are: '
db-backup     | + echo '**** [permissions] [debug] GIDs (grvarenvgid) from Docker environment settings are: '
db-backup     | + echo '**** [permissions] [debug] The string (strgroup) used to grep the groups is: '
db-backup     | + echo '**** [permissions] [debug] Group names (vargroupname) from /etc/group are: '
db-backup     | + echo '**** [permissions] [debug] GIDs (vargroupid) from /etc/group are: '
db-backup     | + '[' 0 -gt 0 ']'
db-backup     | + counter=0
db-backup     | + varenvuser2add=(`env | grep ^GROUP_ADD_ | awk -F= '{print $1}' | awk -F_ '{print tolower($3)}'`)
db-backup     | **** [permissions] [debug] Group names (varenvgroupname) from Docker environment settings are:
db-backup     | **** [permissions] [debug] GIDs (grvarenvgid) from Docker environment settings are:
db-backup     | **** [permissions] [debug] The string (strgroup) used to grep the groups is:
db-backup     | **** [permissions] [debug] Group names (vargroupname) from /etc/group are:
db-backup     | **** [permissions] [debug] GIDs (vargroupid) from /etc/group are:
db-backup     | ++ env
db-backup     | ++ grep '^GROUP_ADD_'
db-backup     | ++ awk -F_ '{print tolower($3)}'
db-backup     | ++ awk -F= '{print $1}'
db-backup     | + varenvdestgroup=(`env | grep ^GROUP_ADD_ | awk -F= '{print tolower($2)}'`)
db-backup     | ++ env
db-backup     | ++ grep ++ awk -F= '{print tolower($2)}'
db-backup     | '^GROUP_ADD_'
db-backup     | + '[' FALSE = TRUE ']'
db-backup     | + '[' FALSE = true ']'
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + echo '**** [permissions] [debug] Users (varenvuser2add) to add to groups are: '
db-backup     | + echo '**** [permissions] [debug] Groups (varenvdestgroup) to add users are: '
db-backup     | **** [permissions] [debug] Users (varenvuser2add) to add to groups are:
db-backup     | **** [permissions] [debug] Groups (varenvdestgroup) to add users are:
db-backup     | + mkdir -p /tmp/state
db-backup     | ++ basename /var/run/s6/etc/cont-init.d/01-permissions
db-backup     | + touch /tmp/state/01-permissions-init
db-backup     | [cont-init.d] 01-permissions: exited 0.
db-backup     | [cont-init.d] 02-zabbix: executing...
db-backup     | [cont-init.d] 02-zabbix: exited 0.
db-backup     | [cont-init.d] 03-cron: executing...
db-backup     | **** [cron] Disabling Cron
db-backup     | [cont-init.d] 03-cron: exited 0.
db-backup     | [cont-init.d] 04-smtp: executing...
db-backup     | **** [smtp] [debug] SMTP Mailcatcher Enabled at Port 1025, Visit http://127.0.0.1:8025 for Web Interface
db-backup     | **** [smtp] Disabling SMTP Features
db-backup     | [cont-init.d] 04-smtp: exited 0.
db-backup     | [cont-init.d] 99-container-init: executing...
db-backup     | [cont-init.d] 99-container-init: exited 0.
db-backup     | [cont-init.d] done.
db-backup     | [services.d] starting services
db-backup     | [services.d] done.
db-backup     |
db-backup     | ** [zabbix] Starting Zabbix Agent
db-backup     | 2019/06/30 12:44:43 Using in-memory storage
db-backup     | 2019/06/30 12:44:43 [SMTP] Binding to address: 0.0.0.0:1025
db-backup     | [HTTP] Binding to address: 0.0.0.0:8025
db-backup     | 2019/06/30 12:44:43 Serving under http://0.0.0.0:8025/
db-backup     | Creating API v1 with WebPath:
db-backup     | Creating API v2 with WebPath:AnalyticsServer-0.0.1-SNAPSHOT.jar started by root in /opt/app)org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$42f4199] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)environments was not found on the java.library.path: [/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib]org.hibernate.type.UUIDBinaryType@158f4cfe
db-backup     | + '[' '!' -n pgsql ']'
db-backup     | + '[' '!' -n pg ']'
db-backup     | + COMPRESSION=GZ
db-backup     | + PARALLEL_COMPRESSION=TRUE
db-backup     | + DB_DUMP_FREQ=1440
db-backup     | + DB_DUMP_BEGIN=+01
db-backup     | + DB_DUMP_TARGET=/backup
db-backup     | + DBHOST=pg
db-backup     | + DBNAME=myservicePROD
db-backup     | + DBPASS='"root"'
db-backup     | + DBUSER='"root"'
db-backup     | + DBTYPE=pgsql
db-backup     | + MD5=TRUE
db-backup     | + SPLIT_DB=TRUE
db-backup     | + TMPDIR=/tmp/backups
db-backup     | + '[' '' = NOW ']'
db-backup     | + '[' TRUE = 'TRUE ' ']'
db-backup     | + BZIP=bzip2
db-backup     | + GZIP=gzip
db-backup     | + XZIP=xz
db-backup     | + case "$DBTYPE" in
db-backup     | + DBTYPE=pgsql
db-backup     | + DBPORT=5432
db-backup     | + [[ -n "root" ]]
db-backup     | + POSTGRES_PASS_STR='PGPASSWORD="root"'
db-backup     | ++ date
db-backup     | + echo '** [db-backup] Initialized at at Sun' Jun 30 12:44:51 PDT 2019
db-backup     | ** [db-backup] Initialized at at Sun Jun 30 12:44:51 PDT 2019
db-backup     | ++ date +%s
db-backup     | + current_time=1561923891
db-backup     | ++ date +%Y%m%d
db-backup     | + today=20190630
db-backup     | + [[ +01 =~ ^\+(.*)$ ]]
db-backup     | + waittime=60
db-backup     | + sleep 60
db-backup     | [APIv1] KEEPALIVE /api/v1/events
db-backup     | + true
db-backup     | + mkdir -p /tmp/backups
db-backup     | ++ date +%Y%m%d-%H%M%S
db-backup     | + now=20190630-124551
db-backup     | + TARGET=pgsql_myservicePROD_pg_20190630-124551.sql
db-backup     | + case "$DBTYPE" in
db-backup     | + check_availability
db-backup     | + case "$DBTYPE" in
db-backup     | + COUNTER=0
db-backup     | + export 'PGPASSWORD="root"'
db-backup     | + PGPASSWORD='"root"'
db-backup     | + pg_isready --dbname=myservicePROD --host=pg --port=5432 '--username="root"' -q
db-backup     | + backup_pgsql
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + export 'PGPASSWORD="root"'
db-backup     | + PGPASSWORD='"root"'
db-backup     | ++ psql -h pg -U '"root"' -p 5432 -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;'
db-backup     | psql: FATAL:  password authentication failed for user ""root""
db-backup     | + DATABASES=
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | ++ stat -c%s /backup/pgsql_myservicePROD_pg_20190630-124551.sql
db-backup     | stat: can't stat '/backup/pgsql_myservicePROD_pg_20190630-124551.sql': No such file or directory
db-backup     | + zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o
db-backup     | zabbix_sender [759]: option requires an argument -- o
db-backup     | usage:
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host] [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] -k key -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] [-T] [-r] -i input-file
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host
db-backup     |                 --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host]
db-backup     |                 --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host
db-backup     |                 --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file -k key -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host]
db-backup     |                 --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file [-T] [-r] -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file -k key -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file [-T] [-r] -i input-file
db-backup     |   zabbix_sender -h
db-backup     |   zabbix_sender -V
db-backup     | ++ date -r /backup/pgsql_myservicePROD_pg_20190630-124551.sql +%s
db-backup     | date: can't stat '/backup/pgsql_myservicePROD_pg_20190630-124551.sql': No such file or directory
db-backup     | + zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o
db-backup     | zabbix_sender [761]: option requires an argument -- o
db-backup     | usage:
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host] [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] -k key -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] [-T] [-r] -i input-file
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host
db-backup     |                 --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host]
db-backup     |                 --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host
db-backup     |                 --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file -k key -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host]
db-backup     |                 --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file [-T] [-r] -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file -k key -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file [-T] [-r] -i input-file
db-backup     |   zabbix_sender -h
db-backup     |   zabbix_sender -V
db-backup     | + [[ -n '' ]]
db-backup     | + '[' '' = TRUE ']'
db-backup     | + sleep 86400

It looks like the DB name is ignored in the command generation phase, so it tries to connect to the root database. I tried with ', with ", and also with nothing surrounding the DB name, user, and password.
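As a side note, the trace above shows the quotes surviving into the values themselves (DBPASS='"root"', and psql rejecting user ""root""). A minimal sketch of why, assuming nothing beyond ordinary shell semantics: quote characters inside an environment value are literal data, not shell quoting.

DB_PASS='"root"'                 # the password is now  "root"  - six characters
echo "password is: ${DB_PASS}"   # -> password is: "root"
DB_PASS=root                     # the bare value is what the server expects
echo "password is: ${DB_PASS}"   # -> password is: root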

MySQL backup may not work with non-root user

Summary

MySQL backup may not work with non-root users in both manual and scheduled modes. Error message:

$ sudo docker exec -i backup backup-now
** Performing Manual Backup
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (0 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (5 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (10 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (15 seconds so far)
...

Steps to reproduce

Sample docker-compose file

version: '3.7'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
      - MYSQL_DATABASE=test
    container_name: MySQL

  backup:
    image: tiredofit/db-backup:2.6.1
    environment:
      - DB_TYPE=mysql
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_USER=test
      - DB_PASS=test
      - DB_NAME=test
      - DB_DUMP_FREQ=1440
      - DB_DUMP_BEGIN=0000
      - DB_CLEANUP_TIME=43200
      - SPLIT_DB=FALSE
    container_name: backup

What is the expected correct behavior?

Backup should work.

Relevant logs and/or screenshots

$ sudo docker exec -i backup backup-now
** Performing Manual Backup
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (0 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (5 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (10 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (15 seconds so far)
bash-5.1# mysql -u test -ptest -h mysql -e "SELECT COUNT(*) FROM information_schema.FILES;"
ERROR 1227 (42000) at line 1: Access denied; you need (at least one of) the PROCESS privilege(s) for this operation

Environment

  • Image version / tag: 2.6.1
  • Host OS: Ubuntu

Possible fixes

Use mysqlshow to test the database connection.
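A minimal sketch of that check, assuming the DB_* variables from the compose file above (mysqlshow only needs to list databases, whereas querying information_schema.FILES requires the PROCESS privilege):

# Probe connectivity with mysqlshow instead of querying information_schema.FILES
while ! mysqlshow -h "${DB_HOST}" -P "${DB_PORT}" -u "${DB_USER}" -p"${DB_PASS}" >/dev/null 2>&1 ; do
    echo "[WARN] ** [db-backup] MySQL/MariaDB Server '${DB_HOST}' is not accessible, retrying.."
    sleep 5
done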

MD5 not working with influxdb backups

Summary

The influxdb backup sets $target to be a directory. md5sum doesn't work on directories, only on files.

generate_md5() {
  if var_true "$MD5" ; then
    print_notice "Generating MD5 for ${target}"
    cd $tmpdir
    md5sum "${target}" > "${target}".md5
    MD5VALUE=$(md5sum "${target}" | awk '{ print $1}')
  fi
}

Steps to reproduce

Back up any influxdb and enable md5

What is the expected correct behavior?

md5 generated for each file in the backup set

Relevant logs and/or screenshots

2021/05/14 11:28:50 /tmp/backups/influx_varken_192.168.111.11_20210514-105927/20210514T152841Z.s3140.tar.gz
2021/05/14 11:28:50 /tmp/backups/influx_varken_192.168.111.11_20210514-105927/20210514T152841Z.manifest
[NOTICE] ** [db-backup] Generating MD5 for influx_varken_192.168.111.11_20210514-105927
md5sum: can't read 'influx_varken_192.168.111.11_20210514-105927': Is a directory
md5sum: can't read 'influx_varken_192.168.111.11_20210514-105927': Is a directory

Environment

Unraid running tiredofit/db-backup:latest

Possible fixes

have a different md5 section in the run script for influxdb (and any other db's that have multi-file backup sets)
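One possible shape for that fix, sketched against the generate_md5() snippet above; var_true and print_notice are the image's own helpers, while the directory branch is my assumption about how multi-file backup sets could be handled:

generate_md5() {
  if var_true "$MD5" ; then
    print_notice "Generating MD5 for ${target}"
    cd "${tmpdir}"
    if [ -d "${target}" ]; then
      # multi-file backup set (e.g. influxdb): hash each file individually
      find "${target}" -type f ! -name '*.md5' \
        -exec sh -c 'md5sum "$1" > "$1".md5' _ {} \;
    else
      md5sum "${target}" > "${target}".md5
      MD5VALUE=$(md5sum "${target}" | awk '{ print $1 }')
    fi
  fi
}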

Doesn't work with influx

I got this container working with mysql but it fails with my influx databases. I think it has to do with this line:

influxd backup -database $DB -host {DBHOST} ${TMPDIR}/${TARGET}

I think {DBHOST} needs to be prefixed with $, like ${DBHOST}. It would also be nice if the port argument were used here (if the database isn't running on the default port, it won't work).
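For reference, a corrected sketch of that line; the ${DBPORT} addition is an assumption, but influxd backup does accept host:port in its -host flag:

influxd backup -database "${DB}" -host "${DBHOST}:${DBPORT}" "${TMPDIR}/${TARGET}"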

DB backup isn't performed on Ubuntu 18.04

Hello,
thank you for this tool, but unfortunately, it looks like I have an issue with it.

I added this backup tool as an additional service to my docker-compose file like this:

...

  dbbackup:
    image: tiredofit/db-backup:latest
    container_name: db-backupper
    environment:
      DB_TYPE: pgsql
      DB_HOST: storage-db
      DB_NAME: storage_db
      DB_USER: postgres
      DB_PASS: ${POSTGRES_PASSWORD}
      # Test db backup every 3 minutes
      DB_DUMP_FREQ: 3
      # Cleanup backups older than 3 days
      DB_CLEANUP_TIME: 4320
    depends_on:
      - db
    networks:
      - my-network
    volumes:
      - ${DB_BACKUP_DIR}:/backup

I only added the line with the more frequent DB backup interval for debugging.

On my local computer with Ubuntu 16.04 it works just fine, but after I pushed my services to a remote server running Ubuntu 18.04 I see no backups at all.

Checking the logs with docker logs CONTAINER-NAME gives the output below; as far as I can see, the only suspicious line is s6-svc: fatal: unable to control /var/run/s6/services/-d: No such file or directory, but my knowledge of this system-level stuff is very limited.

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 00-functions: applying... 
[fix-attrs.d] 00-functions: exited 0.
[fix-attrs.d] 01-s6: applying... 
[fix-attrs.d] 01-s6: exited 0.
[fix-attrs.d] 02-zabbix: applying... 
[fix-attrs.d] 02-zabbix: exited 0.
[fix-attrs.d] 03-logrotate: applying... 
[fix-attrs.d] 03-logrotate: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-startup: executing... 
[cont-init.d] 00-startup: exited 0.
[cont-init.d] 01-timezone: executing... 
[NOTICE] ** [timezone] Setting timezone to 'America/Vancouver'
[cont-init.d] 01-timezone: exited 0.
[cont-init.d] 02-permissions: executing... 
[cont-init.d] 02-permissions: exited 0.
[cont-init.d] 03-zabbix: executing... 
[NOTICE] ** [zabbix] Disabling Zabbix Monitoring Functionality
s6-svc: fatal: unable to control /var/run/s6/services/-d: No such file or directory
[cont-init.d] 03-zabbix: exited 0.
[cont-init.d] 04-cron: executing... 
[NOTICE] ** [cron] Disabling Cron
[cont-init.d] 04-cron: exited 0.
[cont-init.d] 05-smtp: executing... 
[NOTICE] ** [smtp] Disabling SMTP Features
[cont-init.d] 05-smtp: exited 0.
[cont-init.d] 99-container: executing... 
[cont-init.d] 99-container: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

Maybe you have some ideas about what went wrong? Thank you for your time.

Allow specifying extra options for the various backup commands

So I need to add the option --default-character-set=utf8mb4 to the mysqldump command, but others may not want this option. It would be awesome if I could specify custom options for the backup commands.

The reason I need this is described in the Nextcloud docs (backups won't work for utf8mb4 unless it is specified on the dump command):
https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/mysql_4byte_support.html#mariadb-support

Thanks!
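A sketch of what such a pass-through could look like; EXTRA_DUMP_OPTS is a purely hypothetical variable name, not something the image currently implements:

# Hypothetical: forward user-supplied flags straight to mysqldump
EXTRA_DUMP_OPTS="--default-character-set=utf8mb4"
# left unquoted on purpose so multiple space-separated flags split correctly
mysqldump -h "${DB_HOST}" -u "${DB_USER}" -p"${DB_PASS}" ${EXTRA_DUMP_OPTS} "${DB_NAME}" > "${DB_NAME}".sql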

S3 SigV2 deprecated

When using this image with Backblaze B2 I get the following error. After some research it appears that SigV2 is deprecated, no new S3 buckets created since mid-2020 support it, and B2 doesn't support it at all.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>InvalidRequest</Code>
    <Message>The V2 signature authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256</Message>
</Error>

https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html#UsingAWSSDK-sig2-deprecation

If I find the time I may look into the changes needed, but in the meantime I wanted to bring it to your attention.

Thanks.

Allow multiple backup targets

You can define lists in environment variables. This way we could use the image to back up multiple targets without having to run the image multiple times.

"MessageBusList:0:Name": "Test"
"MessageBusList:0:UserName": demo

Redis Backup Fails during GZIP compression

Scheduled and Manual backups of an unprotected Redis server fail.

/ # backup-now
** Performing Manual Backup
sending REPLCONF capa eof
SYNC sent to master, writing 2942 bytes to '/tmp/backups/redis__redis_20210321-152913.rdb'
Transfer finished with success.
Transfer finished with success.
[INFO] ** [db-backup] Dumping Redis - Flushing Redis Cache First
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[NOTICE] ** [db-backup] Compressing backup with gzip
stat: can't stat '/tmp/backups/redis__redis_20210321-152913.rdb.gz': No such file or directory
[NOTICE] ** [db-backup] Backup of redis__redis_20210321-152913.rdb.gz created with the size of  bytes
mv: can't rename '/tmp/backups/*.md5': No such file or directory
mv: can't rename '/tmp/backups/redis__redis_20210321-152913.rdb.gz': No such file or directory
[NOTICE] ** [db-backup] Sending Backup Statistics to Zabbix
stat: can't stat '/backup/redis__redis_20210321-152913.rdb.gz': No such file or directory
date: can't stat '/backup/redis__redis_20210321-152913.rdb.gz': No such file or directory
[NOTICE] ** [db-backup] Cleaning up old backups
rm: '/backup' is a directory

The container is running the tiredofit/db-backup:latest tag. I'm running it on Unraid.
I can see that it is connecting to Redis, and it is generating the backups themselves. If I go into the container I can see dozens of backups in the /tmp/backups folder. So I think it's failing when doing the GZIP.

I have GZIP compression enabled along with multicore processing, but I use both of those settings successfully with other containers, because I also back up Postgres and MariaDB using your container.

Any ideas? Or settings that I should try?

wrong timezone

Hi,
I have a question: the timezone is wrong even when I set the environment variable TZ="Europe/Paris", and even when I add tzdata in the Dockerfile. What can I do? Thanks for your help!

Postgres-based TimescaleDB back-up difficulties

First of all, huge thanks for making this tool available and helping to get rid of the back-up hassle!

Currently I'm trying to use it for backing up a TimescaleDB, but I'm encountering some issues that hopefully you can help me with:

[NOTICE] ** [db-backup] Compressing backup with gzip
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   hypertable
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   chunk
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
[NOTICE] ** [db-backup] Generating MD5 for pgsql_exsyn_TimescaleDB_20210121-114348.sql.gz
[NOTICE] ** [db-backup] Backup of pgsql_exsyn_TimescaleDB_20210121-114348.sql.gz created with the size of 1572 bytes
[INFO] ** [db-backup] Dumping database: decide
[NOTICE] ** [db-backup] Compressing backup with gzip
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   hypertable
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   chunk
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: NOTICE:  hypertable data are in the chunks, no data will be copied
DETAIL:  Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT:  Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
pg_dump: NOTICE:  hypertable data are in the chunks, no data will be copied
DETAIL:  Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT:  Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
pg_dump: NOTICE:  hypertable data are in the chunks, no data will be copied
DETAIL:  Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT:  Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
pg_dump: NOTICE:  hypertable data are in the chunks, no data will be copied
DETAIL:  Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT:  Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
pg_dump: NOTICE:  hypertable data are in the chunks, no data will be copied
DETAIL:  Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT:  Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
[NOTICE] ** [db-backup] Generating MD5 for pgsql_decide_TimescaleDB_20210121-114348.sql.gz
[NOTICE] ** [db-backup] Backup of pgsql_decide_TimescaleDB_20210121-114348.sql.gz created with the size of 229055344 bytes
[NOTICE] ** [db-backup] Sending Backup Statistics to Zabbix

Is there a possibility to force a full pgsql dump to circumvent this?

Native backup support for TimescaleDB would be even better, of course, but I'm not sure how to implement their guidelines with your tool: https://docs.timescale.com/latest/using-timescaledb/backup

Please let me know if you need any additional info to provide a meaningful answer.

Thanks,

Rob

DB-CLEANUP doesn't work when using root

This code section is wrong:

### Automatic Cleanup
    if [[ -n "$DB_CLEANUP_TIME" ]]; then
          find $DB_DUMP_TARGET/  -mmin +$DB_CLEANUP_TIME -iname "$DBTYPE_$DBNAME_*.*" -exec rm {} \;
    fi

If you use root, no DB_NAME is specified, which results in no cleanup. Just remove the name section and delete everything in the backup folder older than the given time, like this:

### Automatic Cleanup
    if [[ -n "$DB_CLEANUP_TIME" ]]; then
          find $DB_DUMP_TARGET/  -mmin +$DB_CLEANUP_TIME -exec rm {} \;
    fi

It's the easiest solution that comes to my mind...hope it works...
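One more wrinkle worth noting: in the quoted pattern "$DBTYPE_$DBNAME_*.*" the shell parses the variables as DBTYPE_ and DBNAME_, since underscores are valid in variable names, so the filter can never match even when a DB name is set. A sketch that keeps the filter but fixes both problems (the -type f guard is my addition, preventing rm from being run on the directory itself):

### Automatic Cleanup
    if [[ -n "$DB_CLEANUP_TIME" ]]; then
          find "${DB_DUMP_TARGET}"/ -type f -mmin +"${DB_CLEANUP_TIME}" -iname "${DBTYPE}_${DBNAME}_*.*" -exec rm {} \;
    fi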

Using awscli for upload backup files to S3

When using S3, I get the following error:
"The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."

This is probably due to the deprecated API, so I suggest changing the upload to use awscli.
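A sketch of such an upload; the S3_* variable names here are illustrative rather than the image's actual settings, and recent awscli releases sign requests with AWS4-HMAC-SHA256 by default:

aws s3 cp "${DB_DUMP_TARGET}/${TARGET}" "s3://${S3_BUCKET}/${S3_PATH}/" --endpoint-url "https://${S3_HOST}"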

InfluxDB - Backup of multiple databases fails

The script contains a for loop over the $DB_NAME values, presumably to support backing up multiple InfluxDB databases, but target is not set in each iteration, so the backup fails with stat: can't stat '/tmp/backups/influx_home_assistant telegraf_192.168.1.199_20201110-233711.sql': No such file or directory.

Adding target=influx_${DB}_${dbhost}_${now} inside the loop solved the problem for me.
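A sketch of the loop with that line added, using the variable names visible in the error message (the -host value is an assumption):

for DB in ${DB_NAME}; do    # unquoted on purpose: split "home_assistant telegraf" into two names
    target=influx_${DB}_${dbhost}_${now}
    influxd backup -database "${DB}" -host "${dbhost}" "${tmpdir}/${target}"
done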

Can't access Database after update

Hi :-) I recently updated to your latest container, and since then I get:

Database not accessible - retrieving

Are Docker secrets still supported? I'm running in Swarm mode.

Zabbix

What's the whole Zabbix stuff for? When running the container, there's a lot of communication to a Zabbix proxy, which is blocked by my Pi-hole.

[Bug] Docker secrets not working as of v2.1.1

I inject my DB password via Docker Secrets, and noticed that this is no longer working as of v2.1.1.

Looking at the code, the root cause appears to be a result of #43. Specifically, this line:

[[ ( -n "${DB_PASS}" ) ]] && file_env 'DB_PASS'

This neglects to check whether DB_PASS_FILE is set instead, in which case the password still needs to be read and injected.
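A sketch of the corrected guard, also honoring the _FILE variant (file_env is the helper referenced above):

[[ ( -n "${DB_PASS}" || -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'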

I'll open a PR for this shortly.


This was a surprising regression, and I only caught it because I have monitoring that indicates when my backups haven't succeeded for X amount of time. @tiredofit, just curious: have you thought about any plans for a test suite? Or a checklist for the manual testing that should go into each release?

In any case, thanks for the tool 👍

Make it possible to run the container with another user

I can't find any documentation on how to run this container as another user. I would like to run it with the --user flag; with security in mind, running as root is not a good idea.
When running with --user, I get a lot of permission errors.

MYSQL_PASS_STR erroneous

Connecting to the MySQL database is not possible, but after changing the MYSQL_PASS_STR from

        [[ ( -n "${DB_PASS}" ) ]] && MYSQL_PASS_STR=" -p'${DBPASS}'"

to

        [[ ( -n "${DB_PASS}" ) ]] && MYSQL_PASS_STR=" -p${DBPASS}"

everything works fine. The single quotes embedded in the variable's value are not re-parsed as shell quoting when the string is expanded, so they were being sent to the server as literal characters of the password.

How to backup all couchdb databases?

Hi, thanks for creating this very helpful tool.
I'm using an application that creates and uses CouchDB databases. Unfortunately I don't know the names of these databases, as they are created dynamically.
A backup of all databases would be optimal for me. How can I do that? Just omit the DB_NAME environment variable?
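For what it's worth, CouchDB itself exposes the database list at the /_all_dbs endpoint, so an all-databases mode could enumerate them at backup time. A naive sketch, assuming the default port and basic-auth credentials (a JSON parser such as jq would be more robust than tr):

DATABASES=$(curl -s -u "${DB_USER}:${DB_PASS}" "http://${DB_HOST}:5984/_all_dbs" | tr -d '[]" ' | tr ',' ' ')
for DB in ${DATABASES}; do
    echo "would back up: ${DB}"
done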

mySQL/MariaDB - Backup fails when password contains spaces

Hi, first and foremost, thank you for the great work on this container!

I have four instances of this container installed, 3 backing up successfully, 1 not.
2 x MariaDB - NO spaces in password - successful
1 x Postgres - spaces IN password - successful
1 x MariaDB - spaces IN password - NOT successful.

I have enabled debug mode and am able to reproduce the behaviour by copying the mysql command (below) from the debug output and executing it from within the db-backup container.

(I have changed the words used for the password, but have maintained the length of each individual word, and the case, type of character used.)

Once I place quotes around the password, the command executes as expected.
Without them, the components of the password after the first space are treated as options passed to mysql and error out as unsupported (unrecognised) options.

mysql -umonicauser -P 3306 -h monica_mariadb -pgreen balloon mauve buster -e 'SELECT COUNT(*) FROM information_schema.FILES;'
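For comparison, a sketch with the expansion quoted so the entire password travels as a single argument (variable names assumed from the image's debug output):

mysql -u"${DBUSER}" -P 3306 -h "${DBHOST}" -p"${DBPASS}" -e 'SELECT COUNT(*) FROM information_schema.FILES;'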

Backups fail with compression

Default compression of GZ works.
XZ fails: the output file size is 0.
I then tried ZSTD just to give you additional info.
ZSTD works OK, so perhaps it's just a problem with XZ?

If the value SPLIT_DB is set to TRUE, post-script.sh only shows information about the last backed-up DB

I have different DBs in my MariaDB and I'm interested in backing them up as separate files, not all inside the same file.

So, the script /assets/custom-scripts/post-script.sh is executed after the last DB backup and shows statistics only for the last backed-up DB, not the previous ones. Can I have this script run for every DB that was backed up?

And by the way, could you give an example of sending this script's output through the SMTP support included in the container?

Thanks in advance.

Docker build fails with the following error

  • apk add --virtual .db-backup-run-deps bzip2 mongodb-tools mariadb-client libressl pigz postgresql postgresql-client redis xz
    ERROR: unsatisfiable constraints:
    mongodb-tools (missing):

Syntax error in 2.7.0

Summary

I just updated the docker image tiredofit/db-backup from 2.4.0 to 2.7.0. Unfortunately this one doesn't work anymore. There seems to be a syntax error in the run script.

Steps to reproduce

Just execute a previously working config. For me this happens with both a MariaDB and a Postgres configuration.

What is the expected correct behavior?

The script should work.

Relevant logs and/or screenshots

./run: line 38: syntax error near unexpected token `)'
./run: line 38: `    "mysql" | "MYSQL" | "mariadb" | "MARIADB")'

Environment

  • Image version / tag: tiredofit/db-backup:2.7.0
  • Host OS: Synology

Possible fixes

In commit eb0ee61, two ;; terminators were removed from the run script; this likely causes the syntax error. The previous version, 2.6.1, works fine.
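A minimal reproduction of that failure mode, for reference:

# Every branch of a bash case statement must end with ';;'. Removing one makes
# the next pattern line fail with: syntax error near unexpected token `)'
case "${DBTYPE}" in
    "mysql" | "MYSQL" | "mariadb" | "MARIADB")
        echo "mysql branch"
        ;;
    "pgsql" | "PGSQL")
        echo "pgsql branch"
        ;;
esac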

Support for raspberrypi (4)

Hey =) As your container is the best solution available on Docker, it would be really cool if you supported Raspberry Pi images too. =)

Container won't stop (stalling on "syncing disks")

I cannot stop the container.

When I run docker stop <container_name>, the docker-compose logs show:

example_db_backup_1 | [cont-finish.d] executing container finish scripts...
example_db_backup_1 | [cont-finish.d] done.
example_db_backup_1 | [s6-finish] syncing disks.

Steps to reproduce:

git clone https://github.com/tiredofit/docker-db-backup.git
cd docker-db-backup/examples
docker-compose --project-name dbbackuptest up

Both containers seem to start OK as per the docker-compose output:

example-db           | Initializing database
example-db           | 2018-11-05 15:49:04 0 [Warning] InnoDB: Failed to set O_DIRECT on file./ibdata1; CREATE: Invalid argument, continuing anyway. O_DIRECT is known to result in 'Invalid argument' on Linux on tmpfs, see MySQL Bug#26662.
example-db-backup    | ./run: line 5: [: !=: unary operator expected
example-db-backup    | ./run: line 40: [: =: unary operator expected
example-db-backup    | ** [db-backup] Initialized at at Mon Nov 5 07:49:05 PST 2018
example-db-backup    | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
example-db-backup    | [s6-init] ensuring user provided files have correct perms...exited 0.
example-db-backup    | [fix-attrs.d] applying ownership & permissions fixes...
example-db-backup    | [fix-attrs.d] 01-s6: applying... 
example-db-backup    | [fix-attrs.d] 01-s6: exited 0.
example-db-backup    | [fix-attrs.d] 02-zabbix: applying... 
example-db-backup    | [fix-attrs.d] 02-zabbix: exited 0.
example-db-backup    | [fix-attrs.d] 03-logrotate: applying... 
example-db-backup    | [fix-attrs.d] 03-logrotate: exited 0.
example-db-backup    | [fix-attrs.d] done.
example-db-backup    | [cont-init.d] executing container initialization scripts...
example-db-backup    | [cont-init.d] 01-permissions: executing... 
example-db-backup    | [cont-init.d] 01-permissions: exited 0.
example-db-backup    | [cont-init.d] 02-zabbix: executing... 
example-db-backup    | [cont-init.d] 02-zabbix: exited 0.
example-db-backup    | [cont-init.d] 03-cron: executing... 
example-db-backup    | **** [cron] Disabling Cron
example-db-backup    | [cont-init.d] 03-cron: exited 0.
example-db-backup    | [cont-init.d] 04-smtp: executing... 
example-db-backup    | **** [smtp] Disabling SMTP Features
example-db-backup    | [cont-init.d] 04-smtp: exited 0.
example-db-backup    | [cont-init.d] 99-container-init: executing... 
example-db-backup    | [cont-init.d] 99-container-init: exited 0.
example-db-backup    | [cont-init.d] done.
example-db-backup    | [services.d] starting services
example-db-backup    | [services.d] done.
example-db-backup    | 
example-db-backup    | ** [zabbix] Starting Zabbix Agent# 
[...]
example-db           | 2018-11-05 15:49:11 0 [Note] Reading of all Master_info entries succeded
example-db           | 2018-11-05 15:49:11 0 [Note] Added new Master_info '' to hash table
example-db           | 2018-11-05 15:49:11 0 [Note] mysqld: ready for connections.
example-db           | Version: '10.3.10-MariaDB-1:10.3.10+maria~bionic'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution# 

In a new shell, when I execute docker stop example-db-backup, only the following 3 lines get added to the docker-compose output:

example-db-backup    | [cont-finish.d] executing container finish scripts...
example-db-backup    | [cont-finish.d] done.
example-db-backup    | [s6-finish] syncing disks.

But the docker stop command never ends.
Trying to exec a command in the container gives an error:

docker exec -ti example-db-backup hostname
# OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "process_linux.go:86: executing setns process caused \"exit status 21\"": unknown

And docker kill does not work either.

I am using:

  • Docker version 18.06.1-ce, build e68fc7a
  • docker-compose version 1.22.0, build f46880fe

`DB_USER` and `DB_PASS` became required for all database types

DB_USER and DB_PASS were optional for mongodb, and DB_PASS was optional for many other database types. However, due to recent changes, they are now required for all database types. Failing to supply either one will fail the backup. See this error:

[ERROR] ** [db-backup] error: neither DB_USER nor DB_USER_FILE are set but are required
[ERROR] ** [db-backup] error: neither DB_PASS nor DB_PASS_FILE are set but are required

This can be fixed either upstream or in this repo. I will work on a temporary fix.
