Backup Reverse Proxy Site Configs to BS
I like to keep copies of things in various other places, and I like having historical records of things. My BookStack (BS) installation here is what I primarily use as a notebook, blog, and general how-to, because I CRS on a regular basis. I figured: why not use the power of some bash scripting, run on a schedule, to look at things, and then use the BS API to add the results here as a backup? Sure, my PM is backed up regularly, but for quick views and edits, opening BS is much faster.
Concept
Use the BS API in a bash script to back up the file contents of my NGINX reverse proxy container's /etc/nginx/sites-available/ files to a specific book.
Features
- Each container = Chapter in the designated NRP Book
- Each backup = versioned page, named after the site config file
- Each page includes:
- DTG of backup
- Container Name
- Container ID
- Container IP
- Version Number
- SHA256 Hash
- Current File Output
- Difference Output
- Script runs on a schedule from CRON and backs up a config only if the file has changed
- Automatically creates the chapter if missing
The script creates a hash of each site config to help track changes. The initial run uploads everything in the directory; subsequent runs generate a new hash and compare: if a change is detected, the new version is uploaded. Previous hashes per container are stored for reference.
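The compare-then-upload logic can be sketched on its own. This is just an illustration of the idea, using a temp directory as a stand-in for the real config and tracker locations:

```shell
# Sketch of the change-detection idea: hash the file, compare against the
# hash stored on the last run, and only act when they differ.
tmpdir=$(mktemp -d)
conf="$tmpdir/example.com"          # stand-in for a site config file
state="$tmpdir/example.com.sha256"  # stand-in for the tracker file

printf 'server { listen 80; }\n' > "$conf"
new_hash=$(sha256sum "$conf" | awk '{print $1}')
old_hash=$(cat "$state" 2>/dev/null)

if [ "$new_hash" != "$old_hash" ]; then
  echo "changed: would upload new version"
  echo "$new_hash" > "$state"   # remember this hash for the next run
else
  echo "unchanged: skipping"
fi
```

Run it a second time against the same unchanged file and the stored hash matches, so nothing would be uploaded.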
Script
Script is loaded and run from the NGINX reverse proxy container: /root/nginx_backup_live.sh
Update the script's permissions:
chmod +x /root/nginx_backup_live.sh
CRON schedule:
# EVERY HOUR
0 * * * * /root/nginx_backup_live.sh
# EVERY FOUR HOURS
0 */4 * * * /root/nginx_backup_live.sh
# DAILY AT 0200
0 2 * * * /root/nginx_backup_live.sh
# PICK ONE AND ADD IT OR ROLL YOUR OWN!
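If you want each run's output captured for troubleshooting, a variant of the daily entry that appends stdout and stderr to a log file (the log path here is just an example):

```
0 2 * * * /root/nginx_backup_live.sh >> /var/log/nginx_backup_live.log 2>&1
```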
The actual live script:
#!/bin/bash
# Config
BOOKSTACK_URL="https://yoursite.com/api"
BOOK_ID=30
TOKEN_ID="ADD_TOKEN_HERE"
TOKEN_SECRET="ADD_SECRET_HERE"
CONFIG_DIR="/etc/nginx/sites-available"
HASH_DIR="$HOME/.nginx-backup-tracker"
# Mapping container names to IDs (used for internal reference)
declare -A container_ids=(
[nginxproxy]=151
[ag.skynet2.net]=152
[vw.skynet2.net]=153
# ADD ALL YOUR CONTAINERS!
)
mkdir -p "$HASH_DIR"
for conf_file in "$CONFIG_DIR"/*; do
  name=$(basename "$conf_file")
  id=${container_ids[$name]:-UNKNOWN}
  ip="192.168.0.${id}"
  [[ "$id" == "UNKNOWN" ]] && echo "⚠️ Unknown container ID for $name (file: $conf_file)" && continue

  hash_file="$HASH_DIR/$name.sha256"
  file_hash=$(sha256sum "$conf_file" | awk '{print $1}')
  if [[ -f "$hash_file" && "$file_hash" == "$(cat "$hash_file")" ]]; then
    echo "🟢 [$name] No changes since last backup (hash unchanged)"
    continue
  fi
  echo "$file_hash" > "$hash_file"

  dtg=$(TZ="America/Phoenix" date "+%d%b%y %H%M" | tr 'a-z' 'A-Z')

  # Version number = count of previously archived copies, plus one
  version=$(ls "$HASH_DIR/${name}_v"*.sha256 2>/dev/null | wc -l)
  version=$((version + 1))
  # Archive this version of the config (the .sha256 extension keeps the count above working)
  cp "$conf_file" "$HASH_DIR/${name}_v${version}.sha256"

  # HTML-escape the config so it renders literally inside the <pre> block;
  # & must be escaped first. Newlines are preserved (jq escapes them for JSON later).
  encoded_content=$(sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g; s/"/\&quot;/g' "$conf_file")

  page_html="<table class=\"align-center\" style=\"border-collapse: collapse; border-width: 0px; border-style: none; width: 100%;\" border=\"0\">
  <colgroup><col style=\"width: 22.0487%;\"><col style=\"width: 22.7638%;\"><col style=\"width: 20.5049%;\"><col style=\"width: 17.1567%;\"><col style=\"width: 17.639%;\"></colgroup>
  <tbody>
  <tr><td><span style=\"color: rgb(53, 152, 219);\"><strong>DTG OF IMPORT</strong></span></td><td><span style=\"color: rgb(53, 152, 219);\"><strong>CONTAINER NAME</strong></span></td><td><span style=\"color: rgb(53, 152, 219);\"><strong>CONTAINER ID</strong></span></td><td><span style=\"color: rgb(53, 152, 219);\"><strong>IP</strong></span></td><td><span style=\"color: rgb(53, 152, 219);\"><strong>VERSION OF FILE</strong></span></td></tr>
  <tr><td>$dtg</td><td>$name</td><td>$id</td><td>$ip</td><td>$version</td></tr>
  <tr><td colspan=\"5\"><span style=\"color: rgb(53, 152, 219);\"><strong>SHA256 HASH VALUE</strong></span></td></tr>
  <tr><td colspan=\"5\">$file_hash</td></tr>
  </tbody></table>
  <pre><code class=\"language-nginx\">$encoded_content</code></pre>"

  # Create or get chapter ID
  chapter_resp=$(curl -s -H "Authorization: Token $TOKEN_ID:$TOKEN_SECRET" \
    -H "Accept: application/json" \
    "$BOOKSTACK_URL/books/$BOOK_ID/chapters")
  echo "$chapter_resp" > /tmp/bookstack_chapters_response.json
  chapter_id=$(echo "$chapter_resp" | jq -r ".data[] | select(.name == \"$name\") | .id")

  if [[ -z "$chapter_id" || "$chapter_id" == "null" ]]; then
    echo "📘 Chapter '$name' not found. Creating it..."
    chapter_payload=$(jq -n --arg name "$name" --argjson book_id "$BOOK_ID" '{name: $name, book_id: $book_id}')
    chapter_result=$(curl -s -X POST "$BOOKSTACK_URL/chapters" \
      -H "Authorization: Token $TOKEN_ID:$TOKEN_SECRET" \
      -H "Content-Type: application/json" \
      -d "$chapter_payload")
    echo "$chapter_result" > /tmp/bookstack_chapter_create_result.json
    chapter_id=$(echo "$chapter_result" | jq -r '.id')
  fi

  page_title="Site Config Ver $version $dtg"
  echo -e "\n📤 Uploading page for $name..."

  json_payload=$(jq -n \
    --arg name "$page_title" \
    --arg html "$page_html" \
    --argjson book_id "$BOOK_ID" \
    --argjson chapter_id "$chapter_id" \
    '{name: $name, book_id: $book_id, chapter_id: $chapter_id, html: $html}')

  result=$(curl -s -X POST "$BOOKSTACK_URL/pages" \
    -H "Authorization: Token $TOKEN_ID:$TOKEN_SECRET" \
    -H "Content-Type: application/json" \
    -d "$json_payload")

  if echo "$result" | grep -q '"error"'; then
    echo "❌ Upload failed for $name"
    echo "API Response: $result"
  else
    echo "✅ Uploaded new config version for $name"
  fi
done
# Cleanup temp debug files
rm -f /tmp/bookstack_chapters_response.json /tmp/bookstack_chapter_create_result.json
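One detail worth calling out: the config text has to be HTML-entity-escaped before it is embedded in the page's pre/code block, and the ampersand must be escaped first so it doesn't re-escape the entities produced by the later substitutions. A quick standalone demo of that sed chain:

```shell
# Escape &, <, > and " as HTML entities; & must come first in the chain,
# otherwise the & inside &lt; / &gt; / &quot; would get double-escaped.
printf 'a&b <c> "d"\n' | \
  sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g; s/"/\&quot;/g'
# -> a&amp;b &lt;c&gt; &quot;d&quot;
```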