This guide shows you how to automate Cloudstic backups using cron jobs, systemd timers, and other scheduling tools.
Why Automate Backups?
Manual backups are unreliable. Automation ensures:
Consistency: Backups run on schedule, even when you forget
Versioning: Multiple snapshots over time for point-in-time recovery
Disaster recovery: Recent backups are always available
Peace of mind: Set it and forget it
Test your automation thoroughly before relying on it. Verify backups are created and restorable.
Prerequisites
Before automating:
Initialize your repository
cloudstic init -password "your passphrase" -add-recovery-key
Test manual backup
cloudstic backup -source local:~/Documents
Test manual restore
cloudstic restore -output test-restore.zip
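Restorability is worth checking mechanically, not just by eye. The sketch below compares checksums of the original and restored data; because it has to run anywhere, a plain `cp` stands in for the `cloudstic backup` / `cloudstic restore` pair shown above:

```shell
#!/bin/sh
# Verify that data survives a backup/restore round trip by comparing
# checksums. The cp line is a stand-in for the real pair of commands:
#   cloudstic backup -source local:"$src"
#   cloudstic restore -output "$restored"
set -eu
src=$(mktemp)
restored=$(mktemp)
echo "important data" > "$src"
cp "$src" "$restored"   # stand-in for backup + restore
orig_sum=$(sha256sum "$src" | cut -d' ' -f1)
rest_sum=$(sha256sum "$restored" | cut -d' ' -f1)
[ "$orig_sum" = "$rest_sum" ] && echo "restore verified"
rm -f "$src" "$restored"
```

Once this pattern works with the real commands against a test file, you can trust the automated schedule with real data.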
Using Profiles for Automation (Recommended)
Profiles are the cleanest way to automate backups. Set up once, then your scripts only need the encryption password:
Create a store with encryption
cloudstic store new \
-name prod-s3 \
-uri s3:my-bucket/backups \
-s3-region eu-west-1 \
-password-env BACKUP_PASSWORD
Create profiles for each source
# Local documents
cloudstic profile new \
-name documents \
-source local:~/Documents \
-store-ref prod-s3
# Google Drive
cloudstic auth new -name google-main -provider google
cloudstic auth login -name google-main
cloudstic profile new \
-name gdrive \
-source gdrive-changes \
-store-ref prod-s3 \
-auth-ref google-main
Create a backup script
#!/bin/bash
set -euo pipefail
export BACKUP_PASSWORD="your passphrase"
# Back up all profiles
cloudstic backup -all-profiles --no-prompt
# Apply retention
for profile in documents gdrive; do
  cloudstic forget -profile "$profile" \
    -keep-daily 30 -keep-weekly 8 -keep-monthly 12 \
    -prune --no-prompt
done
Make it executable:
chmod +x ~/bin/backup.sh
Schedule with cron
0 2 * * * /home/user/bin/backup.sh >> /home/user/logs/backup.log 2>&1
Use --no-prompt in automation scripts to ensure commands never hang waiting for interactive input. Missing credentials will cause a clear error instead.
Add or remove profiles from profiles.yaml without touching the backup script. Use cloudstic backup -all-profiles and it picks up changes automatically.
Setting Up Environment Variables
Store credentials in environment variables to avoid typing them repeatedly.
Create a Configuration File
Create ~/.cloudstic_env:
#!/bin/bash
# Cloudstic configuration
# Storage backend
export CLOUDSTIC_STORE=s3:my-backup-bucket/laptop/
# AWS credentials (for S3)
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export CLOUDSTIC_S3_REGION=us-east-1
# Encryption (choose one)
export CLOUDSTIC_PASSWORD="your passphrase"
# OR for automation:
# export CLOUDSTIC_ENCRYPTION_KEY="64-character-hex-key"
# Source configuration
export CLOUDSTIC_SOURCE=local:$HOME/Documents
Secure the file:
chmod 600 ~/.cloudstic_env
Never commit this file to version control. Add it to .gitignore if your home directory is tracked.
Load Configuration
In your backup scripts:
#!/bin/bash
source ~/.cloudstic_env
cloudstic backup -source "$CLOUDSTIC_SOURCE"
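Before sourcing, a script can refuse credential files that are readable by other users. A minimal sketch, assuming GNU stat (Linux); on macOS the equivalent is `stat -f %Lp`:

```shell
#!/bin/sh
# Refuse to load a credentials file unless it is owner-only (mode 600).
set -eu
env_file=$(mktemp)        # stand-in for ~/.cloudstic_env
chmod 600 "$env_file"
mode=$(stat -c %a "$env_file")   # GNU stat; macOS: stat -f %Lp
if [ "$mode" != "600" ]; then
  echo "refusing to load $env_file: permissions are $mode, expected 600" >&2
  exit 1
fi
echo "permissions ok"
# . "$env_file"   # safe to source now
rm -f "$env_file"
```

This fails fast if a `chmod` was forgotten, instead of silently running with exposed credentials.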
Automation with Cron (Linux/macOS)
Cron is the standard Unix job scheduler.
Basic Cron Job
Create a backup script
Create ~/bin/backup.sh:
#!/bin/bash
set -euo pipefail
# Load configuration
source ~/.cloudstic_env
# Log file
LOG_FILE="$HOME/logs/backup.log"
mkdir -p "$(dirname "$LOG_FILE")"
# Run backup
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Starting backup" >> "$LOG_FILE"
if cloudstic backup -source local:~/Documents >> "$LOG_FILE" 2>&1; then
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup successful" >> "$LOG_FILE"
else
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup failed" >> "$LOG_FILE"
  exit 1
fi
Make it executable:
chmod +x ~/bin/backup.sh
Test the script
Run manually to verify:
~/bin/backup.sh
cat ~/logs/backup.log
Add to crontab
Edit your crontab:
crontab -e
Add a cron entry:
# Run backup daily at 2 AM
0 2 * * * /home/user/bin/backup.sh
# Run backup every 6 hours
0 */6 * * * /home/user/bin/backup.sh
# Run backup Monday-Friday at 6 PM
0 18 * * 1-5 /home/user/bin/backup.sh
Use absolute paths in cron jobs. Avoid ~ or relative paths.
Verify cron setup
List your cron jobs:
crontab -l
Wait for the scheduled time and check logs:
tail -f ~/logs/backup.log
Cron Schedule Examples
Schedule | Cron Expression | Description
Daily at 2 AM | 0 2 * * * | Once per day
Every 6 hours | 0 */6 * * * | Four times daily
Weekdays at 6 PM | 0 18 * * 1-5 | Monday-Friday
Sundays at noon | 0 12 * * 0 | Weekly
First of month | 0 3 1 * * | Monthly
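A rough structural check can catch the most common crontab mistake, a missing time field. This sketch only counts words (five schedule fields plus a command), so it is a sanity check, not a full cron parser:

```shell
#!/bin/sh
# Sanity-check crontab entries: each non-comment line needs five time
# fields followed by at least one command word. Word counting only --
# it does not validate ranges like 0-59 or step syntax.
check_cron_line() {
  words=$(echo "$1" | wc -w)
  [ "$words" -ge 6 ]
}
check_cron_line "0 2 * * * /home/user/bin/backup.sh" && echo "ok: daily entry"
check_cron_line "0 2 * * /home/user/bin/backup.sh" || echo "bad: missing a field"
```

Running a check like this over `crontab -l` output before saving can save a silent no-op schedule.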
Automation with Systemd (Linux)
Systemd timers are a modern alternative to cron.
Create a Systemd Service
Create service file
Create ~/.config/systemd/user/cloudstic-backup.service:
[Unit]
Description=Cloudstic backup service
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=%h/.cloudstic_env
ExecStart=/usr/local/bin/cloudstic backup -source local:%h/Documents
StandardOutput=append:%h/logs/backup.log
StandardError=append:%h/logs/backup.log
[Install]
WantedBy=default.target
Create timer file
Create ~/.config/systemd/user/cloudstic-backup.timer:
[Unit]
Description=Cloudstic backup timer
Requires=cloudstic-backup.service
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
[Install]
WantedBy=timers.target
Persistent=true ensures missed runs execute on next boot.
Enable and start the timer
# Reload systemd configuration
systemctl --user daemon-reload
# Enable timer (start on boot)
systemctl --user enable cloudstic-backup.timer
# Start timer now
systemctl --user start cloudstic-backup.timer
Verify timer status
# Check timer status
systemctl --user status cloudstic-backup.timer
# List all timers
systemctl --user list-timers
# View logs
journalctl --user -u cloudstic-backup.service -f
Test the service manually
Trigger a backup immediately:
systemctl --user start cloudstic-backup.service
Systemd Timer Schedule Examples
# Daily at 2 AM
OnCalendar=*-*-* 02:00:00
# Every 6 hours
OnCalendar=*-*-* 00,06,12,18:00:00
# Weekdays at 6 PM
OnCalendar=Mon..Fri 18:00
# Weekly on Sunday at noon
OnCalendar=Sun *-*-* 12:00:00
# Monthly on the 1st
OnCalendar=*-*-01 03:00:00
Advanced Backup Script
A production-ready script with logging, error handling, and retention:
~/bin/cloudstic-backup.sh
#!/bin/bash
set -euo pipefail
# Configuration
source ~/.cloudstic_env
LOG_DIR="$HOME/logs"
LOG_FILE="$LOG_DIR/backup-$(date +%Y%m%d).log"
mkdir -p "$LOG_DIR"
# Logging function
log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
log "======================================"
log "Starting Cloudstic backup"
# Check connectivity (for remote backends)
if ! ping -c 1 8.8.8.8 > /dev/null 2>&1; then
  log "ERROR: No network connectivity"
  exit 1
fi
# Run backup
log "Backing up: $CLOUDSTIC_SOURCE"
if cloudstic backup \
  -source "$CLOUDSTIC_SOURCE" \
  -tag automated \
  >> "$LOG_FILE" 2>&1; then
  log "Backup completed successfully"
else
  log "ERROR: Backup failed with exit code $?"
  exit 1
fi
# Apply retention policy
log "Applying retention policy"
if cloudstic forget \
  -keep-last 7 \
  -keep-daily 30 \
  -keep-weekly 8 \
  -keep-monthly 12 \
  -prune \
  >> "$LOG_FILE" 2>&1; then
  log "Retention policy applied successfully"
else
  log "WARNING: Retention policy failed with exit code $?"
fi
# Weekly integrity check (on Sundays)
if [ "$(date +%u)" = "7" ]; then
  log "Running weekly integrity check"
  if cloudstic check >> "$LOG_FILE" 2>&1; then
    log "Integrity check passed"
  else
    log "ERROR: Integrity check failed"
    exit 1
  fi
fi
log "Cloudstic backup completed"
log "======================================"
# Rotate old logs (keep last 30 days)
find "$LOG_DIR" -name "backup-*.log" -mtime +30 -delete
Make it executable:
chmod +x ~/bin/cloudstic-backup.sh
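The `find -mtime +30 -delete` rotation at the end of the script can be rehearsed in a scratch directory before pointing it at real logs. The backdated timestamp below assumes GNU `touch -d`:

```shell
#!/bin/sh
# Rehearse the log-rotation rule on throwaway files: one fresh log and
# one backdated 40 days. Only the old one should be deleted.
set -eu
scratch=$(mktemp -d)
touch "$scratch/backup-new.log"
touch -d "40 days ago" "$scratch/backup-old.log"   # GNU touch
find "$scratch" -name "backup-*.log" -mtime +30 -delete
remaining=$(ls "$scratch")
echo "remaining: $remaining"
rm -rf "$scratch"
```

`-mtime +30` matches files last modified more than 30 full days ago, so the 40-day-old file is removed and the fresh one survives.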
Monitoring and Alerting
Email Notifications on Failure
Send email when backups fail:
~/bin/backup-with-email.sh
#!/bin/bash
source ~/.cloudstic_env
LOG_FILE="$HOME/logs/backup.log"
if ! cloudstic backup -source local:~/Documents >> "$LOG_FILE" 2>&1; then
  echo "Cloudstic backup failed. See attached log." | \
    mail -s "Backup Failure on $(hostname)" \
      -a "$LOG_FILE" \
      you@example.com
  exit 1
fi
Install mailutils or sendmail for the mail command:
sudo apt install mailutils # Debian/Ubuntu
sudo yum install mailx # RHEL/CentOS
brew install mailutils # macOS
Health Check Pings
Use Healthchecks.io or similar services:
#!/bin/bash
source ~/.cloudstic_env
HEALTHCHECK_URL="https://hc-ping.com/your-uuid"
# Start ping
curl -fsS --retry 3 "$HEALTHCHECK_URL/start" > /dev/null
# Run backup
if cloudstic backup -source local:~/Documents; then
  # Success ping
  curl -fsS --retry 3 "$HEALTHCHECK_URL" > /dev/null
else
  # Failure ping
  curl -fsS --retry 3 "$HEALTHCHECK_URL/fail" > /dev/null
  exit 1
fi
Cloud-Specific Automation
AWS Lambda Backup
Run backups from Lambda (e.g., backing up EFS to S3):
import os
import subprocess

def lambda_handler(event, context):
    os.environ['CLOUDSTIC_STORE'] = 's3:my-backup-bucket'
    os.environ['CLOUDSTIC_ENCRYPTION_KEY'] = os.environ['ENCRYPTION_KEY']  # from env var
    result = subprocess.run([
        '/opt/cloudstic',
        'backup',
        '-source', 'local:/mnt/efs',
    ], capture_output=True, text=True)
    if result.returncode != 0:
        raise Exception(f"Backup failed: {result.stderr}")
    return {
        'statusCode': 200,
        'body': result.stdout
    }
Package Cloudstic in a Lambda layer and schedule with EventBridge.
GitHub Actions Backup
Back up repositories to S3:
.github/workflows/backup.yml
name: Backup Repository
on:
  schedule:
    - cron: '0 2 * * *'  # Daily at 2 AM UTC
  workflow_dispatch:  # Manual trigger
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Install Cloudstic
        run: |
          wget https://github.com/cloudstic/cli/releases/latest/download/cloudstic_linux_amd64.tar.gz
          tar xzf cloudstic_linux_amd64.tar.gz
          sudo mv cloudstic /usr/local/bin/
      - name: Run backup
        env:
          CLOUDSTIC_STORE: s3:my-backup-bucket
          CLOUDSTIC_ENCRYPTION_KEY: ${{ secrets.ENCRYPTION_KEY }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          cloudstic backup -source local:.
Backing Up Cloud Sources
Google Drive Automated Backup
#!/bin/bash
source ~/.cloudstic_env
# Back up Google Drive using incremental changes API
cloudstic backup \
-source gdrive-changes \
-tag gdrive \
-tag automated
# Apply retention
cloudstic forget -keep-daily 30 -source gdrive -prune
OneDrive Automated Backup
#!/bin/bash
source ~/.cloudstic_env
# Back up OneDrive
cloudstic backup \
-source onedrive-changes \
-tag onedrive \
-tag automated
# Apply retention
cloudstic forget -keep-daily 30 -source onedrive -prune
For cloud sources, the first backup is slow (full scan). Subsequent backups use change APIs and are much faster.
Troubleshooting Automated Backups
Cron Job Doesn’t Run
Check cron service is running:
sudo systemctl status cron # Debian/Ubuntu
sudo systemctl status crond # RHEL/CentOS
Verify crontab syntax:
crontab -l
Check system logs:
grep CRON /var/log/syslog
Test script manually:
~/bin/backup.sh
Environment Variables Not Loaded
Cron has a minimal environment. Always source your config file:
#!/bin/bash
source ~/.cloudstic_env # Must be first
cloudstic backup ...
Or set variables directly in the script:
export CLOUDSTIC_STORE=s3:my-bucket
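You can reproduce cron's stripped-down environment locally with `env -i`, which makes the failure mode visible: variables exported in your interactive shell never reach the job unless the script sets them itself:

```shell
#!/bin/sh
# Simulate cron's minimal environment: a variable exported in the
# parent shell does not survive into an env-cleared subprocess.
export CLOUDSTIC_STORE=s3:my-bucket
result=$(env -i sh -c 'echo "${CLOUDSTIC_STORE:-unset}"')
echo "under cron-like env: $result"
```

If a backup works interactively but fails under cron, run it once through `env -i sh -c '…'` to confirm which variables are missing.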
Backup Fails Silently
Redirect output to a log file:
0 2 * * * /home/user/bin/backup.sh >> /home/user/logs/backup.log 2>&1
Or use systemd with StandardOutput and StandardError.
"Repository locked" Error
A previous backup may still be running or crashed without releasing the lock.
# Check if backup is running
ps aux | grep cloudstic
# If no process, break the lock
cloudstic break-lock
Prevent overlapping runs in your script:
#!/bin/bash
LOCK_FILE="$HOME/.cloudstic-backup.lock"
if [ -f "$LOCK_FILE" ]; then
  echo "Backup already running (lock file exists)"
  exit 1
fi
touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT
# Run backup
cloudstic backup ...
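The check-then-touch pattern above leaves a small race window between the test and the touch. Where util-linux is available (standard on most Linux distributions), `flock` takes the lock atomically; a sketch of the semantics:

```shell
#!/bin/sh
# Atomic locking with flock(1): while one file descriptor holds the
# exclusive lock, a second non-blocking attempt on the same file fails
# immediately instead of racing a check-then-touch.
set -eu
lock=$(mktemp)
exec 9>"$lock"
if flock -n 9; then
  status=held
else
  status=busy
fi
echo "first attempt: $status"
# A contending process trying the same lock file now gets -n failure:
if flock -n "$lock" true; then
  contender=acquired
else
  contender=blocked
fi
echo "contender: $contender"
flock -u 9
rm -f "$lock"
```

In a real script the idiom is `exec 9>"$lockfile"; flock -n 9 || exit 1` at the top; the lock is released automatically when the script exits, even on a crash.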
Best Practices
Test automation before relying on it
Run manual backups and restores to verify the setup works.
Monitor backup success
Use health check services or email alerts.
Rotate logs
Delete old log files to save space:
find ~/logs -name "*.log" -mtime +30 -delete
Run periodic integrity checks
Add weekly cloudstic check runs:
0 3 * * 0 /usr/local/bin/cloudstic check >> ~/logs/check.log 2>&1
Combine backup with retention
Clean up old snapshots automatically:
cloudstic backup ...
cloudstic forget -keep-daily 30 -prune
Document your setup
Keep notes on:
Backup schedule
Retention policy
Storage credentials location
Recovery procedure
Next Steps
Retention Policies Manage automated snapshot lifecycle
Encryption Keys Secure credentials for automated backups
Restoring Files Test your automated backups
Check Command Automate integrity verification