How to Set Up Incremental Backups with Restic and S3-Compatible Storage (Fast, Encrypted, and Automated)

Modern backups are not just about copying files to an external disk. If you manage a Linux server, a development workstation, or even a home lab, you need backups that are incremental, encrypted, and easy to restore. In this tutorial, you’ll configure restic (a fast, deduplicating backup tool) to send backups to S3-compatible object storage such as MinIO, Backblaze B2 (S3 API), Wasabi, or an on-prem S3 gateway. The result is a secure backup setup that scales well and can be automated with systemd.

Why restic + S3 is a strong backup combo

Restic is popular because it encrypts data before it leaves your machine, stores only changed blocks (deduplication), and keeps snapshots you can browse and restore from. Pairing it with S3-compatible storage makes your backups resilient: object storage is designed for durability, and you can back up over the network without mounting remote filesystems.

Prerequisites

You’ll need a Linux machine (Debian/Ubuntu/Fedora/AlmaLinux all work), an S3 endpoint (cloud or self-hosted), and credentials (access key and secret key). Make sure the target bucket exists or that your provider allows creating it via API. Also decide what to back up: typical choices are /etc, application configs, and data directories like /srv or /var/lib (be careful with databases; see notes below).

Step 1: Install restic

On Ubuntu/Debian:

sudo apt update && sudo apt install -y restic

On Fedora:

sudo dnf install -y restic

Verify:

restic version

Step 2: Export S3 credentials securely

Restic uses environment variables for S3 credentials. For a quick test in your current shell:

export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"

export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

If you’re using a non-AWS endpoint (MinIO, Wasabi, etc.), the endpoint URL goes into the repository string itself; many providers also expect a region to be set:

export AWS_DEFAULT_REGION="us-east-1"

export RESTIC_REPOSITORY="s3:https://s3.example.com/my-restic-bucket"

For AWS S3, your repository might look like:

export RESTIC_REPOSITORY="s3:s3.amazonaws.com/my-restic-bucket"

Step 3: Initialize the backup repository

Initialize once per repository. Restic will prompt for a repository password (this is used for encryption):

restic init

Store this password carefully. If you lose it, you cannot decrypt your backups.
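For scripted or automated use, restic can read the password from a file instead of prompting. A minimal sketch (the path below is an example; put the file wherever only root can read it):

```shell
# Store the repository password in a root-only file (example path)
sudo sh -c 'umask 077 && echo "YOUR_STRONG_REPO_PASSWORD" > /root/.restic-password'

# Point restic at the file; init (and every later command) will no longer prompt
export RESTIC_PASSWORD_FILE=/root/.restic-password
restic init
```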

Step 4: Run your first incremental backup

Start with a small but meaningful set of paths. Quote the exclude pattern so your shell doesn’t expand the glob before restic sees it:

restic backup /etc /home --exclude '/home/*/.cache'

Run the same command again later and you’ll get an incremental snapshot: restic will upload only what changed. To see what was saved:

restic snapshots

To verify repository integrity (recommended after initial setup):

restic check
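By default, restic check verifies the repository structure without downloading the backed-up data itself. To also read back a sample of the stored data and catch silent corruption, you can use --read-data-subset (note this downloads data, so it costs bandwidth on metered object storage):

```shell
# Additionally read back one tenth of the pack files, chosen at random
restic check --read-data-subset=1/10
```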

Step 5: Create a smart retention policy (forget + prune)

Backups are only useful if they don’t grow forever. Restic can enforce retention rules and remove unneeded data. A common policy is: keep daily backups for 7 days, weekly for 4 weeks, and monthly for 12 months:

restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune

The --prune flag actually removes unreferenced data, reclaiming space in the S3 bucket.
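Before enforcing a retention policy for the first time, it’s worth previewing which snapshots it would delete:

```shell
# Show what the policy would remove without actually deleting anything
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --dry-run
```

Once the output looks right, rerun the command without --dry-run and with --prune to reclaim space.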

Step 6: Automate backups with systemd (service + timer)

For reliable automation, use a systemd timer instead of cron. Create an environment file that only root can read, for example:

sudo nano /etc/restic.env

Add:

AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY

AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY

AWS_DEFAULT_REGION=us-east-1

RESTIC_REPOSITORY=s3:https://s3.example.com/my-restic-bucket

RESTIC_PASSWORD=YOUR_STRONG_REPO_PASSWORD
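Because this file contains both your S3 credentials and the repository password, lock it down so only root can read it:

```shell
# Restrict the environment file to root
sudo chown root:root /etc/restic.env
sudo chmod 600 /etc/restic.env
```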

Now create the service unit:

sudo nano /etc/systemd/system/restic-backup.service

Example content:

[Unit]
Description=Restic Backup
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
EnvironmentFile=/etc/restic.env
ExecStart=/usr/bin/restic backup /etc /home --exclude /home/*/.cache
ExecStartPost=/usr/bin/restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune

[Install]
WantedBy=multi-user.target

Create the timer:

sudo nano /etc/systemd/system/restic-backup.timer

Example content:

[Unit]
Description=Run Restic Backup Daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable and start:

sudo systemctl daemon-reload

sudo systemctl enable --now restic-backup.timer

Check status and logs:

systemctl list-timers | grep restic

journalctl -u restic-backup.service -e
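Rather than waiting for the timer to fire, you can trigger the service once by hand and confirm it completes cleanly:

```shell
# Run one backup immediately via the same unit the timer will use
sudo systemctl start restic-backup.service
```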

Step 7: Test a restore (don’t skip this)

A backup you haven’t restored from is just a hope. List snapshots, pick one, then restore to a temporary directory:

restic snapshots

restic restore latest --target /tmp/restic-restore-test

To restore a single file or directory, use restic ls to locate its path inside a snapshot, then restore it with --include, or stream the file contents with restic dump.
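A short sketch of a targeted restore (the paths below are examples; substitute ones that exist in your snapshots):

```shell
# Browse the contents of the latest snapshot under /etc
restic ls latest /etc

# Restore only one directory from the latest snapshot
restic restore latest --target /tmp/restic-restore-test --include /etc/nginx

# Or stream a single file's contents to stdout
restic dump latest /etc/hostname > /tmp/hostname.bak
```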

Practical notes for servers and databases

If you back up database files directly (like PostgreSQL under /var/lib/postgresql), you risk capturing inconsistent data. Prefer application-aware backups: use pg_dump for PostgreSQL, mysqldump for MySQL/MariaDB, or filesystem snapshots (LVM/ZFS) followed by restic. Also exclude large volatile paths such as caches, build folders, and container image layers unless you truly need them.
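For PostgreSQL, one common pattern is to stream a logical dump straight into restic over stdin, so no inconsistent on-disk files are ever captured (sketch; the database name is a placeholder):

```shell
# Stream a consistent dump directly into the repository;
# --stdin-filename sets the name the dump gets inside the snapshot
sudo -u postgres pg_dump mydb | restic backup --stdin --stdin-filename mydb.sql
```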

Conclusion

With restic and S3-compatible storage, you get fast incremental backups, strong encryption, simple retention rules, and clean automation through systemd timers. Once you’ve tested restores and verified the schedule, you’ll have a backup system that behaves like a reliable utility: quiet when it works, loud when it fails, and easy to trust when disaster strikes.
