Overview
This step-by-step guide shows how to deploy MinIO with Docker and secure it behind Caddy for automatic HTTPS. MinIO is a high-performance, S3-compatible object store that you can self-host for backups, logs, media, and AI datasets. We will use Docker Compose on Ubuntu 24.04, set up Caddy as a reverse proxy with free Let’s Encrypt TLS, create a bucket, and test access using the AWS CLI. The result is a production-ready, S3-compatible endpoint at your own domain.
Prerequisites
You will need: (1) a clean Ubuntu 24.04 server with a public IP, (2) a domain with two DNS A records pointing to your server (for example, minio.example.com and console.example.com), (3) Docker and Docker Compose installed, and (4) port 80/443 open in your firewall and cloud security group. Replace example.com with your domain throughout this tutorial.
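Before you start, it is worth confirming that both hostnames already resolve to your server’s public IP (dig is in the dnsutils package if it is missing):
# Each command should print your server's public IP
dig +short minio.example.com
dig +short console.example.com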
Step 1: Open firewall and prepare project
Ensure inbound HTTP/HTTPS traffic is allowed so Caddy can obtain and renew TLS certificates. On Ubuntu with UFW:
sudo ufw allow 80,443/tcp
sudo ufw reload
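To confirm the rules are active:
sudo ufw status verbose
Then create a project directory for the stack: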
mkdir -p ~/minio-caddy && cd ~/minio-caddy
Step 2: Create Docker Compose file
We will run MinIO and Caddy on the same Docker network. Caddy will request certificates from Let’s Encrypt automatically and reverse proxy to MinIO’s API and web console; its /data volume persists certificates across container restarts so they are not re-issued. The MINIO_SERVER_URL and MINIO_BROWSER_REDIRECT_URL variables tell MinIO its public URLs so the console and presigned links work correctly behind the proxy.
cat > docker-compose.yml <<'YAML'
services:
  minio:
    image: minio/minio:latest
    command: server /data --address ":9000" --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=admin
      - MINIO_ROOT_PASSWORD=ChangeMe-StrongSecret123!
      - MINIO_SERVER_URL=https://minio.example.com
      - MINIO_BROWSER_REDIRECT_URL=https://console.example.com
    volumes:
      - minio_data:/data
    restart: unless-stopped
    networks:
      - edge
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    depends_on:
      - minio
    restart: unless-stopped
    networks:
      - edge

volumes:
  minio_data:
  caddy_data:
  caddy_config:

networks:
  edge:
    driver: bridge
YAML
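Before starting anything, you can have Compose validate the file; it exits non-zero and prints the problem if the YAML is malformed:
docker compose config --quiet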
Step 3: Create Caddyfile for automatic HTTPS
This configuration terminates TLS, enables compression, and proxies API and console to MinIO. Make sure both hostnames have valid DNS A records pointing to your server’s public IP before starting.
cat > Caddyfile <<'CADDY'
minio.example.com {
    encode zstd gzip
    reverse_proxy minio:9000
}

console.example.com {
    encode zstd gzip
    reverse_proxy minio:9001
}
CADDY
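You can also lint the Caddyfile with the same image before launching the stack:
docker run --rm -v "$PWD/Caddyfile:/etc/caddy/Caddyfile:ro" caddy:2 caddy validate --config /etc/caddy/Caddyfile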
Step 4: Start the stack
Bring everything up in the background. Caddy will obtain TLS certificates from Let’s Encrypt as soon as it starts and keep them renewed automatically.
docker compose up -d
docker compose logs -f caddy
Once ready, visit https://console.example.com to access the MinIO console. Log in with the root credentials you set (admin / ChangeMe-StrongSecret123!). For security, change this password after your first login and create dedicated users for apps instead of sharing root.
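You can also confirm the API endpoint from the command line: MinIO exposes an unauthenticated liveness probe that should return HTTP 200 once everything is up.
# Expect a 200 response through Caddy
curl -I https://minio.example.com/minio/health/live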
Step 5: Create a bucket and access keys with MinIO Client (mc)
Use the MinIO Client to create a bucket and a non-root user with read/write access. Running mc in Docker avoids installing additional packages on the host. Because each docker run starts a fresh container, we mount a host directory for mc’s configuration so the alias created in the first command is available to the later ones.
# Create a host directory for mc's config so the alias persists between runs
mkdir -p ~/.mc
# Add the MinIO endpoint alias (uses HTTPS through Caddy)
docker run --rm -it -v ~/.mc:/root/.mc minio/mc \
  alias set myminio https://minio.example.com admin 'ChangeMe-StrongSecret123!'
# Create a bucket
docker run --rm -it -v ~/.mc:/root/.mc minio/mc mb myminio/my-bucket
# Create an app user (access key) and attach the built-in readwrite policy
docker run --rm -it -v ~/.mc:/root/.mc minio/mc \
  admin user add myminio appuser 'Another-StrongSecret456!'
docker run --rm -it -v ~/.mc:/root/.mc minio/mc \
  admin policy attach myminio readwrite --user appuser
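To confirm the user exists and has the policy attached:
docker run --rm -it -v ~/.mc:/root/.mc minio/mc admin user info myminio appuser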
Step 6: Test with AWS CLI
MinIO is S3-compatible, so existing tools work by pointing to your endpoint. The AWS CLI is a convenient way to confirm access.
# Install AWS CLI if missing (Ubuntu 24.04 no longer ships an awscli apt package; use snap)
sudo snap install aws-cli --classic
# Export temporary credentials for testing
export AWS_ACCESS_KEY_ID=appuser
export AWS_SECRET_ACCESS_KEY='Another-StrongSecret456!'
# The AWS CLI requires a region; MinIO defaults to us-east-1
export AWS_DEFAULT_REGION=us-east-1
# List buckets on MinIO via your HTTPS endpoint
aws s3 ls --endpoint-url https://minio.example.com
# Upload a file to the new bucket
echo "hello from minio" > test.txt
aws s3 cp test.txt s3://my-bucket/ --endpoint-url https://minio.example.com
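To verify the full round trip, download the object back and compare it with the original:
aws s3 cp s3://my-bucket/test.txt test-copy.txt --endpoint-url https://minio.example.com
diff test.txt test-copy.txt && echo "round trip OK"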
Maintenance and hardening tips
- Change the root password after initial setup and keep it offline. Create per-application users with the least privileges required (see the bucket-scoped policy sketch after this list). Rotate secrets regularly.
- Back up the MinIO data volume and, if critical, replicate to another MinIO cluster or cloud S3 using lifecycle policies or tools like rclone. Test restores.
- Keep Docker images up to date: run “docker compose pull && docker compose up -d” during a maintenance window. MinIO releases frequent fixes and performance updates.
- Monitor health and logs: “docker compose logs -f minio” and “docker compose logs -f caddy”. Use MinIO Console dashboards to watch capacity and performance.
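As a sketch of the least-privilege idea above, the following creates a custom policy scoped to a single bucket and swaps it in for the broad readwrite policy; the policy name my-bucket-rw and file path are illustrative:
# Write a policy allowing access to my-bucket only (illustrative name)
cat > my-bucket-rw.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket"] },
    { "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::my-bucket/*"] }
  ]
}
JSON
# Register the policy, then replace readwrite with it on appuser
docker run --rm -it -v ~/.mc:/root/.mc -v "$PWD:/work" minio/mc \
  admin policy create myminio my-bucket-rw /work/my-bucket-rw.json
docker run --rm -it -v ~/.mc:/root/.mc minio/mc \
  admin policy detach myminio readwrite --user appuser
docker run --rm -it -v ~/.mc:/root/.mc minio/mc \
  admin policy attach myminio my-bucket-rw --user appuser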
Troubleshooting
- Certificate errors: confirm DNS is correct and that ports 80 and 443 are open. Caddy uses the HTTP-01 or TLS-ALPN-01 challenge, so Let’s Encrypt must be able to reach your server on port 80 or 443 for the first certificate. Check “docker compose logs -f caddy”, and see the certificate check after this list.
- 502/Bad Gateway: ensure “minio” container is running and healthy, and verify the Caddyfile hostnames match your browser URL. Restart with “docker compose restart”.
- Permission or upload failures: verify bucket policies or user permissions via the MinIO Console, and confirm your AWS CLI command uses “--endpoint-url”.
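To inspect the certificate the server is actually presenting (issuer and validity window):
openssl s_client -connect minio.example.com:443 -servername minio.example.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates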
You now have a secure, S3-compatible object storage endpoint running on your own infrastructure with automatic HTTPS. Use it for application artifacts, backups, and large datasets, and manage everything from an easy web console and standard S3 tooling.