Docker & Docker Compose
Production-grade FastCMS deployment with Docker Compose — Postgres, Redis, persistence, reverse proxy, scaling, backups.
Deploying FastCMS with Docker Compose
The fastest way to run FastCMS in production is the included
docker-compose.yml. It wires up FastCMS, PostgreSQL, and (optionally)
Redis with sensible defaults: persistent volumes, health checks,
auto-restart, environment-driven configuration.
This guide walks you through the full path from "clean server" to "FastCMS serving HTTPS traffic at your domain."
1. Quickstart (single host, 60 seconds)
```bash
# clone the repo
git clone https://github.com/aalhommada/fastCMS.git
cd fastCMS

# generate a secret + configure
cp .env.example .env
echo "SECRET_KEY=$(openssl rand -hex 32)" >> .env
echo "POSTGRES_PASSWORD=$(openssl rand -hex 16)" >> .env

# bring up the stack
docker compose up -d
```

Wait ~20 seconds for Postgres to initialise. Then:

```bash
docker compose logs -f fastcms      # see startup
curl http://localhost:8000/health   # → {"status":"healthy",...}
```

- Admin UI: http://localhost:8000/admin
- OpenAPI: http://localhost:8000/docs
The Dockerfile uses uv for dependency installation — cold builds finish in ~30 seconds, warm rebuilds in ~3 seconds.
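For reference, a uv-based Dockerfile typically follows this shape. This is a minimal sketch, not the repo's actual file; the module path `app.main:app` is a placeholder:

```dockerfile
FROM python:3.12-slim
# copy the uv binary from the official image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
WORKDIR /app
# install dependencies first so this layer stays cached between code changes
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev
COPY . .
# module path is a placeholder; adjust to the real app entrypoint
CMD ["uv", "run", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

The deps-then-code layer ordering is what makes warm rebuilds fast: a code-only change reuses the cached `uv sync` layer.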
2. What the included stack runs
The docker-compose.yml in the repo wires up two services:
| Service | Image | Purpose |
|---|---|---|
| `fastcms` | built from local `Dockerfile` | The FastCMS app — uvicorn + uv-built venv |
| `postgres` | `postgres:17-alpine` | Production database (replaces dev SQLite) |
Volumes:
- `fastcms-data` (named) → `/app/data` inside the container — uploaded files, SQLite fallback, backups
- `./plugins` → `/app/plugins` (bind mount) — your custom plugins
- `./hooks` → `/app/hooks` (bind mount) — your `.py` hook scripts
- `postgres-data` (named) → Postgres data files
You can add Redis by uncommenting a few lines (next section).
3. Environment variables you'll care about
All settings flow through the .env file. The compose file references
them with safe defaults.
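Compose interpolates `${VAR}` references from `.env`. A hypothetical excerpt showing the two common forms, fail-fast for secrets and a default value for everything else:

```yaml
services:
  fastcms:
    environment:
      - SECRET_KEY=${SECRET_KEY:?SECRET_KEY is required}   # abort startup if unset
      - BASE_URL=${BASE_URL:-http://localhost:8000}        # fall back to a safe default
```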
Required
```bash
SECRET_KEY=<openssl rand -hex 32>        # signs JWTs — never commit
POSTGRES_PASSWORD=<openssl rand -hex 16>
```

Common production tweaks

```bash
BASE_URL=https://api.yourapp.com         # your public URL
CORS_ORIGINS=https://app.yourapp.com     # restrict CORS in prod

# admin auto-creation on first boot
ADMIN_EMAIL=admin@yourapp.com
ADMIN_PASSWORD=<a strong password>

# email (SMTP) — needed for verification + password reset
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=apikey
SMTP_PASSWORD=<provider token>
SMTP_FROM_EMAIL=noreply@yourapp.com
```

Optional — Redis for multi-instance realtime
To run more than one FastCMS replica behind a load balancer, enable
Redis pub/sub. Add this to docker-compose.yml:
```yaml
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis-data:/data

volumes:
  redis-data:
```

And add to the `fastcms` service's `environment:` block:

```yaml
      - REDIS_ENABLED=true
      - REDIS_URL=redis://redis:6379/0
```

WebSocket events will now broadcast across all replicas via Redis.
Optional — cloud file storage
For >1 host or >1 replica, switch from local-disk to S3 / Azure Blob:
```bash
STORAGE_TYPE=s3
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_S3_BUCKET=fastcms-uploads
```

(Or `STORAGE_TYPE=azure` + Azure Blob settings — see `.env.example`.)
The Dockerfile already bundles boto3 and azure-storage-blob.
4. Persistence
Three things are stateful and must survive container restarts:
- PostgreSQL — automatic, stored in the named volume `postgres-data`
- Uploaded files — in `/app/data/files`, stored in the named volume `fastcms-data`
- Plugins — bind-mounted from the host `./plugins` directory
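Named volumes can be snapshotted to tarballs on the host with a throwaway container. A minimal sketch, assuming the docker CLI is installed and the volume names match the compose file above:

```shell
# timestamp used in the archive names
STAMP=$(date +%Y%m%d-%H%M%S)

# stream each named volume into a tarball in the current directory
docker run --rm -v fastcms-data:/data:ro -v "$PWD":/backup alpine \
  tar czf "/backup/fastcms-data-$STAMP.tar.gz" -C /data .
docker run --rm -v postgres-data:/data:ro -v "$PWD":/backup alpine \
  tar czf "/backup/postgres-data-$STAMP.tar.gz" -C /data .
```

Restoring is the reverse: mount the volume read-write and `tar xzf` the archive back into `/data`.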
If you're moving to a managed Postgres (RDS, Cloud SQL, Supabase), just
point DATABASE_URL at it and remove the postgres service:
```bash
DATABASE_URL=postgresql+asyncpg://user:pass@host.region.rds.amazonaws.com/fastcms
```

5. Reverse proxy for HTTPS
The included compose file listens on plain HTTP port 8000. Put a TLS-terminating reverse proxy in front of it for production.
Option A — Caddy (easiest, automatic Let's Encrypt)
```yaml
  # add to docker-compose.yml
  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
      - caddy-config:/config
    restart: unless-stopped

volumes:
  caddy-data:
  caddy-config:
```

Create `Caddyfile`:

```
api.yourapp.com {
    reverse_proxy fastcms:8000
}
```

Run `docker compose up -d` and Caddy will auto-fetch a Let's Encrypt cert.
Option B — nginx
```nginx
server {
    listen 443 ssl http2;
    server_name api.yourapp.com;

    ssl_certificate     /etc/letsencrypt/live/api.yourapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourapp.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support for realtime
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
```

6. Scaling beyond one replica
To run multiple FastCMS containers behind a load balancer:
- Enable Redis (section 3) so realtime events fan out across replicas.
- Use cloud storage (section 3) — local-disk file storage isn't shared.
- Set `replicas:` in compose (Compose v3.4+) or scale on the CLI: `docker compose up -d --scale fastcms=3`
- Put a load balancer in front (Caddy/nginx round-robin, or a cloud LB).
- Sticky sessions are NOT required — JWTs are stateless. WebSocket connections are pinned to one replica, but events flow via Redis.
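As an alternative to `--scale`, the replica count can live in the compose file itself; a sketch of the `deploy.replicas` form, honored by `docker compose up` in recent versions:

```yaml
  # excerpt of the fastcms service definition
  fastcms:
    deploy:
      replicas: 3
```

Note that a replicated service can't publish a fixed host port; let the reverse proxy reach the replicas over the compose network instead.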
For really large deployments, consider Kubernetes — each pod runs the same image with the same env vars. The included Dockerfile is Kubernetes-ready.
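In Kubernetes terms, that translates to something like the following hypothetical minimal Deployment; the image name and the `fastcms-env` Secret are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastcms
spec:
  replicas: 3
  selector:
    matchLabels: {app: fastcms}
  template:
    metadata:
      labels: {app: fastcms}
    spec:
      containers:
        - name: fastcms
          image: yourregistry/fastcms:latest   # push the compose-built image here
          envFrom:
            - secretRef: {name: fastcms-env}   # same vars as .env, stored as a Secret
          ports:
            - containerPort: 8000
```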
7. Backups
FastCMS has a built-in backup API. Schedule it via cron on the host:
```bash
# crontab -e
0 3 * * * curl -s -X POST https://api.yourapp.com/api/v1/backups \
  -H "Authorization: Bearer $ADMIN_TOKEN"
```

Backups live in `/app/data/backups/` inside the container — i.e. inside the `fastcms-data` named volume. Mirror them off-host with rclone / restic / your S3 lifecycle policy.
Database-level backup is also worth scheduling separately
(pg_dump | restic on the host) so you can recover from a corrupted
backup zip too.
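Such a dump can run from the host via the compose-managed container. A sketch, assuming the `postgres` service name from the compose file and default `postgres`/`fastcms` user and database names:

```shell
# logical dump, compressed and dated; run from the compose project directory
docker compose exec -T postgres \
  pg_dump -U postgres fastcms | gzip > "fastcms-db-$(date +%F).sql.gz"
```

Restore with `gunzip -c <file> | docker compose exec -T postgres psql -U postgres fastcms`.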
8. Health checks & observability
The container ships with a HEALTHCHECK that hits /health. Compose
automatically waits for Postgres to be healthy before starting FastCMS.
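In compose terms, an equivalent healthcheck looks like this (intervals illustrative, and assuming curl is present in the image):

```yaml
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s   # grace period for cold starts (see pitfalls below)
```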
To ship logs to a central aggregator:
```yaml
    logging:
      driver: "json-file"   # or "syslog", "gelf", "fluentd", etc.
      options:
        max-size: "10m"
        max-file: "3"
```

The application emits JSON-formatted logs (`LOG_FORMAT=json`) — feed them directly into Datadog, Loki, ELK, or CloudWatch Logs.
9. Common pitfalls
| Pitfall | Fix |
|---|---|
| `SECRET_KEY` change kicks all users out | Set it once and store it in a secret manager (Vault, AWS Secrets Manager). Rotate with `SECRET_KEY_PREVIOUS` to allow old JWTs during the rotation window. |
| File uploads break on second replica | You're still on `STORAGE_TYPE=local`. Switch to S3 or Azure. |
| WebSocket events only reach one tab | Redis isn't enabled. Set `REDIS_ENABLED=true`. |
| Permission denied on volume mount | The container runs as root by default; if you switch to a non-root user, `chown` the data dir. |
| Slow first request after deploy | Cold start. Use `--start-period=30s` in your healthcheck to allow startup. |
| Plugin not loading | Plugins are loaded once at startup. Run `docker compose restart fastcms` after installing one. |
10. Updating to a new release
```bash
cd fastCMS
git pull
docker compose build            # rebuilds the FastCMS image (uv = fast)
docker compose up -d            # rolling restart
docker compose logs -f fastcms
```

For zero-downtime updates with multiple replicas, restart them one at a time:

```bash
docker compose up -d --no-deps --build --scale fastcms=3 fastcms
```

Next steps
- Configure email + OAuth
- Add AI plugins (`fastcms plugin install ai-core`)
- Hardening checklist
- Backup & restore details