Docker Compose for Beginners: Everything You Need to Know for Self-Hosting
Learn Docker Compose from scratch. Master multi-container apps, volumes, networks, and deploy your own services with a single command.
If you’ve ever tried to self-host something and thought “there has to be a better way than running a dozen Docker commands,” you’ve found your answer: Docker Compose.
I remember the exact moment I discovered Docker Compose. I was spinning up a Nextcloud instance with MariaDB, and my command line looked like a crime scene — volumes everywhere, environment variables hardcoded, ports all over the place. Then someone casually dropped “just use docker-compose” and my life changed.
In this guide, you’ll learn how Docker Compose works, why it beats running containers manually, and how to deploy real self-hosting setups (Nextcloud + database, anyone?) without losing your mind.
Why Docker Compose? (vs. docker run)
Let’s be honest: docker run commands get ugly fast.
Here’s a basic WordPress setup:
docker run -d \
--name wordpress \
-p 80:80 \
-e WORDPRESS_DB_HOST=mysql \
-e WORDPRESS_DB_NAME=wordpress \
-e WORDPRESS_DB_USER=wpuser \
-e WORDPRESS_DB_PASSWORD=secretpassword \
-v /data/wordpress:/var/www/html \
wordpress:latest
Add a MySQL container, a reverse proxy, maybe Redis for caching, and you’re copy-pasting commands across three terminal windows. It’s error-prone and unmaintainable.
Docker Compose fixes this. One docker-compose.yml file describes your entire stack. One command (docker compose up -d) launches everything. Your team can read the file and understand your architecture instantly.
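For comparison, here is that WordPress docker run rendered as a compose file. This is a sketch: the mysql service and its variable names are my additions (the original command only hinted at a database), and the passwords are placeholders.

```yaml
# docker-compose.yml: the docker run command above, plus its database
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"
    environment:
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_USER=wpuser
      - WORDPRESS_DB_PASSWORD=secretpassword
    volumes:
      - /data/wordpress:/var/www/html

  mysql:  # companion database, not part of the original command
    image: mysql:8
    environment:
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wpuser
      - MYSQL_PASSWORD=secretpassword
      - MYSQL_ROOT_PASSWORD=secretpassword
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data:
```

Everything that was a flag is now a readable, version-controllable line of YAML.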
My take: Docker Compose is essential for self-hosting. If you’re managing more than one container, use it.
The Basics: Your First Docker Compose File
Create a file called docker-compose.yml:
version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
    container_name: my-nginx
That’s it. This tells Docker:
- Use the latest nginx image
- Map port 80 on your machine to port 80 in the container
- Mount your local html/ folder to nginx's web directory
- Name the container my-nginx
Now run it:
docker compose up -d
Check that it’s running:
docker compose ps
Stop it:
docker compose down
Key commands to remember:
- docker compose up -d — Start everything in the background
- docker compose down — Stop and remove containers
- docker compose logs — View logs from all services
- docker compose logs service-name — View logs from one service
- docker compose restart — Restart containers
- docker compose ps — Show running containers
Understanding Volumes
Volumes are how you persist data. Without them, your data dies when the container dies.
There are three volume types:
1. Bind Mounts (Local Folders)
Mount a folder on your host machine into the container:
services:
  nextcloud:
    image: nextcloud:latest
    volumes:
      - ./nextcloud-data:/var/www/html  # ./nextcloud-data on host → /var/www/html in container
Use this for:
- Development (edit files on your machine, see changes in container)
- Configuration files you want to edit
- Anything you need quick access to
2. Named Volumes
Let Docker manage the storage. Safer for production:
services:
  postgres:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:  # Define the named volume here
Docker stores this in /var/lib/docker/volumes/ and handles the details for you.
Use this for:
- Databases (data consistency matters)
- Anything long-term that you don’t need to edit directly
- Production environments
3. Anonymous Volumes
Create a volume without naming it:
services:
  app:
    image: myapp:latest
    volumes:
      - /data  # Data stored, but Docker manages the path
Honestly, I avoid these. Just use named volumes.
Pro tip: Always use volumes. A container without persistent storage is a container where your data dies. I learned this the hard way.
Networks: Talking Between Containers
Here’s something cool: by default, all services in your docker-compose file can talk to each other using their service name as the hostname.
version: '3.8'

services:
  web:
    image: nextcloud:latest
    environment:
      - NEXTCLOUD_DB_HOST=db  # ← Service name becomes hostname
      - NEXTCLOUD_DB_NAME=nextcloud
      - NEXTCLOUD_DB_USER=nextcloud
      - NEXTCLOUD_DB_PASSWORD=secure_password

  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=secure_password
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
Notice how web references db by name? That works automatically. Docker creates a network and handles DNS resolution.
Important: Compose networks are internal by default, so containers are not reachable from outside the host unless you publish their ports. Only expose the ports you actually need:
services:
  web:
    ports:
      - "80:80"  # ✓ Accessible from outside

  db:
    # No ports listed — only `web` can talk to it
My approach: Only expose the web service. Keep databases, caches, and internal services hidden. If something doesn’t need to be public, don’t expose it.
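If you want even stricter isolation, you can split services across explicit networks so the web tier never touches the database directly. A sketch — the service and network names (proxy, frontend, backend) are my own:

```yaml
services:
  proxy:
    image: nginx:latest
    ports:
      - "80:80"
    networks:
      - frontend

  app:
    image: myapp:latest
    networks:
      - frontend   # reachable by the proxy
      - backend    # can reach the database

  db:
    image: postgres:15
    networks:
      - backend    # only `app` can reach it; `proxy` cannot

networks:
  frontend:
  backend:
```

For most single-stack setups the default network is fine; this pattern earns its keep once several stacks share one host.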
Environment Variables
Store configuration separately from your docker-compose file. Create a .env file:
# .env
POSTGRES_PASSWORD=super_secret_password_here
POSTGRES_USER=appuser
NEXTCLOUD_ADMIN_PASSWORD=another_secret
DOMAIN=nextcloud.example.com
Then reference it in your compose file:
services:
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
Docker reads the .env file automatically and substitutes variables.
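Compose also supports fallback values with the ${VAR:-default} syntax, which is handy when a variable might be missing from .env. The variable names below are illustrative:

```yaml
services:
  db:
    image: postgres:${POSTGRES_VERSION:-15}      # uses 15 if POSTGRES_VERSION is unset
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-appuser}  # falls back to appuser
```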
Critical security note: Never commit .env to git. Add it to .gitignore:
echo ".env" >> .gitignore
Seriously. If you commit secrets, you’ve compromised them instantly.
A Real-World Example: Nextcloud + PostgreSQL + Redis
Let’s build something useful:
version: '3.8'

services:
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    ports:
      - "80:80"
    environment:
      - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER}
      - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
      - NEXTCLOUD_TRUSTED_DOMAINS=${DOMAIN}
      - POSTGRES_HOST=db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - REDIS_HOST=redis
      - PHP_MEMORY_LIMIT=512M
    volumes:
      - nextcloud-data:/var/www/html
      - ./config:/var/www/html/config
    depends_on:
      - db
      - redis
    networks:
      - nextcloud-net

  db:
    image: postgres:15-alpine
    container_name: nextcloud-db
    restart: unless-stopped
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - nextcloud-net
    # No ports exposed — only Nextcloud talks to this

  redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    restart: unless-stopped
    networks:
      - nextcloud-net
    # No ports exposed

volumes:
  nextcloud-data:
  postgres-data:

networks:
  nextcloud-net:
    driver: bridge
The .env file:
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=your_secure_password_here
POSTGRES_USER=nextcloud_user
POSTGRES_PASSWORD=another_secure_password_here
DOMAIN=nextcloud.yourdomain.com
Deploy it:
docker compose up -d
What just happened:
- Nextcloud started and exposed port 80
- PostgreSQL started (hidden, only Nextcloud can access it)
- Redis started for caching (hidden, only Nextcloud can access it)
- All three containers can talk to each other by hostname
- Data persists in named volumes
- Configuration is externalized in .env
Restart behavior: The restart: unless-stopped policy means if your server reboots, all containers restart automatically. Perfect for self-hosting.
Useful Commands for Debugging
Things will break. Here’s how to figure out why:
# View logs from all services
docker compose logs
# Follow logs in real-time (like `tail -f`)
docker compose logs -f
# Logs from one service only
docker compose logs nextcloud
# Last 50 lines
docker compose logs --tail=50
# Show running containers
docker compose ps
# Check resource usage
docker compose stats
# Execute a command inside a container
docker compose exec db psql -U nextcloud_user -d nextcloud
# Stop just one service (don't stop others)
docker compose stop nextcloud
# Restart one service
docker compose restart nextcloud
# Remove everything (containers + volumes)
docker compose down -v # ⚠️ This deletes your data!
# Remove everything except volumes
docker compose down # Safer — volumes persist
Pro debugging tip: If a container exits immediately, check logs:
docker compose logs nextcloud | tail -20
Usually you’ll see permission denied or port already in use or some connection error. The logs tell the story.
Common Mistakes (I’ve Made These)
1. Forgetting Restart Policies
# ❌ Bad — container stays down after a reboot
services:
  db:
    image: postgres:15

# ✅ Good — restarts automatically
services:
  db:
    image: postgres:15
    restart: unless-stopped
2. Exposing Everything
# ❌ Bad — database is publicly accessible
services:
  db:
    image: postgres:15
    ports:
      - "5432:5432"

# ✅ Good — hidden from outside world
services:
  db:
    image: postgres:15
    # No ports
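If you genuinely need to reach the database from the host (say, with a GUI client), a middle ground is to bind the published port to localhost only:

```yaml
services:
  db:
    image: postgres:15
    ports:
      - "127.0.0.1:5432:5432"  # reachable from this machine, invisible to the network
```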
3. No Volume for Databases
# ❌ Bad — data dies with the container
services:
  db:
    image: postgres:15

# ✅ Good — data persists
services:
  db:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:  # don't forget to declare the named volume
4. Hardcoding Secrets in docker-compose.yml
# ❌ Bad — anyone who sees this file sees your password
environment:
  - DATABASE_PASSWORD=MySecretPassword123

# ✅ Good — read from .env
environment:
  - DATABASE_PASSWORD=${DB_PASSWORD}
Next Steps After docker-compose.yml
Once you have a working setup, consider:
- Reverse Proxy: Use Caddy or Nginx to handle HTTPS, SSL certificates, and route traffic
- Backups: Automate backups of your named volumes
- Updates: Use something like Watchtower to auto-update images
- Secrets Management: For production, use Docker secrets instead of .env files
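To make the reverse-proxy idea concrete, here is a rough sketch with Caddy. The Caddyfile contents and volume layout are assumptions, not a drop-in config:

```yaml
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"   # Caddy terminates HTTPS for everything behind it
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile  # routes domains to services
      - caddy-data:/data                  # stores TLS certificates

  nextcloud:
    image: nextcloud:latest
    restart: unless-stopped
    # No ports — Caddy reaches it by service name over the compose network

volumes:
  caddy-data:
```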
Check our guide on securing your VPS for hardening advice.
FAQ
Q: Can I use Docker Compose on a NAS or ARM system? A: Yes. Docker Compose is hardware-agnostic. Make sure your image supports your architecture (ARM for Raspberry Pi, x86 for servers). Most popular images have multi-arch builds.
Q: What if I need multiple instances of the same service?
A: Use the scale command:
docker compose up -d --scale worker=3
This creates 3 instances of the worker service.
Q: How do I backup my Docker volumes?
A: For named volumes, Docker stores data in /var/lib/docker/volumes/. Back up this directory, or use dedicated backup tools. For bind mounts, just back up the folders.
Q: Can I edit docker-compose.yml while containers are running? A: Yes. Then run:
docker compose up -d
Docker will add/remove/modify containers as needed.
Q: What’s the difference between docker-compose and docker compose?
A: docker-compose is the old standalone tool. docker compose (v2) is built into Docker. Use docker compose — it’s newer and faster.
Q: How do I access a service from outside my network? A: You don’t expose the port in docker-compose. Instead, use a reverse proxy (Caddy, Nginx) that sits in front of your services. This gives you HTTPS, load balancing, and security.
Q: Can I use env variables in volumes paths? A: Yes:
volumes:
  - ${HOME}/mydata:/data
Q: What if a service needs to run before another?
A: Use depends_on:
services:
  app:
    depends_on:
      - db
  db:
    image: postgres:15
The db service starts before app. Note that plain depends_on only controls start order — it does not wait for db to actually be ready to accept connections.
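If app must wait until the database actually accepts connections, you can combine depends_on with a healthcheck. The pg_isready probe below is a common pattern for Postgres, sketched here rather than taken from this article's stack:

```yaml
services:
  app:
    image: myapp:latest
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just the start
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```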
Conclusion
Docker Compose is the difference between “managing a few containers” and “this is out of control.”
One file. One command. Everything documented and reproducible.
Start with a simple setup (one or two services). Graduate to complex stacks (Nextcloud + PostgreSQL + Redis). Once you understand the patterns, you can deploy anything.
Go forth and compose. Your future self will thank you for writing a decent docker-compose.yml instead of a shell script with 47 parameters.