If you’ve ever deployed a Django app to a VPS and thought “there has to be a better way,” you’re right. Docker makes the whole thing reproducible, and throwing in Prometheus and Grafana means you actually know what’s happening on your server after you deploy.
This guide walks through the full process: updating a fresh Ubuntu VPS, installing Docker, setting up monitoring with Prometheus and Grafana, and deploying a Django app from GitHub. All containerized, all behind Nginx with SSL.
Updating the VPS
Before touching anything, update and reboot. You want a clean slate.
sudo apt update && sudo apt upgrade -y && sudo apt autoremove -y && sudo reboot
SSH back in after the reboot.
Installing Docker
Add Docker’s official repository and install the engine. Don’t use the docker.io package from Ubuntu’s repos; it’s usually outdated.
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Run without sudo:
sudo usermod -aG docker $USER
newgrp docker
Verify:
docker --version
docker compose version
docker run hello-world
Note that newgrp only applies the group change to your current shell. Log out and back in for it to stick permanently.
Setting Up Prometheus
Prometheus scrapes metrics from your server and services. You’ll pair it with Node Exporter to get CPU, memory, disk, and network stats.
Create a Docker Network
All monitoring containers will talk to each other through this network.
docker network create monitoring
Prometheus Configuration
mkdir -p ~/monitoring/prometheus
cat > ~/monitoring/prometheus/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
EOF
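YAML indentation errors are easy to make, so it's worth validating the file before starting Prometheus. The official image ships with promtool, so you can check the config without installing anything extra (a sketch; overriding the entrypoint this way is an assumption about the image layout):

```shell
# Validate prometheus.yml using promtool from the official image.
docker run --rm \
  -v ~/monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro \
  --entrypoint promtool \
  prom/prometheus:latest check config /etc/prometheus/prometheus.yml
```

If the file is valid, promtool reports success and lists the scrape configs it found.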
Run Node Exporter
Node Exporter exposes your VPS hardware and OS metrics for Prometheus to scrape.
docker run -d \
--name node-exporter \
--network monitoring \
--restart unless-stopped \
-v /proc:/host/proc:ro \
-v /sys:/host/sys:ro \
-v /:/rootfs:ro \
--pid host \
prom/node-exporter:latest \
--path.procfs=/host/proc \
--path.sysfs=/host/sys \
--path.rootfs=/rootfs
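Node Exporter publishes no host port here, so you can't curl it from the VPS directly. To check it's up, hit it from inside the monitoring network with a throwaway container (a sketch; the curlimages/curl image is an assumption, any image with curl works):

```shell
# Fetch the first few metrics from node-exporter over the Docker network.
docker run --rm --network monitoring curlimages/curl:latest \
  -sf http://node-exporter:9100/metrics | head -n 5
```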
Run Prometheus
docker run -d \
--name prometheus \
--network monitoring \
--restart unless-stopped \
-p 9090:9090 \
-v ~/monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
-v prometheus_data:/prometheus \
prom/prometheus:latest
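Since port 9090 is published, you can confirm Prometheus is scraping both targets via its HTTP API (a quick sketch; after a scrape interval or two, both jobs should report "up"):

```shell
# List scrape target health via the Prometheus HTTP API.
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'
```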
Reverse Proxy with Nginx
You probably want Prometheus on a subdomain instead of your-ip:9090. Install Nginx and Certbot if you haven’t already:
sudo apt install -y nginx certbot python3-certbot-nginx
Create the site config (replace prom.yourdomain.com with your actual subdomain):
sudo tee /etc/nginx/sites-available/prometheus << 'EOF'
server {
    listen 80;
    server_name prom.yourdomain.com;

    location / {
        proxy_pass http://localhost:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/prometheus /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
Add SSL:
sudo certbot --nginx -d prom.yourdomain.com
Adding Basic Auth
Prometheus has no built-in authentication, so you should add basic auth through Nginx. Without this, anyone can see your server metrics.
sudo apt install -y apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd admin
Then update the Nginx config to add the auth_basic directives inside the location block:
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://localhost:9090;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
Reload Nginx:
sudo nginx -t && sudo systemctl reload nginx
Heads up: if Certbot has already modified your Nginx config, be careful when editing. Make sure the location block stays inside the server block. Certbot sometimes restructures the file, and pasting a new location block at the bottom of the file (outside the server block) will break Nginx.
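To confirm basic auth is actually enforced, compare an anonymous request with an authenticated one (a sketch; substitute your real subdomain and password):

```shell
# Without credentials: expect 401. With credentials: expect 200.
curl -s -o /dev/null -w '%{http_code}\n' https://prom.yourdomain.com/
curl -s -o /dev/null -w '%{http_code}\n' -u admin:YOUR_PASSWORD https://prom.yourdomain.com/
```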
Setting Up Grafana
Grafana is where you actually see things. Dashboards, graphs, gauges, all of it.
Run Grafana
docker run -d \
--name grafana \
--network monitoring \
--restart unless-stopped \
-p 3000:3000 \
-v grafana_data:/var/lib/grafana \
grafana/grafana:latest
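Before wiring up the reverse proxy, you can verify the container is serving requests with Grafana's health endpoint, which returns a small JSON blob including database status:

```shell
# Should return JSON like {"database": "ok", ...} once Grafana is up.
curl -s http://localhost:3000/api/health
```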
Nginx Reverse Proxy
Same pattern as Prometheus. Point your subdomain’s A record to your VPS IP, then:
sudo tee /etc/nginx/sites-available/grafana << 'EOF'
server {
    listen 80;
    server_name graf.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/grafana /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d graf.yourdomain.com
The extra Upgrade and Connection headers are for WebSocket support, which Grafana uses for live updates.
First Login
Default credentials are admin / admin. Grafana will prompt you to set a new password on first login. No need for Nginx basic auth here since Grafana handles its own authentication.
Connect Prometheus as a Data Source
- Go to Settings (gear icon) → Data Sources → Add data source
- Select Prometheus
- Set the URL to http://prometheus:9090 (this uses the Docker network name, not localhost)
- Click Save & Test
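If you prefer doing this from the terminal, the same data source can be created through Grafana's HTTP API (a sketch; substitute your admin password):

```shell
# Create the Prometheus data source via the Grafana HTTP API.
curl -s -u admin:YOUR_PASSWORD \
  -H 'Content-Type: application/json' \
  -X POST http://localhost:3000/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'
```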
Import the Node Exporter Dashboard
Instead of building dashboards from scratch, import the community-built Node Exporter Full dashboard:
- Go to Dashboards → Import
- Enter dashboard ID: 1860
- Select your Prometheus data source
- Click Import
You’ll immediately get a full infrastructure monitoring dashboard with CPU, memory, disk, network, and system load visualizations.
Useful PromQL Queries
If you want to build custom panels or just poke around in Prometheus directly, here are some useful queries:
# CPU usage percentage
100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
# Memory usage percentage
100 * (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)
# Disk usage percentage
100 - (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} * 100)
# Network received bytes per second
rate(node_network_receive_bytes_total{device="eth0"}[5m])
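You can also run these without opening the UI, via Prometheus's query API (a sketch using the CPU query above; pipe through jq if you have it installed):

```shell
# Run the CPU usage query against the Prometheus HTTP API.
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'
```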
Deploying a Django App
Now the fun part. You’ll deploy a Django app from GitHub using Docker Compose, with PostgreSQL as the database.
Clone the Repo
mkdir -p ~/apps && cd ~/apps
git clone [email protected]:your-org/your-django-app.git
cd your-django-app
The Dockerfile
Here’s a production-ready Dockerfile for Django. One thing to watch out for: don’t use Python 3.14 images yet. Libraries like Pillow don’t have pre-built wheels for it, and the build will fail with zlib errors. Stick with Python 3.12.
FROM python:3.12-slim
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
    zlib1g-dev libjpeg-dev libffi-dev \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV DJANGO_SETTINGS_MODULE=config.settings.prod
RUN SECRET_KEY=build-placeholder ALLOWED_HOSTS=* python manage.py collectstatic --noinput
EXPOSE 8000
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
The SECRET_KEY=build-placeholder trick lets collectstatic run during the build without needing the real secret key. It’s only used for that one command.
The Entrypoint Script
Create an entrypoint.sh that handles migrations and starts Gunicorn:
#!/bin/sh
set -e
echo "Applying database migrations..."
python manage.py migrate --noinput
echo "Starting Gunicorn..."
exec gunicorn config.wsgi --bind 0.0.0.0:8000 --workers 3 --log-file -
Running migrations in the entrypoint means your database is always up to date when the container starts. No manual step to forget.
Docker Compose
services:
  db:
    image: postgres:17-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${DB_NAME:-myapp}
      POSTGRES_USER: ${DB_USER:-postgres}
      POSTGRES_PASSWORD: ${DB_PASSWORD:?DB_PASSWORD is required}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-postgres} -d ${DB_NAME:-myapp}"]
      interval: 5s
      timeout: 5s
      retries: 5

  web:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      DJANGO_SETTINGS_MODULE: config.settings.prod
      DB_HOST: db
      DB_PORT: "5432"
    volumes:
      - media_data:/app/media
    depends_on:
      db:
        condition: service_healthy

volumes:
  postgres_data:
  media_data:
The service_healthy condition means the Django container won’t start until PostgreSQL is actually ready to accept connections. No more race conditions on startup.
Environment Variables
Create a .env file:
DB_NAME=myapp
DB_USER=postgres
DB_PASSWORD=your-secure-password-here
SECRET_KEY=your-django-secret-key-here
ALLOWED_HOSTS=yourdomain.com,www.yourdomain.com
DEBUG=False
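For SECRET_KEY, generate a proper random value instead of inventing one. Python's secrets module (already on the VPS if python3 is installed) does this in one line:

```shell
# Generate a ~67-character URL-safe random string for SECRET_KEY.
python3 -c "import secrets; print(secrets.token_urlsafe(50))"
```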
Build and Deploy
docker compose up -d --build
Create the superuser:
docker compose exec web python manage.py createsuperuser
Nginx and SSL
Point your domain’s A record to the VPS IP, then:
sudo tee /etc/nginx/sites-available/myapp << 'EOF'
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
EOF
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
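Certbot sets up automatic renewal via a systemd timer, but it's worth confirming that renewal will actually succeed before the certificate expires:

```shell
# Simulate a renewal without touching the real certificates.
sudo certbot renew --dry-run
```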
Deploying Updates
When you push changes to the repo, updating the production server is three commands:
cd ~/apps/your-django-app
git pull
docker compose up -d --build
That pulls the latest code, rebuilds the Docker image, and restarts the container. If you changed models, run migrations after:
docker compose exec web python manage.py migrate
Wrapping Up
The whole stack breaks down to: Docker for containerization, Nginx as a reverse proxy with SSL, Prometheus and Node Exporter for metrics collection, and Grafana for visualization. Every service runs in its own container, which means you can update, restart, or debug any part independently.
The monitoring setup alone is worth the effort. Being able to see your server’s CPU spike when you deploy, or catch a memory leak before it crashes your app, saves you from those 3am “why is the site down” moments.
All the code and configs referenced here are battle-tested on a production Ubuntu VPS. If something breaks, docker compose logs -f web is your best friend.
Written and Authored by Chris, Edited and assisted by Claude