Self-Hosting a Full-Stack Portfolio with Docker
I migrated my portfolio website from AWS (Amplify + Elastic Beanstalk) to a self-hosted Linux server. This post covers the architecture, the Docker Compose setup, and the lessons learned along the way.
The Stack
- Frontend: Next.js (React)
- Backend: NestJS + Prisma ORM
- Database: PostgreSQL 16
- Infrastructure: Docker Compose
- Public Access: Cloudflare Tunnel
- Monitoring: Uptime Kuma + Healthchecks.io
- Automation: Node.js script orchestrator for data updates
Why Docker?
I develop on Windows but deploy to a Linux server. Docker bridges that gap - the same containers run identically on both systems.
Having worked with AWS Elastic Container Service before, I already had a foundation in containers, and that experience made the transition straightforward. Instead of wrestling with Elastic Beanstalk configs or Lambda cold starts, I just run `docker compose up` on my own hardware.
The main benefits for this project:
- No "works on my machine" issues between Windows and Ubuntu
- Each service is isolated (frontend, API, database)
- Deployment is just pulling and restarting containers
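That last point really is just two commands. A minimal sketch, assuming images come from a registry (my actual deploy script, shown later, rebuilds them locally instead):

```bash
# Fetch newer images and recreate only the containers whose images changed
docker compose pull
docker compose up -d
```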
Architecture Overview
Everything runs in Docker containers on a single Linux server. The frontend handles routing with Next.js, proxying /api/* requests to the NestJS backend.
Docker Compose Setup
Here's the core of my `docker-compose.yml`:
```yaml
services:
  postgres:
    image: postgres:16-alpine
    container_name: production-db
    restart: always
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - /mnt/data-ssd/database:/var/lib/postgresql/data
    networks:
      - app-network

  api:
    build:
      context: ../geleta-api
    container_name: production-api
    restart: always
    environment:
      DATABASE_URL: ${DATABASE_URL}
      JWT_SECRET: ${JWT_SECRET}
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - app-network

  web:
    build:
      context: ../geleta-frontend
    container_name: production-web
    restart: always
    ports:
      - "8080:80"
    networks:
      - app-network

  tunnel:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared-tunnel
    restart: always
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
    profiles: ["production"]
    networks:
      - app-network

networks:
  app-network:
    name: website_default
    driver: bridge
```

Key decisions:
- Alpine images for smaller footprint
- Named network for service discovery
- Volume mounts for persistent database storage
- Profiles to conditionally run the tunnel only in production
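The profiles entry is what keeps the tunnel out of local development: Compose only starts a profiled service when that profile is explicitly activated. A usage sketch (not my exact invocation):

```bash
# Local development: the tunnel service is skipped
docker compose up -d

# Production: also start services tagged with the "production" profile
docker compose --profile production up -d
```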
Cloudflare Tunnel (No Port Forwarding)
Instead of opening ports on my router, I use Cloudflare Tunnel. It creates an outbound connection from my server to Cloudflare, which then routes traffic to my containers.
Setup:
- Create a tunnel in the Cloudflare Zero Trust dashboard
- Copy the tunnel token
- Add it to your `.env` file
- Configure the public hostname to point to your container
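The token ends up as a single line in the `.env` file that Docker Compose reads for variable substitution (placeholder value here; the real token comes from the Zero Trust dashboard):

```
CLOUDFLARE_TUNNEL_TOKEN=<paste-tunnel-token-from-dashboard>
```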
```
# Configure public hostname in Cloudflare dashboard:
# Domain: geleta.ca
# Service: http://production-web:80
```

Benefits:
- No port forwarding needed
- Free SSL certificates
- DDoS protection
- Works behind any NAT
Multi-Stage Dockerfiles
To keep images small, I use multi-stage builds. Here's the pattern for the NestJS backend:
```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
CMD ["node", "dist/src/main"]
```

For the frontend with Next.js (standalone output):
```dockerfile
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
EXPOSE 3000
CMD ["node", "server.js"]
```
Prisma in Docker
A few gotchas with Prisma in containers:
- Generate at build time: Run `prisma generate` during the Docker build
- Include migrations: Copy the entire `prisma/` directory, not just `schema.prisma`
- Run migrations on deploy: Execute `prisma migrate deploy` after containers start
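The first two points live in the builder stage of the backend Dockerfile. A sketch of how that stage might look with Prisma added (the exact COPY order is an assumption, not a verbatim excerpt from my Dockerfile):

```dockerfile
# Builder stage: install deps, generate the Prisma client, then build
COPY package*.json ./
RUN npm ci
COPY prisma ./prisma
RUN npx prisma generate
COPY . .
RUN npm run build
```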
In my deployment script:
```bash
# After containers start
docker compose exec -T api npx prisma migrate deploy
```

Nginx Reverse Proxy
The frontend container runs Nginx, which serves static files and proxies API requests:
```nginx
location /api/ {
    proxy_pass http://production-api:3000/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

This keeps the frontend and backend on the same origin, avoiding CORS issues. The trailing slash on the `proxy_pass` target also means Nginx strips the `/api/` prefix before forwarding, so the backend sees `/users` rather than `/api/users`.
Monitoring with Uptime Kuma
I run Uptime Kuma in a container for internal monitoring:
```yaml
uptime-kuma:
  image: louislam/uptime-kuma:1
  container_name: uptime-kuma
  restart: always
  volumes:
    - ./data/uptime-kuma-data:/app/data
    - /var/run/docker.sock:/var/run/docker.sock:ro
  ports:
    - "3001:3001"
  networks:
    - app-network
```

It monitors:
- Container health (via Docker socket)
- HTTP endpoints
- Script execution heartbeats
For external redundancy, I also use Healthchecks.io. If my server goes down completely, Healthchecks.io will still alert me.
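Under the hood a heartbeat is just an HTTP request fired when a job finishes, so Uptime Kuma push monitors and Healthchecks.io checks can share the same pattern. A sketch of the last line of a cron-driven script (the URL is a placeholder for the check's ping endpoint):

```bash
# Signal success; the monitor alerts if this ping stops arriving on schedule
curl -fsS --retry 3 --max-time 10 "https://hc-ping.com/<your-check-uuid>" > /dev/null
```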
Deployment Script
A simple bash script handles deployments:
```bash
#!/bin/bash
set -e

# Pull latest code
git pull origin main

# Deploy main stack
sudo docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build

# Run migrations
sleep 5
sudo docker compose exec -T api npx prisma migrate deploy

echo "✅ Deployment complete!"
```

Lessons Learned
Docker Gotchas
- TypeScript output paths: Files at the project root can affect the `tsc` output structure
- Build cache: Use `--no-cache` when debugging path issues
- Alpine compatibility: Some npm packages need extra build tools
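On the Alpine point: packages with native addons (anything that pulls in node-gyp) need a compiler toolchain that the Alpine base image doesn't ship. The usual fix in the builder stage looks roughly like this (the exact package list depends on your dependencies):

```dockerfile
# Toolchain for building native npm modules on Alpine
RUN apk add --no-cache python3 make g++
```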
Cloudflare Tunnel
- Old DNS records can conflict with tunnel configuration
- The tunnel container needs network access to your other containers
- Use Docker service names (like `production-web:80`) for internal routing
Monitoring
- Push-based monitoring works better for ephemeral containers
- Layer your monitoring: internal (Uptime Kuma) + external (Healthchecks.io)
- Discord webhooks are great for immediate failure alerts
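A Discord alert is just a single POST to a webhook URL. A sketch, assuming the webhook URL lives in a DISCORD_WEBHOOK_URL environment variable:

```bash
# Post a failure notice to a Discord channel via webhook
curl -sS -H "Content-Type: application/json" \
  -d '{"content": "🚨 Deployment failed on the portfolio server"}' \
  "$DISCORD_WEBHOOK_URL"
```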
Why Self-Host?
Compared to managed services:
- Cost: Single cheap server vs. multiple AWS services
- Control: Full access to logs, configuration, and data
- Learning: Hands-on experience with the infrastructure
- Privacy: My data stays on my hardware
The tradeoff is maintenance responsibility. But for a portfolio project, that's part of the value.
Final Result
- Frontend at https://geleta.ca
- Automated data updates for the Financial Insights Platform via cron scheduler
- Monitoring with alerts to Discord and Healthchecks.io (email)
- Full control over the stack