
Deployment

ShipQ generates self-contained Go projects that can be deployed anywhere Go runs. The shipq docker command generates production-ready Dockerfiles to make containerized deployment straightforward.

shipq docker

This generates production Dockerfiles for your application:

  • Dockerfile (or Dockerfile.server) — Multi-stage build for the HTTP server
  • Dockerfile.worker — Multi-stage build for the background worker (if you’ve set up shipq workers)

The generated Dockerfiles use multi-stage builds to keep the final image small: the first stage compiles your Go binary, and the second stage copies only the binary into a minimal base image.

# Build and run the server (single-Dockerfile setup)
docker build -t myapp-server .
docker run -p 8080:8080 \
  -e DATABASE_URL="postgres://user:pass@db-host:5432/myapp" \
  -e COOKIE_SECRET="your-secret-here" \
  myapp-server

# Build both images (server + worker setup)
docker build -t myapp-server -f Dockerfile.server .
docker build -t myapp-worker -f Dockerfile.worker .

# Run server
docker run -p 8080:8080 \
  -e DATABASE_URL="postgres://user:pass@db-host:5432/myapp" \
  -e COOKIE_SECRET="your-secret-here" \
  myapp-server

# Run worker
docker run \
  -e DATABASE_URL="postgres://user:pass@db-host:5432/myapp" \
  -e REDIS_URL="redis://redis:6379" \
  myapp-worker
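Long chains of -e flags are easy to typo and leave secrets in shell history. Docker's --env-file flag reads the same variables from a file instead; a small sketch (the prod.env file name is illustrative):

```shell
# Write the production variables to a file instead of passing -e flags.
# (prod.env is an illustrative name; keep this file out of version control.)
cat > prod.env <<'EOF'
DATABASE_URL=postgres://user:pass@db-host:5432/myapp
COOKIE_SECRET=your-secret-here
EOF

# Then run the container with:
#   docker run --env-file prod.env -p 8080:8080 myapp-server
```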

ShipQ-generated applications read configuration from environment variables in production. Here are the key variables you need to configure:

| Variable | Description |
| --- | --- |
| DATABASE_URL | Production database connection URL (Postgres, MySQL, or SQLite) |
| COOKIE_SECRET | HMAC secret used to sign session cookies (must be kept secret) |
OAuth (if using Google or GitHub login)

| Variable | Description |
| --- | --- |
| GOOGLE_CLIENT_ID | Google OAuth client ID |
| GOOGLE_CLIENT_SECRET | Google OAuth client secret |
| GOOGLE_REDIRECT_URL | Google OAuth callback URL (production URL) |
| GITHUB_CLIENT_ID | GitHub OAuth app client ID |
| GITHUB_CLIENT_SECRET | GitHub OAuth app client secret |
| GITHUB_REDIRECT_URL | GitHub OAuth callback URL (production URL) |
File Storage (if using S3 uploads)

| Variable | Description |
| --- | --- |
| S3_BUCKET | S3 bucket name |
| S3_REGION | AWS region (e.g., us-east-1) |
| S3_ENDPOINT | S3 endpoint URL (empty for AWS; set for MinIO, GCS, R2) |
| AWS_ACCESS_KEY_ID | AWS access key |
| AWS_SECRET_ACCESS_KEY | AWS secret key |

Workers & Channels (if shipq workers was used)

| Variable | Description |
| --- | --- |
| REDIS_URL | Redis connection URL for the job queue |
| CENTRIFUGO_URL | Centrifugo API URL |
| CENTRIFUGO_API_KEY | Centrifugo API key |
| CENTRIFUGO_SECRET | HMAC secret for Centrifugo connection JWTs (separate from the session cookie secret) |

If you’ve declared additional required environment variables in shipq.ini:

[env]
STRIPE_SECRET_KEY = required
SENDGRID_API_KEY = required

ShipQ’s generated config loader validates these at startup. The application will refuse to start if any required variable is missing, giving you a clear error message instead of a silent runtime failure.
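The same fail-fast check is easy to replicate in a deploy script before a container ever starts. A minimal sketch (the check_env helper below is hypothetical, not part of ShipQ's output; the generated Go loader performs the equivalent at startup):

```shell
# check_env: return non-zero if any named variable is unset or empty.
# Hypothetical pre-deploy helper mirroring the generated loader's behavior.
check_env() {
  missing=""
  for var in "$@"; do
    eval "val=\${$var:-}"          # indirect lookup of each variable name
    if [ -z "$val" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing required env:$missing" >&2
    return 1
  fi
  return 0
}

# Example: check_env DATABASE_URL COOKIE_SECRET STRIPE_SECRET_KEY
```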

Postgres is the most battle-tested database for ShipQ applications:

DATABASE_URL="postgres://user:password@db-host:5432/myapp?sslmode=require"

Use sslmode=require (or sslmode=verify-full) for production connections.
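A deploy script can also guard against a URL that forgot the parameter. A small sketch (the URL below is the example value from above, not real credentials):

```shell
# Append sslmode=require to a Postgres URL that does not already set it.
url="postgres://user:pass@db-host:5432/myapp"
case "$url" in
  *sslmode=*) ;;                          # already specified, leave as-is
  *\?*) url="${url}&sslmode=require" ;;   # URL already has a query string
  *)    url="${url}?sslmode=require" ;;   # no query string yet
esac
echo "$url"   # → postgres://user:pass@db-host:5432/myapp?sslmode=require
```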

DATABASE_URL="mysql://user:password@tcp(db-host:3306)/myapp?tls=true"

SQLite works well for single-instance deployments or edge computing scenarios:

DATABASE_URL="sqlite:///data/myapp.db"
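Note that the /data/myapp.db path is inside the container, so in a Docker deployment the database file disappears with the container unless you mount a volume there. A sketch (the host ./data directory is illustrative):

```shell
# Create a host directory to hold the SQLite file, then mount it at /data
# so the database survives container restarts.
mkdir -p ./data

# docker run -p 8080:8080 \
#   -v "$(pwd)/data:/data" \
#   -e DATABASE_URL="sqlite:///data/myapp.db" \
#   myapp-server
```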

ShipQ-generated servers behave differently in production:

| Feature | Dev/Test | Production |
| --- | --- | --- |
| GET /openapi | ✅ Serves OpenAPI 3.1 JSON spec | ❌ Disabled |
| GET /docs | ✅ Interactive API docs UI | ❌ Disabled |
| Admin UI | ✅ Available | ❌ Disabled |
| Error details | ✅ Verbose error messages | ❌ Generic error responses |

Which environment is active is typically determined by a GO_ENV (or equivalent) environment variable; check your generated cmd/server/main.go for the exact mechanism.

For local development that mirrors production, you can use Docker Compose:

version: "3.8"
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  server:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: "postgres://myapp:secret@db:5432/myapp?sslmode=disable"
      COOKIE_SECRET: "dev-secret-change-in-production"
      REDIS_URL: "redis://redis:6379"
    depends_on:
      - db
      - redis
  worker:
    build:
      context: .
      dockerfile: Dockerfile.worker
    environment:
      DATABASE_URL: "postgres://myapp:secret@db:5432/myapp?sslmode=disable"
      REDIS_URL: "redis://redis:6379"
    depends_on:
      - db
      - redis

volumes:
  pgdata:

If you use Nix, ShipQ can generate a shell.nix for reproducible builds:

shipq nix

This pins your development environment to a specific nixpkgs revision, ensuring that all team members and CI systems use the exact same toolchain versions.
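The generated file has roughly the following shape (this is an illustrative sketch only; the actual pin and package list come from shipq nix, and the revision below is a placeholder):

```nix
# Illustrative shape only; `shipq nix` decides the actual pin and packages.
let
  pkgs = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<pinned-revision>.tar.gz") {};
in
pkgs.mkShell {
  buildInputs = [ pkgs.go ];
}
```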

Before deploying to production, make sure you have:

  • All tests passing: go test ./... -v
  • DATABASE_URL pointing to your production database
  • COOKIE_SECRET set to a strong, random value (not the dev default)
  • OAuth redirect URLs updated to production domains (if using OAuth)
  • S3 credentials configured (if using file uploads)
  • Redis and Centrifugo accessible (if using workers/channels)
  • All [env] required variables present
  • TLS/SSL enabled for database connections
  • A reverse proxy (nginx, Caddy, or a cloud load balancer) in front of the Go server
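For the COOKIE_SECRET item above, one common way to produce a strong random value (assuming openssl is available on your machine):

```shell
# 48 random bytes, base64-encoded: a suitable value for COOKIE_SECRET.
openssl rand -base64 48
```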

Since ShipQ generates self-contained Go projects, your CI/CD pipeline doesn’t need ShipQ installed. A typical pipeline looks like:

# Install Go dependencies
go mod download

# Run tests (needs a test database)
go test ./... -v

# Build the binary
go build -o server ./cmd/server

# Build the Docker image
docker build -t myapp-server .

# Push it (in practice the tag includes your registry,
# e.g. registry.example.com/myapp-server:latest)
docker push myapp-server:latest
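In GitHub Actions, that sequence might look like the following sketch (the workflow name, action versions, and Go version are illustrative, not ShipQ output):

```yaml
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - run: go mod download
      - run: go test ./... -v
      - run: go build -o server ./cmd/server
      - run: docker build -t myapp-server .
```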

The key insight is that ShipQ is a development-time tool. Once your code is generated and committed, the build and deployment pipeline only needs Go and Docker.