
File Uploads

ShipQ includes a managed file upload subsystem that generates everything you need for S3-compatible file storage — migrations, handlers, query definitions, tests, and TypeScript helpers.

Before generating the file upload system, you need the authentication system in place: shipq files requires shipq auth.

shipq files

This single command generates the entire file upload subsystem:

  • Migrations: managed_files and file_access tables
  • Handlers: upload, download, and access control endpoints in api/managed_files/
  • Query definitions: file operation queries in querydefs/
  • Tests: endpoint tests in api/managed_files/spec/
  • TypeScript helpers: shipq-files.ts with upload/download utilities

After generating, apply the migrations and compile:

shipq migrate up
go mod tidy
shipq handler compile

The file upload system reads its configuration from environment variables — credentials are never stored in shipq.ini.

  • S3_BUCKET (required): the S3 bucket name for file storage
  • S3_REGION (required): the AWS region (e.g., us-east-1)
  • S3_ENDPOINT (optional): custom S3 endpoint URL; leave empty for AWS S3, set for MinIO, R2, or GCS
  • AWS_ACCESS_KEY_ID (required): AWS (or S3-compatible) access key
  • AWS_SECRET_ACCESS_KEY (required): AWS (or S3-compatible) secret key

For AWS S3:

export S3_BUCKET="myapp-uploads"
export S3_REGION="us-east-1"
export S3_ENDPOINT=""
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."

For local MinIO development:

export S3_BUCKET="myapp-uploads"
export S3_REGION="us-east-1"
export S3_ENDPOINT="http://localhost:9000"
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin"
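
A minimal sketch of how this environment-driven configuration might be loaded and validated. The S3Config struct and LoadS3Config helper here are illustrative, not ShipQ's actual internal API:

```go
package main

import (
	"errors"
	"fmt"
)

// S3Config mirrors the environment variables the file upload system reads.
// This struct and its loader are illustrative; ShipQ's internal types may differ.
type S3Config struct {
	Bucket    string // S3_BUCKET (required)
	Region    string // S3_REGION (required)
	Endpoint  string // S3_ENDPOINT (optional; empty means real AWS S3)
	AccessKey string // AWS_ACCESS_KEY_ID (required)
	SecretKey string // AWS_SECRET_ACCESS_KEY (required)
}

// LoadS3Config reads configuration via the given lookup function
// (pass os.Getenv in production) and rejects missing required values.
func LoadS3Config(getenv func(string) string) (S3Config, error) {
	cfg := S3Config{
		Bucket:    getenv("S3_BUCKET"),
		Region:    getenv("S3_REGION"),
		Endpoint:  getenv("S3_ENDPOINT"), // optional, may be empty
		AccessKey: getenv("AWS_ACCESS_KEY_ID"),
		SecretKey: getenv("AWS_SECRET_ACCESS_KEY"),
	}
	for name, val := range map[string]string{
		"S3_BUCKET":             cfg.Bucket,
		"S3_REGION":             cfg.Region,
		"AWS_ACCESS_KEY_ID":     cfg.AccessKey,
		"AWS_SECRET_ACCESS_KEY": cfg.SecretKey,
	} {
		if val == "" {
			return S3Config{}, errors.New("missing required environment variable: " + name)
		}
	}
	return cfg, nil
}

func main() {
	env := map[string]string{
		"S3_BUCKET":             "myapp-uploads",
		"S3_REGION":             "us-east-1",
		"AWS_ACCESS_KEY_ID":     "AKIA-example",
		"AWS_SECRET_ACCESS_KEY": "secret-example",
	}
	cfg, err := LoadS3Config(func(k string) string { return env[k] })
	fmt.Println(cfg.Bucket, cfg.Region, err == nil)
}
```

Taking a lookup function instead of calling os.Getenv directly keeps the loader testable without mutating process state.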

ShipQ can start a local MinIO server for development:

shipq start minio

This requires minio to be available on your $PATH. MinIO provides an S3-compatible API at http://localhost:9000 and a web console at http://localhost:9001.
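
The PATH requirement can be checked up front with the standard library, roughly the way a CLI command like shipq start minio would need to before spawning the server (the requireOnPath helper is illustrative, not ShipQ code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// requireOnPath reports where a binary lives on $PATH, or an error if
// it cannot be found. Illustrative of the kind of preflight check a
// command such as `shipq start minio` would perform.
func requireOnPath(binary string) (string, error) {
	path, err := exec.LookPath(binary)
	if err != nil {
		return "", fmt.Errorf("%s not found on $PATH: %w", binary, err)
	}
	return path, nil
}

func main() {
	if path, err := requireOnPath("minio"); err != nil {
		fmt.Println("install MinIO first:", err)
	} else {
		fmt.Println("found minio at", path)
	}
}
```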

After shipq files and shipq handler compile, you get the following endpoints:

  • POST /managed_files: upload a file (multipart form data)
  • GET /managed_files/:id: get file metadata
  • GET /managed_files/:id/download: download a file (presigned URL or direct)
  • DELETE /managed_files/:id: delete a file (soft delete)

All file endpoints require authentication (since shipq files requires shipq auth).
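
From a Go client, an upload request to POST /managed_files can be assembled with the standard library's multipart writer. The "file" form field name and the Bearer auth scheme here are assumptions for illustration, not ShipQ's documented contract:

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
)

// newUploadRequest builds a POST /managed_files request carrying the file
// as multipart form data, with a bearer token for authentication.
// Field name and auth scheme are illustrative assumptions.
func newUploadRequest(baseURL, token, filename string, contents []byte) (*http.Request, error) {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", filename)
	if err != nil {
		return nil, err
	}
	if _, err := part.Write(contents); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil { // finalizes the multipart boundary
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/managed_files", &body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())
	req.Header.Set("Authorization", "Bearer "+token)
	return req, nil
}

func main() {
	req, err := newUploadRequest("http://localhost:8080", "my-token", "photo.png", []byte("..."))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path, req.Header.Get("Content-Type")[:19])
}
```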

The file upload system creates two tables:

The managed_files table stores file metadata:

  • id, public_id, created_at, updated_at, deleted_at (standard ShipQ columns)
  • filename — the original filename
  • content_type — MIME type (e.g., image/png, application/pdf)
  • size — file size in bytes
  • storage_key — the S3 object key
  • account_id — reference to the uploading user’s account
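
These columns map naturally onto a Go struct. The field names below are an illustrative mirror of the schema, not the generated model:

```go
package main

import (
	"fmt"
	"time"
)

// ManagedFile mirrors the managed_files columns described above.
// Illustrative only; the generated model may use different names.
type ManagedFile struct {
	ID          int64
	PublicID    string
	CreatedAt   time.Time
	UpdatedAt   time.Time
	DeletedAt   *time.Time // nil until soft-deleted
	Filename    string     // original filename
	ContentType string     // MIME type, e.g. image/png
	Size        int64      // file size in bytes
	StorageKey  string     // the S3 object key
	AccountID   int64      // uploading user's account
}

func main() {
	f := ManagedFile{Filename: "report.pdf", ContentType: "application/pdf", Size: 1024}
	fmt.Printf("%s (%s, %d bytes) deleted=%v\n", f.Filename, f.ContentType, f.Size, f.DeletedAt != nil)
}
```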

The file_access table tracks access control for shared files, enabling fine-grained permission management.

If you have multi-tenancy configured with [db] scope = organization_id, the file upload system respects tenancy boundaries automatically:

  • Files are scoped to the uploading user’s organization
  • Users in one organization cannot access files uploaded by another organization
  • Generated tenancy tests verify this isolation
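
Conceptually, scoping amounts to appending an organization predicate to every file query. A minimal sketch, with an SQL shape that is illustrative rather than ShipQ's generated querydefs:

```go
package main

import "fmt"

// scopeToOrg appends an organization_id predicate to a query, the kind
// of filter tenancy-scoped file lookups apply automatically. The SQL
// shape here is an illustration, not ShipQ's generated code.
func scopeToOrg(baseQuery string, orgID int64) (string, []any) {
	return baseQuery + " AND organization_id = ?", []any{orgID}
}

func main() {
	q, args := scopeToOrg("SELECT * FROM managed_files WHERE public_id = ?", 42)
	fmt.Println(q, args)
}
```

Because the predicate is added by the query layer rather than by each handler, no individual endpoint can forget to enforce the boundary.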

The generated shipq-files.ts provides typed upload and download helpers for your frontend:

import { uploadFile, downloadFile, getFileMetadata } from './shipq-files';

// Upload a file
const input = document.getElementById('file-input') as HTMLInputElement;
const file = input.files![0];
const result = await uploadFile(file, { token: authToken });

// Get file metadata
const metadata = await getFileMetadata(result.id, { token: authToken });

// Download a file
const blob = await downloadFile(result.id, { token: authToken });

The TypeScript helpers handle multipart form encoding, authentication headers, and error handling for you.

ShipQ generates tests for the file upload system that cover:

  • Upload flow: uploading files with valid auth, verifying metadata is stored correctly
  • Download flow: retrieving uploaded files
  • Auth enforcement: verifying 401 for unauthenticated requests
  • Tenancy isolation: verifying users can’t access files across organizations (when scoping is enabled)

Run the file upload tests:

go test ./api/managed_files/spec/... -v -count=1

Or as part of your full test suite:

go test ./... -v

Under the hood, ShipQ uses the AWS SDK v2 for Go (github.com/aws/aws-sdk-go-v2) to communicate with S3-compatible object stores. The filestorage package provides:

  • Presigned URL generation for secure uploads and downloads
  • Direct upload/download as a fallback
  • Content-type detection from file extensions
  • Storage key generation using unique identifiers to prevent collisions

A few recommendations:

  • Use MinIO for local development — it’s lightweight and fully S3-compatible. Start it with shipq start minio.
  • Never commit S3 credentials — use environment variables or a secrets manager.
  • Set S3_ENDPOINT correctly — leave it empty for real AWS S3; set it for MinIO, R2, or GCS.
  • Use presigned URLs for large files — they offload transfer to the object store and reduce load on your server.
  • Add file type validation in your handler code — the generated handlers accept any file type. You can customize them to restrict uploads to specific MIME types or file extensions.
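
One way such a restriction might look, using the standard library's mime package to detect a content type from the extension (the allow-list and the validateUpload helper are illustrative, not generated code):

```go
package main

import (
	"fmt"
	"mime"
	"path/filepath"
	"strings"
)

// allowedTypes is an example allow-list; adjust to your application.
var allowedTypes = map[string]bool{
	"image/png":       true,
	"image/jpeg":      true,
	"application/pdf": true,
}

// validateUpload rejects files whose MIME type, detected from the file
// extension, is not on the allow-list. Illustrative of the kind of
// check you could add to the generated upload handler.
func validateUpload(filename string) error {
	ext := strings.ToLower(filepath.Ext(filename))
	contentType := mime.TypeByExtension(ext)
	// Strip optional parameters such as "; charset=utf-8".
	if i := strings.Index(contentType, ";"); i >= 0 {
		contentType = strings.TrimSpace(contentType[:i])
	}
	if !allowedTypes[contentType] {
		return fmt.Errorf("file type %q (%s) not allowed", ext, contentType)
	}
	return nil
}

func main() {
	fmt.Println(validateUpload("photo.png") == nil)  // true
	fmt.Println(validateUpload("script.sh") != nil)  // true
}
```

Validating by extension is cheap but spoofable; for stricter checks you can also sniff the first bytes of the uploaded content before accepting it.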