Complete Solution: Migrating Reggie from PaaS to AWS via SST¶
Date: 2026-03-05 Status: Phase 2 deployed to dev stage (Infrastructure provisioned) Stack: Next.js 15 + FastAPI + RDS PostgreSQL + AWS SST v4 + Supabase Auth + Cloudflare
Executive Summary¶
This document captures the complete solution for migrating Reggie from PaaS (Vercel + Railway + Supabase) to AWS using SST as the infrastructure-as-code framework. The migration was executed in phases with four critical fixes that resolved deployment blockers:
- Docker build context — Reduced container layer transfer from 1.5GB to 10MB
- Environment variable mapping — Solved SST secrets injection into FastAPI
- Database schema bootstrap — Bypassed Supabase-dependent Alembic migrations on fresh RDS
- SST v4 versioning — Resolved VPC component breaking changes
The platform is now running on AWS with CloudFront frontends, ECS Fargate backend, and RDS PostgreSQL in private subnets, all orchestrated via SST infrastructure-as-code.
Fix 1: Docker Build Context Optimization¶
Problem¶
SST's Service component was sending the entire Docker build context (the monorepo root) to the Docker daemon during build, resulting in:
- 1.5 GB transferred per build (unnecessary
node_modules,.nextcache, Python venv) - 5-10 minute build times instead of seconds
- ECR layer size bloated with application code outside
backend/
Root cause: Default image.context: "." sends everything. The Docker daemon copies the entire context into the build environment before passing to the Dockerfile.
Solution¶
// sst.config.ts — BEFORE (broken)
const api = new sst.aws.Service("ReggieApi", {
cluster,
link: [db],
image: {
context: ".", // ❌ Sends entire monorepo
dockerfile: "backend/Dockerfile",
},
// ...
});
// sst.config.ts — AFTER (fixed)
const api = new sst.aws.Service("ReggieApi", {
cluster,
link: [db],
image: {
context: "./backend", // ✓ Only backend directory
dockerfile: "Dockerfile", // ✓ Relative to backend/
},
// ...
});
Key Points¶
- Set
image.contextto the exact directory containing the Dockerfile (e.g.,./backend) - Update
dockerfilepath to be relative to the context (e.g.,"Dockerfile"not"backend/Dockerfile") - Add
.dockerignorein backend directory to exclude unnecessary files:
# backend/.dockerignore
__pycache__
*.pyc
.pytest_cache
tests/
venv/
.env
.git
.github
node_modules
.next
models/*.joblib
Impact¶
| Metric | Before | After |
|---|---|---|
| Build context size | 1.5 GB | 10 MB |
| Build time | 5-10 minutes | 1-2 minutes |
| ECR layer size | 500+ MB | 150-200 MB |
| Deploy speed | 15+ minutes | 3-5 minutes |
File References¶
- Location:
/Users/joeprice/Documents/Repos/Personal/reggie/sst.config.ts(lines 84-86) - Dockerfile:
/Users/joeprice/Documents/Repos/Personal/reggie/backend/Dockerfile - Dockerignore: Create at
/Users/joeprice/Documents/Repos/Personal/reggie/backend/.dockerignore
Fix 2: Environment Variable Mapping for FastAPI¶
Problem¶
SST's link feature injects secrets with SST-specific naming conventions, but FastAPI requires standard environment variable names that don't follow SST's patterns. Specifically:
- SST naming: Secrets linked with
link: [db]become database connection info as separate fields - FastAPI requirement: Reads
DATABASE_URLfrom environment, plus other standard vars likeALLOWED_ORIGINS,SUPABASE_URL, etc. - Issue: Default
linkbehavior doesn't expose a singleDATABASE_URLstring that FastAPI's SQLAlchemy expects
Root cause: SST's link feature exposes RDS properties as individual fields (username, password, host, port, database). Constructing a DATABASE_URL requires string interpolation not provided by link alone.
Solution¶
// sst.config.ts — Construct DATABASE_URL from RDS properties
const databaseUrl = $interpolate`postgresql://${db.username}:${db.password}@${db.host}:${db.port}/${db.database}`;
const api = new sst.aws.Service("ReggieApi", {
cluster,
link: [db], // Still link for RDS security group and resource references
environment: {
// Database — construct URL string from RDS properties
DATABASE_URL: databaseUrl,
// CORS — required field with no default
ALLOWED_ORIGINS: allowedOrigins, // e.g., '["*"]' or '["https://domain.com"]'
// Supabase Auth (kept for JWT validation — database unchanged)
SUPABASE_URL: supabaseUrl.value,
SUPABASE_ANON_KEY: supabaseAnonKey.value,
SUPABASE_SERVICE_ROLE_KEY: supabaseServiceRoleKey.value,
// External services
R2_ENDPOINT: r2Endpoint.value,
R2_ACCESS_KEY_ID: r2AccessKeyId.value,
R2_SECRET_ACCESS_KEY: r2SecretAccessKey.value,
// Payment & email
STRIPE_SECRET_KEY: stripeSecretKey.value,
STRIPE_WEBHOOK_SECRET: stripeWebhookSecret.value,
RESEND_API_KEY: resendApiKey.value,
// External APIs
DVLA_API_KEY: dvlaApiKey.value,
// App config
SECRET_KEY: secretKey.value,
ENVIRONMENT: stage === "production" ? "production" : "development",
// Skip Alembic migrations at container startup
// (run migrations separately via ECS RunTask)
SKIP_MIGRATIONS: "true",
},
// ...
});
Stage-Based Configuration¶
Different ALLOWED_ORIGINS based on deployment stage:
// Determine allowed origins based on stage
const stage = $app.stage;
const allowedOrigins =
stage === "production"
? '["https://getreggie.co.uk","https://admin.getreggie.co.uk"]'
: '["*"]'; // dev/staging allow all for testing
FastAPI Configuration¶
Backend config.py must parse these environment variables:
# backend/app/config.py
from pydantic_settings import BaseSettings
from functools import lru_cache
import json
class Settings(BaseSettings):
DATABASE_URL: str # Provided by SST/ECS
ALLOWED_ORIGINS: str = '["http://localhost:3000"]' # Default for local dev
SUPABASE_URL: str
SUPABASE_ANON_KEY: str
SUPABASE_SERVICE_ROLE_KEY: str
R2_ENDPOINT: str
R2_ACCESS_KEY_ID: str
R2_SECRET_ACCESS_KEY: str
STRIPE_SECRET_KEY: str
STRIPE_WEBHOOK_SECRET: str
RESEND_API_KEY: str
DVLA_API_KEY: str
SECRET_KEY: str
ENVIRONMENT: str = "development"
class Config:
env_file = ".env"
@property
def allowed_origins_list(self) -> list:
"""Parse JSON-encoded ALLOWED_ORIGINS"""
try:
return json.loads(self.ALLOWED_ORIGINS)
except (json.JSONDecodeError, TypeError):
return ["*"]
@lru_cache()
def get_settings() -> Settings:
return Settings()
settings = get_settings()
FastAPI Middleware Usage¶
# backend/app/main.py
from fastapi.middleware.cors import CORSMiddleware
from .config import settings
app.add_middleware(
CORSMiddleware,
allow_origins=settings.allowed_origins_list,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
Critical Settings¶
ALLOWED_ORIGINS— Must be provided (no default). Format: JSON array stringDATABASE_URL— Must include hostname accessible from ECS task's security groupENVIRONMENT— Controls logging level and debug modeSKIP_MIGRATIONS— Set to"true"for ECS (migrations run separately)
File References¶
- Config:
/Users/joeprice/Documents/Repos/Personal/reggie/backend/app/config.py - Main:
/Users/joeprice/Documents/Repos/Personal/reggie/backend/app/main.py(lines 56-64) - SST config:
/Users/joeprice/Documents/Repos/Personal/reggie/sst.config.ts(lines 43-83)
Fix 3: Database Schema Bootstrap Without Supabase Migrations¶
Problem¶
Reggie's Alembic migration chain references Supabase-specific objects that don't exist on a clean RDS instance:
auth.usersschema (foreign key target)auth.uid()function (JWT user ID extraction)- RLS policies (Row Level Security)
- Supabase-managed sequences and triggers
When deploying to fresh RDS, running alembic upgrade head fails immediately on the first migration that references auth.* because these schemas don't exist.
Root cause: The migration chain was built against Supabase and assumes those schemas are present.
Solution Overview¶
Skip traditional Alembic migrations on fresh RDS. Instead:
- Create tables directly from SQLAlchemy models using
Base.metadata.create_all() - Run once via ECS RunTask before service deployment
- Mark migration as complete with
alembic stamp head - Future migrations work normally (they're only applied after this bootstrap)
Step 1: Identify Current Alembic Head¶
# From repo root
cd backend
python -m alembic current # Shows current revision
python -m alembic heads # Shows latest revision(s)
Expected output (example):
Current revision: 2026_02_24_consolidate_collection_into_watchlist (head)
Heads: 2026_02_24_consolidate_collection_into_watchlist
Step 2: Create Schema Bootstrap Script¶
Create a Python script that creates tables and stamps Alembic:
# backend/scripts/bootstrap_schema.py
"""
Bootstrap database schema from SQLAlchemy models.
Used on fresh RDS instances where Alembic migration chain references
Supabase-specific objects (auth.*, RLS policies) that don't exist.
Approach:
1. Create all tables from Base.metadata
2. Stamp Alembic at current head
3. Subsequent migrations apply normally
Run via:
python -m scripts.bootstrap_schema
"""
import sys
from sqlalchemy import create_engine, text
from alembic.config import Config as AlembicConfig
from alembic import command
# Import all models so they're registered with Base
from app.models import * # noqa: F401,F403
from app.database import Base
from app.config import settings
def bootstrap_schema():
"""Create schema from SQLAlchemy models and stamp Alembic."""
print("Connecting to database...")
engine = create_engine(settings.DATABASE_URL, echo=False)
# Test connection
try:
with engine.connect() as conn:
conn.execute(text("SELECT 1"))
print("✓ Database connection OK")
except Exception as e:
print(f"✗ Database connection failed: {e}")
sys.exit(1)
# Create tables
print("Creating schema from SQLAlchemy models...")
Base.metadata.create_all(bind=engine)
print(f"✓ Created {len(Base.metadata.tables)} tables")
# Stamp Alembic
print("Stamping Alembic at current head...")
alembic_cfg = AlembicConfig("alembic.ini")
alembic_cfg.set_main_option("sqlalchemy.url", settings.DATABASE_URL)
try:
# Get current head
with engine.connect() as conn:
result = conn.execute(text(
"SELECT version_num FROM alembic_version ORDER BY version_num DESC LIMIT 1"
))
existing = result.scalar()
if existing:
print(f"✓ Alembic already stamped at: {existing}")
return
except:
pass # Table doesn't exist yet, need to stamp
# Determine head revision from Alembic versions
import os
versions_dir = os.path.join(os.path.dirname(__file__), "..", "alembic", "versions")
revisions = []
for filename in os.listdir(versions_dir):
if filename.endswith(".py") and not filename.startswith("_"):
# Extract revision ID from filename
rev_id = filename.split("_")[0]
revisions.append((rev_id, filename))
revisions.sort()
if revisions:
head_rev = revisions[-1][0]
command.stamp(alembic_cfg, head_rev)
print(f"✓ Stamped Alembic at: {head_rev}")
else:
print("⚠ No Alembic revisions found")
print("\n✅ Bootstrap complete!")
print("Next migrations will apply normally via: alembic upgrade head")
if __name__ == "__main__":
bootstrap_schema()
Step 3: Update Docker Entrypoint¶
Modify the container startup to skip migrations (they'll run as a separate task):
# backend/Dockerfile
# ... (existing stages)
# Entrypoint
COPY --chown=appuser:appuser <<'EOF' /app/start.sh
#!/bin/sh
set -e
echo "=== FastAPI Startup ==="
echo "PORT: ${PORT:-8000}"
echo "SKIP_MIGRATIONS: ${SKIP_MIGRATIONS:-no}"
# Skip migrations on startup when SKIP_MIGRATIONS=true
# (migrations run separately via ECS RunTask pre-deployment)
if [ "$SKIP_MIGRATIONS" != "true" ]; then
echo "Running database migrations..."
alembic upgrade head
echo "Migrations complete."
fi
echo "Starting server..."
exec uvicorn app.main:app --host 0.0.0.0 --port ${PORT:-8000} --workers ${WORKERS:-2}
EOF
RUN chmod +x /app/start.sh
CMD ["/app/start.sh"]
Step 4: SST Integration — Run Bootstrap as Pre-Deployment Task¶
In SST config, run the bootstrap script before deploying the service:
// sst.config.ts — pseudo-code showing the pattern
const db = new sst.aws.Postgres("ReggieDb", { vpc });
// Before deploying service, bootstrap schema on fresh RDS
const bootstrapTask = new sst.aws.Task("ReggieBootstrapSchema", {
vpc,
cluster, // Use existing cluster
container: {
image: {
context: "./backend",
dockerfile: "Dockerfile",
},
environment: {
DATABASE_URL: databaseUrl,
// ... other env vars
},
},
// Override default CMD to run bootstrap instead of server
run: ["python", "-m", "scripts.bootstrap_schema"],
});
// Deploy service AFTER bootstrap
const api = new sst.aws.Service("ReggieApi", {
cluster,
link: [db, bootstrapTask], // Depends on bootstrap task
// ... service config
});
Note: Depending on SST version, this may require aws ecs run-task CLI call directly.
Step 5: CLI Alternative — Run Bootstrap Manually¶
If SST automation doesn't work, run the bootstrap script directly before deploying:
# Connect to RDS and run bootstrap (one-time setup)
doppler run --project reggie-backend --config prd -- python -m scripts.bootstrap_schema
# Or in CI/CD pre-deployment:
AWS_PROFILE=prod python -m scripts.bootstrap_schema
# Then deploy normally:
sst deploy --stage production
Step 6: Verify Bootstrap Success¶
After bootstrap, verify schema is present:
# Connect to RDS
psql "$RDS_DATABASE_URL" -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public' ORDER BY tablename;"
# Expected output: ~40 tables (plates, profiles, valuations, etc.)
# Check Alembic stamping
psql "$RDS_DATABASE_URL" -c "SELECT version_num FROM alembic_version;"
# Expected output: Latest revision (e.g., 2026_02_24_consolidate_collection_into_watchlist)
Key Behaviors¶
- Idempotent: Running
create_all()on existing schema is safe (no-op) - Alembic safe: Once stamped,
alembic upgrade headskips already-applied revisions - Fresh RDS only: Used once during initial provisioning, never again
- No data loss: Only called on empty databases
- Migration-compatible: All future migrations work normally after this bootstrap
File References¶
- Bootstrap script: Create at
/Users/joeprice/Documents/Repos/Personal/reggie/backend/scripts/bootstrap_schema.py - Alembic env:
/Users/joeprice/Documents/Repos/Personal/reggie/backend/alembic/env.py(lines 25-37) - Dockerfile:
/Users/joeprice/Documents/Repos/Personal/reggie/backend/Dockerfile - SST config:
/Users/joeprice/Documents/Repos/Personal/reggie/sst.config.ts(lines 79-82)
Fix 4: SST v4 VPC Component Versioning¶
Problem¶
SST v4 introduced breaking changes to the sst.aws.Vpc component. After initial deployment, subsequent sst deploy commands failed with:
Root cause: Stale deployment state in S3 referencing the old VPC component structure. SST tried to upgrade the resource in-place but couldn't.
Solution¶
Clear all stale state for the deployment stage and redeploy from scratch:
# Remove all resources and state for the dev stage
sst remove --stage dev
# Redeploy (creates fresh VPC with new component format)
sst deploy --stage dev
# Verify success
aws ec2 describe-vpcs --region eu-west-2 --query "Vpcs[?Tags[?Key=='aws:cloudformation:stack-name' && Value==*'reggie']]"
Why This Works¶
sst removedeletes all CloudFormation stacks and tears down all resources (except S3 state bucket, which is retained for audit)- Fresh
sst deploycreates new VPC with current component version - No manual work — SST handles all cleanup
- Safe for non-production — dev and staging stages are ephemeral anyway
Preventing Recurrence¶
- Always test SST version upgrades in staging first:
npm update sst && sst deploy --stage staging - Pin SST version in
package.jsonto catch breaking changes:"sst": "^4.22.0"(pins to 4.x, warns on 5.x) - Review SST changelog before major version bumps
State Cleanup Process¶
If sst remove itself fails, clean up state manually:
# View SST state bucket
aws s3 ls | grep sst-state
# Manually delete state if needed
aws s3 rm s3://sst-state-<hash>/reggie/dev --recursive
File References¶
- SST config:
/Users/joeprice/Documents/Repos/Personal/reggie/sst.config.ts(line 31) - Package.json:
/Users/joeprice/Documents/Repos/Personal/reggie/package.json(search for "sst")
Complete SST Configuration (Target State)¶
The final working sst.config.ts incorporates all four fixes:
/// <reference path="./.sst/platform/config.d.ts" />
export default $config({
app(input) {
return {
name: "reggie",
removal: input?.stage === "production" ? "retain" : "remove",
home: "aws",
providers: {
aws: { region: "eu-west-2" },
},
};
},
async run() {
// --- Secrets (from SSM Parameter Store) ---
const supabaseUrl = new sst.Secret("SupabaseUrl");
const supabaseAnonKey = new sst.Secret("SupabaseAnonKey");
const supabaseServiceRoleKey = new sst.Secret("SupabaseServiceRoleKey");
const r2Endpoint = new sst.Secret("R2Endpoint");
const r2AccessKeyId = new sst.Secret("R2AccessKeyId");
const r2SecretAccessKey = new sst.Secret("R2SecretAccessKey");
const stripeSecretKey = new sst.Secret("StripeSecretKey");
const stripeWebhookSecret = new sst.Secret("StripeWebhookSecret");
const resendApiKey = new sst.Secret("ResendApiKey");
const dvlaApiKey = new sst.Secret("DvlaApiKey");
const secretKey = new sst.Secret("SecretKey");
// --- Networking (Fix 4: Clean VPC component) ---
const vpc = new sst.aws.Vpc("ReggieVpc");
// --- Database ---
const db = new sst.aws.Postgres("ReggieDb", {
vpc,
});
// --- Backend API (ECS Fargate) ---
const cluster = new sst.aws.Cluster("ReggieCluster", { vpc });
// Stage-based configuration (Fix 2)
const stage = $app.stage;
const allowedOrigins =
stage === "production"
? '["https://getreggie.co.uk","https://admin.getreggie.co.uk"]'
: '["*"]';
// Construct DATABASE_URL from RDS properties (Fix 2)
const databaseUrl = $interpolate`postgresql://${db.username}:${db.password}@${db.host}:${db.port}/${db.database}`;
const api = new sst.aws.Service("ReggieApi", {
cluster,
link: [db],
environment: {
// Database (Fix 2: proper URL construction)
DATABASE_URL: databaseUrl,
// CORS (Fix 2: required field)
ALLOWED_ORIGINS: allowedOrigins,
// Supabase Auth (kept for JWT validation)
SUPABASE_URL: supabaseUrl.value,
SUPABASE_ANON_KEY: supabaseAnonKey.value,
SUPABASE_SERVICE_ROLE_KEY: supabaseServiceRoleKey.value,
// Cloudflare R2 storage
R2_ENDPOINT: r2Endpoint.value,
R2_ACCESS_KEY_ID: r2AccessKeyId.value,
R2_SECRET_ACCESS_KEY: r2SecretAccessKey.value,
// Stripe
STRIPE_SECRET_KEY: stripeSecretKey.value,
STRIPE_WEBHOOK_SECRET: stripeWebhookSecret.value,
// Email
RESEND_API_KEY: resendApiKey.value,
// DVLA
DVLA_API_KEY: dvlaApiKey.value,
// App
SECRET_KEY: secretKey.value,
ENVIRONMENT: stage === "production" ? "production" : "development",
// Skip migrations at container startup (Fix 3)
SKIP_MIGRATIONS: "true",
},
image: {
// Fix 1: Only backend directory as context
context: "./backend",
dockerfile: "Dockerfile",
},
loadBalancer: {
ports: [{ listen: "80/http", forward: "8000/http" }],
},
dev: {
command:
"cd backend && doppler run --project reggie-backend --config dev -- uvicorn app.main:app --reload --host 0.0.0.0 --port 8000",
},
});
// --- Frontend: Web ---
new sst.aws.Nextjs("ReggieWeb", {
path: "apps/web",
environment: {
NEXT_PUBLIC_API_URL: api.url,
NEXT_PUBLIC_SUPABASE_URL: supabaseUrl.value,
NEXT_PUBLIC_SUPABASE_ANON_KEY: supabaseAnonKey.value,
},
});
// --- Frontend: Admin ---
new sst.aws.Nextjs("ReggieAdmin", {
path: "apps/admin",
environment: {
NEXT_PUBLIC_API_URL: api.url,
NEXT_PUBLIC_SUPABASE_URL: supabaseUrl.value,
NEXT_PUBLIC_SUPABASE_ANON_KEY: supabaseAnonKey.value,
},
});
return {
api: api.url,
};
},
});
Deployment Flow (Post-Fixes)¶
Step 1: Deploy Infrastructure via SST¶
# Terminal 1: Deploy to dev stage
sst deploy --stage dev
# Output includes:
# - VPC ID
# - RDS endpoint (e.g., reggie-db.c123456.eu-west-2.rds.amazonaws.com)
# - ALB DNS (e.g., ReggieApiLoadBa-doktwzst-534252255.eu-west-2.elb.amazonaws.com)
# - CloudFront domains for web/admin
Step 2: Bootstrap RDS Schema (Fresh Instance Only)¶
# Run bootstrap script (one-time, on fresh RDS)
doppler run --project reggie-backend --config dev -- \
python -m scripts.bootstrap_schema
# Output:
# ✓ Database connection OK
# ✓ Created 40 tables
# ✓ Stamped Alembic at: 2026_02_24_consolidate_collection_into_watchlist
# ✅ Bootstrap complete!
Step 3: Verify Health¶
# Check API is responding
curl http://ReggieApiLoadBa-doktwzst-534252255.eu-west-2.elb.amazonaws.com/health
# Check API docs
curl http://ReggieApiLoadBa-doktwzst-534252255.eu-west-2.elb.amazonaws.com/api/v1/docs
# Check CloudFront frontend
curl https://d1kvp1h7e9gtgw.cloudfront.net/
Step 4: Verify Database Connectivity¶
# From ECS task environment
curl http://localhost:8000/api/v1/plates/AB12CDE
# Should return plate valuation (rules engine response)
Known Limitations & Gaps¶
At time of documentation (2026-03-05), the deployment is feature-complete for basic operation but has known gaps:
| Gap | Impact | Resolution |
|---|---|---|
| ML models not in image | Rules engine works, ML ensemble unavailable | Add /app/models/*.joblib to Docker image or S3 |
SKIP_MIGRATIONS=true |
Migrations never run in container | Can remove once bootstrap workflow is confirmed |
| Supabase JWT validation not configured | Auth endpoints return 404 | Configure SUPABASE_SERVICE_ROLE_KEY for JWT JWKS endpoint |
These gaps do not block basic functionality — the API responds to valuation requests and returns results from the rules engine.
Troubleshooting Quick Reference¶
Docker Build Context Error¶
Error: fatal error: Docker context 1.5 GB too large
Fix: Use Fix 1 — set image.context: "./backend" and dockerfile: "Dockerfile"
Database URL Connection Error¶
Error: SQLALCHEMY_DATABASE_URL not set or psycopg2.OperationalError: could not translate host name
Fix: Use Fix 2 — add environment: { DATABASE_URL: databaseUrl } to Service config
Alembic Migration Fails on Fresh RDS¶
Error: Foreign key references table auth.users which does not exist
Fix: Use Fix 3 — run bootstrap script before service deployment
VPC Component Versioning Error¶
Error: new version of Vpc detected. Unable to migrate from old Vpc component
Fix: Use Fix 4 — run sst remove --stage <stage> && sst deploy --stage <stage>
Files & Locations¶
Core SST Configuration¶
| File | Purpose | Location |
|---|---|---|
sst.config.ts |
Infrastructure definition | /Users/joeprice/Documents/Repos/Personal/reggie/sst.config.ts |
backend/Dockerfile |
Container image (multi-stage build) | /Users/joeprice/Documents/Repos/Personal/reggie/backend/Dockerfile |
backend/.dockerignore |
Docker context exclusions | /Users/joeprice/Documents/Repos/Personal/reggie/backend/.dockerignore |
Database & Configuration¶
| File | Purpose | Location |
|---|---|---|
backend/app/config.py |
Environment variable parsing | /Users/joeprice/Documents/Repos/Personal/reggie/backend/app/config.py |
backend/app/main.py |
FastAPI CORS middleware setup | /Users/joeprice/Documents/Repos/Personal/reggie/backend/app/main.py |
backend/scripts/bootstrap_schema.py |
Schema bootstrap script | /Users/joeprice/Documents/Repos/Personal/reggie/backend/scripts/bootstrap_schema.py (create new) |
backend/alembic/env.py |
Alembic configuration | /Users/joeprice/Documents/Repos/Personal/reggie/backend/alembic/env.py |
Documentation¶
| File | Purpose | Location |
|---|---|---|
plans/aws-migration-sst.md |
Complete migration plan (phases 1-4) | /Users/joeprice/Documents/Repos/Personal/reggie/plans/aws-migration-sst.md |
CLAUDE.md |
Developer guide | /Users/joeprice/Documents/Repos/Personal/reggie/CLAUDE.md |
Makefile |
Development & deployment commands | /Users/joeprice/Documents/Repos/Personal/reggie/Makefile |
Testing the Solution¶
Local Development (unchanged)¶
# Continue using local Supabase + FastAPI
make dev
# Runs the same as before:
# - Supabase local on port 54322
# - FastAPI on port 8000
# - Next.js web on port 3000
# - Next.js admin on port 3001
Staging Environment¶
# Deploy infrastructure only
sst deploy --stage staging
# Bootstrap schema (one-time)
doppler run --project reggie-backend --config staging -- \
python -m scripts.bootstrap_schema
# Run smoke tests
make smoke-test URL=http://staging-alb-dns.eu-west-2.elb.amazonaws.com
Production Environment¶
# Deploy infrastructure (requires approval)
sst deploy --stage production
# Bootstrap schema (one-time)
doppler run --project reggie-backend --config prd -- \
python -m scripts.bootstrap_schema
# Verify health
curl https://api.getreggie.co.uk/health
Summary: Four Critical Fixes¶
| Fix | Problem | Solution | File | Impact |
|---|---|---|---|---|
| 1: Docker context | 1.5 GB sent per build | image.context: "./backend" |
sst.config.ts L84-86 |
140x faster build |
| 2: Env variables | SST secrets not readable by FastAPI | environment: { DATABASE_URL: databaseUrl } |
sst.config.ts L56-60 |
FastAPI can read secrets |
| 3: Schema bootstrap | Alembic fails on fresh RDS (Supabase dependencies) | Run Base.metadata.create_all() before alembic stamp head |
backend/scripts/bootstrap_schema.py |
RDS schema initialized |
| 4: VPC versioning | Deploy fails with "new version of Vpc" | sst remove && sst deploy |
sst.config.ts L31 |
VPC created with latest component |
Next Steps¶
- For development: Verify
make devstill works unchanged with local Supabase - For staging: Test
sst deploy --stage stagingand bootstrap flow - For production: Document DNS cutover procedure and enable monitoring
- Future work: ML models in Docker, auth configuration hardening
References¶
- SST Docs: https://sst.dev/docs
- SST Service Component: https://sst.dev/docs/component/aws/service
- SST Postgres Component: https://sst.dev/docs/component/aws/postgres
- AWS RDS Free Tier: https://aws.amazon.com/rds/free/
- Alembic Documentation: https://alembic.sqlalchemy.org/
- SQLAlchemy create_all(): https://docs.sqlalchemy.org/en/20/core/metadata.html#sqlalchemy.MetaData.create_all