Patient Graph — Deployment
The Patient Graph API is a Hono service deployed on Vercel (primary) with Render and Fly.io as alternative options.
Architecture Overview
Internet → Vercel Edge (TLS) → Node.js Runtime → Patient Graph API → Health Supabase (PostgreSQL)
→ Upstash Redis (Cache)
→ Sentry (Monitoring)
Key characteristics:
- Platform: Vercel (same as my.loop.health, admin, docs)
- Runtime: Node.js 20 serverless functions
- Region: iad1 (US-East) - co-located with Supabase
- Auto-scaling: Serverless, automatic horizontal scaling
- Zero-downtime: Atomic deployments with instant rollback
- Database: Connection pooling via Supabase (pgbouncer mode)
- Caching: Upstash Redis for query result caching
- Monitoring: Sentry for errors, Vercel Analytics for infrastructure
Vercel Deployment (Primary ✅)
Configuration
vercel.json:
{
"$schema": "https://openapi.vercel.sh/vercel.json",
"buildCommand": "cd ../.. && pnpm --filter @loop/core --filter @loop/shared --filter @loop/hono --filter @loop/rimo build",
"installCommand": "cd ../.. && pnpm install",
"framework": null,
"regions": ["iad1"],
"rewrites": [
{ "source": "/(.*)", "destination": "/api" }
],
"headers": [
{
"source": "/api/:path*",
"headers": [
{ "key": "Access-Control-Allow-Credentials", "value": "true" },
{ "key": "Access-Control-Allow-Methods", "value": "GET,POST,PATCH,DELETE,OPTIONS" },
{ "key": "Access-Control-Allow-Headers", "value": "Content-Type, Authorization, X-Request-ID" }
]
}
],
"github": {
"silent": true
}
}
Key settings:
- Build command: Builds all dependencies in monorepo context
- Region: iad1 (US-East Virginia) - matches Supabase location
- Rewrites: All requests route to /api (Hono entrypoint)
- CORS: Configured for API access from consumer apps
- GitHub: Silent deployments (no comments on PRs)
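The CORS headers above can be mirrored in a small helper so integration tests can assert parity with vercel.json (helper names are illustrative, not part of the codebase):

```typescript
// Mirror of the CORS headers configured in vercel.json, useful for
// asserting parity in integration tests (names are illustrative).
export const corsHeaders: Record<string, string> = {
  'Access-Control-Allow-Credentials': 'true',
  'Access-Control-Allow-Methods': 'GET,POST,PATCH,DELETE,OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type, Authorization, X-Request-ID',
};

// Quick sanity check that a method is allowed before issuing a request.
export function isMethodAllowed(method: string): boolean {
  return corsHeaders['Access-Control-Allow-Methods']
    .split(',')
    .includes(method.toUpperCase());
}
```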
API Entry Point
apps/patient-graph/api/index.ts:
import { handle } from '@hono/node-server/vercel';
import { app } from '../src/index.js';
// Export Vercel serverless handler for the whole Hono app
export default handle(app);
This adapter wraps the Hono app for Vercel’s Node.js runtime.
Environment Variables
Set in Vercel dashboard or CLI:
# Via Vercel CLI
vercel env add DATABASE_URL production
# Paste: postgresql://user:pass@host:5432/db?pool=1&pgbouncer=true
vercel env add CLERK_ISSUER_URL production
# Paste: https://clerk.loop.health
vercel env add CLERK_SECRET_KEY production
# Paste: sk_live_...
vercel env add SENTRY_DSN production
# Paste: https://...@sentry.io/...
vercel env add UPSTASH_REDIS_REST_URL production
vercel env add UPSTASH_REDIS_REST_TOKEN production
Required variables: See Environment Variables.
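A small startup guard can catch a missing variable before the first request rather than mid-traffic (variable names taken from above; the helper itself is a sketch, not existing code):

```typescript
// Validate required environment variables at module load so a
// misconfigured deployment fails loudly instead of at request time.
const REQUIRED_ENV = [
  'DATABASE_URL',
  'CLERK_ISSUER_URL',
  'CLERK_SECRET_KEY',
  'SENTRY_DSN',
  'UPSTASH_REDIS_REST_URL',
  'UPSTASH_REDIS_REST_TOKEN',
] as const;

export function assertEnv(
  env: Record<string, string | undefined> = process.env
): void {
  const missing = REQUIRED_ENV.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}
```

Calling `assertEnv()` at the top of the entry point turns a bad deploy into an immediate, well-labeled crash in the function logs.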
Deployment
Automatic (via GitHub):
# Merge to main → auto-deploy to production
git push origin main
# Open PR → auto-deploy to preview
gh pr create --title "Add feature"
# Preview URL posted as comment
Manual deployment:
cd apps/patient-graph
# Deploy to production
vercel --prod
# Deploy to preview
vercel
# Check deployment status
vercel ls
Deployment process:
- Build: Dependencies + TypeScript compilation
- Upload: Functions + static assets to Vercel
- Atomic switch: New version goes live instantly
- Rollback: Previous version kept at versioned URL
Timeline: ~2-3 minutes total
Scaling
Auto-scaling (built-in):
- Vercel automatically scales to handle traffic
- No manual configuration needed
- Scales to zero during idle (no cost)
- Fast cold starts (~50-200ms)
Concurrency limits:
- Pro plan: 1000 concurrent executions
- Enterprise: Custom limits
Database connections:
// CRITICAL: Use pool=1 + pgbouncer=true
const DATABASE_URL =
"postgresql://user:pass@host:5432/db" +
"?pool=1" + // Single connection per serverless function
"&pgbouncer=true"; // Use Supabase connection pooler
Why pool=1?
- Each serverless invocation creates a new connection
- Without pooling, you hit connection limits at scale
- Supabase pgbouncer handles actual connection pooling
- This prevents “too many connections” errors
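A boot-time guard for these parameters can enforce the rule before any query runs (a sketch; the function name is illustrative, the env var is as documented above):

```typescript
// Fail fast at cold start if the connection string would bypass
// pgbouncer. Sketch only — assumes DATABASE_URL in the format above.
export function assertPooledConnection(url: string): void {
  const params = new URL(url).searchParams;
  if (params.get('pool') !== '1' || params.get('pgbouncer') !== 'true') {
    throw new Error(
      'DATABASE_URL must include pool=1&pgbouncer=true for serverless use'
    );
  }
}

// Typical usage at startup:
// assertPooledConnection(process.env.DATABASE_URL!);
```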
Monitoring
Vercel Dashboard:
- Function invocations (count, duration)
- Error rate
- Cold start frequency
- Bandwidth usage
Vercel Analytics:
Enable Web Analytics from the Vercel dashboard (Project → Analytics) rather than in vercel.json.
Sentry (errors):
// apps/patient-graph/src/lib/sentry.ts
Sentry.init({
dsn: process.env.SENTRY_DSN,
environment: 'production',
release: process.env.VERCEL_GIT_COMMIT_SHA,
tracesSampleRate: 0.1,
});
Health checks:
# Liveness
curl https://patient-graph.loop.health/health
# { "status": "ok" }
# Readiness
curl https://patient-graph.loop.health/health/ready
# { "status": "ready", "database": "connected", "cache": "connected" }
Troubleshooting
Build failures:
# View build logs
vercel logs <deployment-url>
# Common issues:
# - Missing dependency in package.json
# - TypeScript compilation error
# - Turborepo cache corruption
# Fix: redeploy without the build cache
vercel --force
Runtime errors:
# View function logs
vercel logs patient-graph-api --follow
# Check Sentry for detailed errors
open https://sentry.io/organizations/loop-health/projects/patient-graph-api/
Database connection errors:
// Verify the connection string contains both pooling parameters
console.log(
process.env.DATABASE_URL?.includes('pool=1'),
process.env.DATABASE_URL?.includes('pgbouncer=true'),
);
// Should be: true true
// Common fixes:
// 1. Add pool parameters to connection string
// 2. Increase Supabase connection pool size (Settings → Database)
// 3. Add connection timeout: &connect_timeout=10
Cold starts:
# Check cold start frequency
vercel logs patient-graph-api | grep "Cold Start"
# Reduce cold starts:
# 1. Keep functions warm with periodic requests
# 2. Use Vercel Edge Functions (faster cold starts)
# 3. Optimize bundle size (remove unused dependencies)
Rollback
# List deployments
vercel ls patient-graph-api
# Promote specific deployment to production
vercel promote <deployment-url> --yes
# Or via dashboard:
# Deployments → Click deployment → Promote to Production
Cost Optimization
Vercel pricing:
- Pro plan: $20/month + usage
- 100GB bandwidth included
- 1000 GB-hours compute included
- $0.40/GB bandwidth overage
- $0.18/GB-hour compute overage
Cost reduction strategies:
- Cache aggressively - Reduce function invocations
- Optimize bundle size - Faster execution = lower cost
- Use Edge Functions - Cheaper than Node.js runtime (where possible)
- Monitor usage - Set up billing alerts
Estimated monthly cost: ~$20-50 depending on traffic
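Under the pricing above, a rough monthly estimate can be computed like this (numbers taken from this page; real bills vary with plan changes and other line items):

```typescript
// Rough Vercel Pro cost model from the figures above: $20 base,
// 100 GB bandwidth and 1000 GB-hours compute included, then
// $0.40/GB and $0.18/GB-hour overage. Illustrative only.
export function estimateMonthlyCost(
  bandwidthGb: number,
  computeGbHours: number
): number {
  const base = 20;
  const bandwidthOverage = Math.max(0, bandwidthGb - 100) * 0.4;
  const computeOverage = Math.max(0, computeGbHours - 1000) * 0.18;
  return base + bandwidthOverage + computeOverage;
}
```

For example, a month inside the included limits costs the $20 base, while 150 GB bandwidth and 1100 GB-hours compute lands around $58.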
Alternative: Render.com
If you prefer Render for better pricing at scale:
Configuration
render.yaml:
services:
- type: web
name: patient-graph-api
runtime: node
region: ohio # or oregon
plan: starter # $7/month
branch: main
buildCommand: pnpm install && pnpm build --filter=@loop/patient-graph-api
startCommand: node apps/patient-graph/dist/index.js
envVars:
- key: DATABASE_URL
sync: false
- key: CLERK_ISSUER_URL
sync: false
- key: CLERK_SECRET_KEY
sync: false
- key: SENTRY_DSN
sync: false
- key: NODE_ENV
value: production
healthCheckPath: /health
autoDeploy: true
Pros/Cons vs Vercel
Render Pros ✅:
- Better pricing at scale ($7-25/month vs $20-50/month)
- No cold starts (always-on instances)
- Native Docker support
- Background workers included
- Persistent disk storage
Render Cons ❌:
- Slower deployments (~5min vs 2min)
- No automatic preview deployments for PRs
- Less integrated with GitHub
- Fewer regions than Vercel
When to use Render:
- High sustained traffic (>1M requests/month)
- Need persistent storage
- Want background workers
- Cost is a priority
Alternative: Fly.io
For global edge deployment with Docker:
fly.toml:
app = "patient-graph-api"
primary_region = "iad"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 1
[[http_service.checks]]
interval = "10s"
grace_period = "5s"
method = "GET"
path = "/health"
timeout = "2s"
Deployment:
flyctl deploy
Pros/Cons vs Vercel
Fly.io Pros ✅:
- Global edge deployment (14+ regions)
- Near-zero cold starts
- Full Docker/VM control
- Very cheap ($5-10/month)
Fly.io Cons ❌:
- Another platform to manage
- More complex setup
- Not worth it if already on Vercel
When to use Fly.io:
- Need multi-region deployment
- Require full VM control
- Docker-native workflows
- Want lowest possible cost
Recommendation
Stick with Vercel ✅ because:
- Already using it - Same platform as other apps
- Simplest DX - One deployment workflow
- Great for Hono - Works seamlessly with Edge Functions
- Preview deployments - Automatic on every PR
- Integrated monitoring - Built-in analytics + logs
Consider Render if:
- Traffic scales beyond Vercel Pro plan economics
- Need background workers or persistent storage
- Want to consolidate billing with other services
Consider Fly.io if:
- Need global edge deployment
- Require multi-region active-active
- Want full VM/Docker control
Health Checks
Liveness: /health
Purpose: Is the server process running?
Implementation:
app.get('/health', (c) => {
return c.json({
status: 'ok',
timestamp: new Date().toISOString(),
uptime: process.uptime(),
memory: process.memoryUsage(),
});
});
Response:
{
"status": "ok",
"timestamp": "2024-03-20T12:00:00.000Z",
"uptime": 3600,
"memory": {
"rss": 52428800,
"heapTotal": 18874368,
"heapUsed": 12345678,
"external": 1234567
}
}
Readiness: /health/ready
Purpose: Can the service handle requests?
Implementation:
app.get('/health/ready', async (c) => {
try {
// Test database connectivity
await db.execute(sql`SELECT 1`);
// Test Redis cache
await redis.ping();
return c.json({
status: 'ready',
database: 'connected',
cache: 'connected',
timestamp: new Date().toISOString(),
});
} catch (error) {
return c.json({
status: 'not_ready',
error: error instanceof Error ? error.message : 'unknown error',
timestamp: new Date().toISOString(),
}, 503);
}
});
Response (healthy):
{
"status": "ready",
"database": "connected",
"cache": "connected",
"timestamp": "2024-03-20T12:00:00.000Z"
}
Performance Tuning
Database Connections
Optimal connection string:
const DATABASE_URL =
"postgresql://user:pass@host:5432/db" +
"?pool=1" + // Single connection per serverless function
"&pgbouncer=true" + // Use Supabase pgbouncer
"&connection_limit=1" + // Enforce limit
"&pool_timeout=30" + // 30s timeout
"&connect_timeout=10"; // 10s connect timeout
Caching Strategy
// apps/patient-graph/src/lib/cache.ts
import { createCache } from '@loop/core';
const cache = createCache({
ttl: 300, // 5 minutes default
namespace: 'patient-graph',
serializer: 'json',
});
// Cache frequently accessed data
export async function getTreatment(id: string) {
return cache.get(`treatment:${id}`, async () => {
return db.query.treatments.findFirst({
where: eq(treatments.id, id)
});
});
}
// Invalidate on writes
export async function updateTreatment(id: string, data: UpdateTreatment) {
await db.update(treatments).set(data).where(eq(treatments.id, id));
await cache.delete(`treatment:${id}`);
}
Request Optimization
Batching with DataLoader:
import DataLoader from 'dataloader';
const treatmentLoader = new DataLoader(async (ids: readonly string[]) => {
const results = await db.query.treatments.findMany({
where: inArray(treatments.id, [...ids]),
});
return ids.map(id => results.find(r => r.id === id));
});
Always paginate:
const { data, total } = await treatmentRepo.list({
customerId,
limit: 50,
offset: 0,
});
Best Practices
DO ✅
- Use Vercel for consistency with other apps
- Set pool=1&pgbouncer=true on database connection
- Cache aggressively to reduce function invocations
- Monitor cold start frequency
- Use preview deployments for testing
- Rotate secrets regularly
- Tag releases in Git
- Monitor Sentry for errors
DON’T ❌
- Deploy without testing database connection
- Skip health check verification
- Ignore cold start metrics
- Store secrets in code or vercel.json
- Deploy without CI checks passing
- Over-optimize prematurely (Vercel scales automatically)
- Deploy on Fridays (unless hotfix)
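The "skip health check verification" point above lends itself to automation; a post-deploy smoke test might look like this (URL and payload shape taken from the Health Checks section; function names are illustrative):

```typescript
// Post-deploy smoke test: a pure predicate over the readiness payload,
// plus an optional live check (commented out; it hits production).
export interface ReadinessPayload {
  status: string;
  database?: string;
  cache?: string;
}

export function isReady(payload: ReadinessPayload): boolean {
  return (
    payload.status === 'ready' &&
    payload.database === 'connected' &&
    payload.cache === 'connected'
  );
}

// Live check, e.g. run from CI after `vercel --prod`:
// const res = await fetch('https://patient-graph.loop.health/health/ready');
// if (!res.ok || !isReady(await res.json())) process.exit(1);
```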