Local Development Setup
Everything you need to run the RestartiX platform locally in its current state (Layer 0 done + Layer 1 in progress: Core API with auth, multi-tenancy, RBAC, audit, encryption, RLS test harness, OpenAPI codegen, Clinic app, Patient Portal, Console).
Prerequisites
| Tool | Version | Install |
|---|---|---|
| Go | 1.24+ | go.dev/dl |
| Node.js | 20+ | nodejs.org |
| pnpm | 10.x | npm install -g pnpm@latest |
| Docker | 20+ | docs.docker.com |
| Docker Compose | v2+ | Included with Docker Desktop |
| golang-migrate | Latest | go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest |
Optional but recommended:
| Tool | Purpose | Install |
|---|---|---|
| air | Hot reload for Go | go install github.com/air-verse/air@latest |
| golangci-lint | Go linter | go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest |
| psql | PostgreSQL CLI for manual queries | brew install postgresql / apt install postgresql-client |
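A quick sanity check that the required tools are installed and on your PATH (version output will vary by machine):

```bash
go version                                  # expect go1.24 or newer
node --version && pnpm --version            # expect v20+ / 10.x
docker --version && docker compose version
migrate -version                            # golang-migrate CLI
```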
1. Install frontend dependencies
From the project root:
```bash
pnpm install
```

2. Start infrastructure services
The docker-compose.yml lives in services/api/. It starts PostgreSQL (fronted by pgbouncer) and Redis (plus RedisInsight) — the only infrastructure needed right now.
```bash
cd services/api
make docker-up
```

This starts:

| Service | Port | Purpose |
|---|---|---|
| PostgreSQL 17 | 5432 | Business data with Row-Level Security. Docker init creates two roles: restartix (owner) and restartix_app (restricted, RLS enforced). Direct port — used by migrations + integration tests, NOT by the running Core API. |
| pgbouncer 1.25 | 6432 | Connection pooler in transaction mode. The Core API connects through this — see P44 in patterns.md for why, and aws-infrastructure.md § Connection pooling for the AWS deployment shape. |
| Redis 7 | 6379 | Session data, caching |
| RedisInsight | 5540 | Redis GUI (localhost:5540) |
Three database URLs, two ports
DATABASE_URL and DATABASE_APP_URL point at pgbouncer (:6432); DATABASE_DIRECT_URL points at Postgres directly (:5432). Migrations use DATABASE_DIRECT_URL because golang-migrate relies on session-scoped pg_advisory_lock which transaction-mode pgbouncer doesn't support. Same pattern applies on AWS — the deploy pipeline runs migrations against the RDS endpoint directly, not via the pgbouncer ECS service.
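Concretely, the three values differ only in role and port. A minimal sketch using the local Docker credentials from the Database access section below (the pre-filled strings in .env.example are authoritative):

```bash
# Pooled via pgbouncer (transaction mode): what the running Core API uses
DATABASE_URL=postgres://restartix:restartix@localhost:6432/restartix
DATABASE_APP_URL=postgres://restartix_app:restartix_app@localhost:6432/restartix

# Direct to Postgres: migrations and integration tests only
DATABASE_DIRECT_URL=postgres://restartix:restartix@localhost:5432/restartix
```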
Verify they're healthy:
```bash
docker compose ps
```

3. Environment variables
Core API
```bash
cd services/api
cp .env.example .env.local
```

Edit .env.local and fill in values. Here's what matters:

| Variable | Required? | Notes |
|---|---|---|
| DATABASE_URL | Yes | Pre-filled. Owner role (restartix) — used by AdminPool (bypasses RLS). Powers auth middleware, superadmin requests, system queries. |
| DATABASE_APP_URL | Yes | Pre-filled. Restricted role (restartix_app) — used by AppPool (RLS enforced). Powers all org-scoped, patient, and public requests. Falls back to DATABASE_URL if omitted (not recommended). |
| REDIS_URL | Yes | Pre-filled, works with default Docker setup |
| CLERK_SECRET_KEY | For auth | Get from dashboard.clerk.com. Without it, all authenticated endpoints return 401. Users are automatically provisioned in the database on first login (no webhooks needed). |
| ENCRYPTION_KEYS | Yes | 1:<hex> for a single key; generate the hex with openssl rand -hex 32. Multi-version form is 1:<hex>,2:<hex> during rotation. |
| ACTIVE_ENCRYPTION_VERSION | Yes | The key version that Encrypt seals new data under (defaults to 1). |
Everything else in .env.example (DAILY_API_KEY, AWS_*, telemetry signing-secret env vars, etc.) is for future phases and can be left empty or as-is.
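If you need a fresh local key, both values can be generated in one go. This only prints lines to paste into .env.local, with the 1: prefix marking the key version:

```bash
# Print a single-version key set for .env.local
echo "ENCRYPTION_KEYS=1:$(openssl rand -hex 32)"
echo "ACTIVE_ENCRYPTION_VERSION=1"
```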
Frontend apps (Clinic, Portal & Console)
All three apps have similar env vars. Copy the examples:
```bash
cp apps/clinic/.env.example apps/clinic/.env.local
cp apps/portal/.env.example apps/portal/.env.local
cp apps/console/.env.example apps/console/.env.local
```

| Variable | Required? | Notes |
|---|---|---|
| NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY | For auth | Get from Clerk dashboard. Apps build and run without it, but auth won't work. |
| CLERK_SECRET_KEY | For auth | Same Clerk instance as the Core API. |
| CORE_API_URL | For API calls | Pre-filled as http://localhost:9000 |
| NEXT_PUBLIC_CLINIC_DOMAIN | For domain mapping | Clinic app: clinic.localhost:9100 |
| NEXT_PUBLIC_PORTAL_DOMAIN | For domain mapping | Portal app: portal.localhost:9200 |
| NEXT_PUBLIC_CONSOLE_DOMAIN | For domain mapping | Console app: console.localhost:9300 |
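Put together, a working apps/clinic/.env.local looks roughly like this (Clerk keys redacted; the Portal and Console apps swap in their own domain variable and port):

```bash
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_...
CLERK_SECRET_KEY=sk_test_...
CORE_API_URL=http://localhost:9000
NEXT_PUBLIC_CLINIC_DOMAIN=clinic.localhost:9100
```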
4. Run database migrations
```bash
cd services/api
make migrate-up
```

Verify tables exist:

```bash
psql "postgres://restartix:restartix@localhost:5432/restartix" -c "\dt"
```

You should see: audit_log, organizations, organization_domains, principals, humans, organization_memberships, platform_memberships (plus the schema_migrations tracking table; many more land as Layer 1+ migrations apply).
5. Run the services
Core API (port 9000)
```bash
cd services/api
make dev    # with hot reload (requires air)
# or
make run    # without hot reload
```

Health check:

```bash
curl http://localhost:9000/health
```

Frontend apps
From the project root:
```bash
pnpm dev
```

This starts all apps via Turborepo:

| App | URL | Purpose |
|---|---|---|
| Clinic app | http://{slug}.clinic.localhost:9100 | Staff-facing (admin, specialists) |
| Patient Portal | http://{slug}.portal.localhost:9200 | Patient-facing (booking, consents) |
| Console | http://console.localhost:9300 | Superadmin (platform management) |
Subdomain mapping: Each organization is accessed via its slug-based subdomain. For example, if you have an org with slug demo, visit http://demo.clinic.localhost:9100. Most browsers resolve *.localhost to 127.0.0.1 automatically — no /etc/hosts changes needed.
Custom domain mapping: Organizations can also configure custom domains (e.g., clinic.myclinic.com). For local testing of custom domains, add entries to /etc/hosts:
```bash
# Add to /etc/hosts for custom domain testing
127.0.0.1 clinic.myclinic.localhost
127.0.0.1 portal.myclinic.localhost
```

Then insert a verified domain record in the database (the SELECT resolves the org's UUID by slug):

```sql
INSERT INTO organization_domains (organization_id, domain, domain_type, verification_token, status, verified_at)
SELECT id, 'clinic.myclinic.localhost:9100', 'clinic', 'local-test', 'verified', NOW()
FROM organizations WHERE slug = 'myclinic';
```

Visit http://clinic.myclinic.localhost:9100 — the proxy will detect it's not a platform subdomain and resolve via the ?domain= parameter instead.
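To exercise the resolution path by hand, you can also hit the public resolve endpoint with the same value. The ?domain= parameter name follows the convention mentioned above, but treat it as an assumption and check apps/docs/openapi.yaml for the exact contract:

```bash
curl "http://localhost:9000/v1/public/organizations/resolve?domain=clinic.myclinic.localhost:9100"
```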
Or run individually:
```bash
pnpm --filter @workspace/clinic dev    # just Clinic app
pnpm --filter @workspace/portal dev    # just Patient Portal
pnpm --filter @workspace/console dev   # just Console
```

6. What works right now
With everything running, these endpoints and flows are functional:
Core API endpoints:
| Method | Path | Auth | Description |
|---|---|---|---|
| GET | /health | No | Health check (postgres + redis status) |
| GET | /v1/public/system-info | No | Manufacturer + UDI / MDR labelling for the platform |
| GET | /v1/public/organizations/resolve | No | Resolve org by slug or custom domain |
| GET | /v1/me | Yes | Current user profile |
| PUT | /v1/me/switch-organization | Yes | Switch active organization (API path; primary switching is domain-based) |
| GET | /v1/organizations | Yes | List organizations the caller is a member of |
| POST | /v1/organizations | Yes (superadmin) | Create organization |
| GET | /v1/organizations/{id} | Yes | Get organization |
| PATCH | /v1/organizations/{id} | Yes (organizations.update) | Update organization |
| GET | /v1/organizations/{id}/members | Yes (organizations.manage_members) | List members |
| POST | /v1/organizations/{id}/members | Yes (organizations.manage_members) | Add or upsert a member by email |
| DELETE | /v1/organizations/{id}/members/{userId} | Yes (organizations.manage_members) | Remove a member |
| GET | /v1/organizations/{id}/roles | Yes (organizations.manage_members) | List roles defined for the org (system clones + custom) |
| GET | /v1/organizations/{id}/domains | Yes (organizations.manage_domains) | List custom domains |
| POST | /v1/organizations/{id}/domains | Yes (organizations.manage_domains) | Add custom domain |
| DELETE | /v1/organizations/{id}/domains/{domainId} | Yes (organizations.manage_domains) | Remove custom domain |
| POST | /v1/organizations/{id}/domains/{domainId}/verify | Yes (organizations.manage_domains) | Verify domain DNS |
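A quick smoke test once the Core API is up. The public endpoints need no credentials; for the authenticated call, CLERK_SESSION_TOKEN is a placeholder for a session token copied from a signed-in browser session, and the Bearer scheme is an assumption to verify against the auth middleware:

```bash
# Public endpoints: no auth required
curl http://localhost:9000/health
curl http://localhost:9000/v1/public/system-info

# Authenticated endpoint ($CLERK_SESSION_TOKEN is a hypothetical placeholder)
curl -H "Authorization: Bearer $CLERK_SESSION_TOKEN" http://localhost:9000/v1/me
```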
Frontend apps:
- Sign-in / sign-up pages (Clerk)
- Protected dashboard layout
- Organization switcher (Clinic app)
- User menu with sign-out
- Role-based UI visibility
7. Common Makefile commands
All commands run from services/api/:
| Command | What it does |
|---|---|
| make dev | Start Core API with hot reload (air) |
| make run | Start Core API without hot reload |
| make build | Compile binary to bin/api |
| make test | Run unit tests with race detector |
| make test-integration | Run testcontainers integration tests (RLS harness + S3 LocalStack); needs Docker running |
| make test-cover | Run tests with coverage report |
| make lint | Run golangci-lint |
| make check | Lint + vet + build |
| make openapi | Regenerate Go types from apps/docs/openapi.yaml (TypeScript types regenerate via pnpm openapi at the repo root) |
| make migrate-up | Apply all pending migrations |
| make migrate-down | Rollback last migration |
| make migrate-reset | Drop + recreate the restartix database, re-run roles init, then migrate-up (dev wipe — pre-prod only) |
| make migrate-create name=add_foo | Create a new migration pair |
| make docker-up | Start PostgreSQL + Redis |
| make docker-down | Stop PostgreSQL + Redis |
Partition rollover (audit-partition-roll)
audit_log and audit_ai_provenance are range-partitioned monthly on created_at (P41). The migration seeds only the current month — a separate CLI provisions forward months and is the load-bearing piece in production:
```bash
# Ensure current month + next 3 months exist (idempotent; safe to re-run).
set -a && source .env.local && set +a
go run ./cmd/audit-partition-roll -ahead=3
```

Verify the result with psql:
```bash
docker compose exec -T postgres psql -U restartix -d restartix \
  -c "SELECT inhrelid::regclass FROM pg_inherits WHERE inhparent = 'audit_log'::regclass ORDER BY 1;"
```

In staging/production this runs on a monthly scheduler (1A.15). Locally, run it manually whenever you want to test partition handoff or simulate the cron.
8. Database access
The Core API uses two PostgreSQL connection pools for defense-in-depth:
| Pool | Role | Connection String | Purpose |
|---|---|---|---|
| AdminPool | restartix (owner) | postgres://restartix:restartix@localhost:5432/restartix | Bypasses RLS. Auth middleware, superadmin requests, system queries. |
| AppPool | restartix_app (restricted) | postgres://restartix_app:restartix_app@localhost:5432/restartix | RLS enforced. Public endpoints, org-scoped staff/patient requests. |
```bash
# PostgreSQL (owner — full access, bypasses RLS)
psql "postgres://restartix:restartix@localhost:5432/restartix"

# PostgreSQL (restricted — RLS enforced, for testing RLS policies)
psql "postgres://restartix_app:restartix_app@localhost:5432/restartix"

# Redis
redis-cli
```

The restartix_app role is created by init-db/01-roles.sql (Docker) and by migration 000001_init (production). It has SELECT, INSERT, UPDATE, DELETE on all tables but does not own them — so PostgreSQL enforces RLS policies.
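To see which tables actually have RLS switched on (useful when reviewing new migrations), a plain catalog query from either connection does the job:

```bash
# List public-schema tables (ordinary and partitioned) and whether RLS is enabled
psql "postgres://restartix:restartix@localhost:5432/restartix" -c "
  SELECT relname, relrowsecurity, relforcerowsecurity
  FROM pg_class
  WHERE relnamespace = 'public'::regnamespace AND relkind IN ('r', 'p')
  ORDER BY relname;"
```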
9. Resetting everything
Wipe all data and start fresh
```bash
cd services/api
make docker-down
docker compose down -v   # removes volumes (all data, including restartix_app role)
make docker-up           # re-creates volumes, runs init-db/01-roles.sql
make migrate-up
```
Note: `docker compose down -v` removes the PostgreSQL volume, which triggers `init-db/01-roles.sql` to re-run on the next `docker compose up`. This re-creates the `restartix_app` role. Without `-v`, the init script does not re-run (Docker only runs init scripts on first volume creation).
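To confirm the restricted role came back after the wipe:

```bash
psql "postgres://restartix:restartix@localhost:5432/restartix" \
  -c "SELECT rolname FROM pg_roles WHERE rolname LIKE 'restartix%';"
```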
Stop services (keep data)
```bash
cd services/api
docker compose stop
```

Start again
```bash
cd services/api
docker compose start
```

10. Troubleshooting
"connection refused" on database
Docker containers take a few seconds to start. Wait for health checks:
```bash
cd services/api && docker compose ps   # STATUS should show "healthy"
```

"role restartix does not exist"
Either the container is still initializing, or the init script never ran against this volume (Docker only runs init scripts when a volume is first created). Remove the volume and restart:
```bash
cd services/api
docker compose down -v && make docker-up
```

"migrate: no change"
All migrations are already applied. This is normal.
Clerk auth fails
- Make sure you're using test keys (prefix `sk_test_` and `pk_test_`)
- Create a Clerk dev instance at dashboard.clerk.com if you haven't
- The frontend apps build and run without Clerk keys, but auth flows won't work
Port already in use
```bash
lsof -i :9000   # find what's using the port
# Kill it or change PORT in .env.local
```