Local Development Setup

Everything you need to run the RestartiX platform locally in its current state: Layer 0 is complete and Layer 1 is in progress (Core API with auth, multi-tenancy, RBAC, audit, encryption, RLS test harness, and OpenAPI codegen; Clinic app; Patient Portal; Console).


Prerequisites

| Tool | Version | Install |
| --- | --- | --- |
| Go | 1.24+ | go.dev/dl |
| Node.js | 20+ | nodejs.org |
| pnpm | 10.x | npm install -g pnpm@latest |
| Docker | 20+ | docs.docker.com |
| Docker Compose | v2+ | Included with Docker Desktop |
| golang-migrate | Latest | go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest |

Optional but recommended:

| Tool | Purpose | Install |
| --- | --- | --- |
| air | Hot reload for Go | go install github.com/air-verse/air@latest |
| golangci-lint | Go linter | go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest |
| psql | PostgreSQL CLI for manual queries | brew install postgresql / apt install postgresql-client |

1. Install frontend dependencies

From the project root:

```bash
pnpm install
```

2. Start infrastructure services

The docker-compose.yml lives in services/api/. It starts PostgreSQL and Redis — the only infrastructure needed right now.

```bash
cd services/api
make docker-up
```

This starts:

| Service | Port | Purpose |
| --- | --- | --- |
| PostgreSQL 17 | 5432 | Business data with Row-Level Security. Docker init creates two roles: restartix (owner) and restartix_app (restricted, RLS enforced). Direct port — used by migrations + integration tests, NOT by the running Core API. |
| pgbouncer 1.25 | 6432 | Connection pooler in transaction mode. The Core API connects through this — see P44 in patterns.md for why, and aws-infrastructure.md § Connection pooling for the AWS deployment shape. |
| Redis 7 | 6379 | Session data, caching |
| RedisInsight | 5540 | Redis GUI (localhost:5540) |

Two database URLs, two ports

DATABASE_URL and DATABASE_APP_URL point at pgbouncer (:6432); DATABASE_DIRECT_URL points at Postgres directly (:5432). Migrations use DATABASE_DIRECT_URL because golang-migrate relies on session-scoped pg_advisory_lock which transaction-mode pgbouncer doesn't support. Same pattern applies on AWS — the deploy pipeline runs migrations against the RDS endpoint directly, not via the pgbouncer ECS service.
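As a sketch of the split described above, here is a hypothetical Go helper (dbURLFor is an invented name, not part of the codebase) mapping each consumer to the env var it should read:

```go
package main

import "fmt"

// dbURLFor returns the env var a given consumer should read.
// Illustrative only: session-scoped features such as pg_advisory_lock
// must bypass the transaction-mode pooler and hit Postgres directly.
func dbURLFor(consumer string) string {
	switch consumer {
	case "migrations", "integration-tests":
		return "DATABASE_DIRECT_URL" // :5432, session semantics preserved
	case "admin-pool":
		return "DATABASE_URL" // :6432 via pgbouncer, owner role
	default:
		return "DATABASE_APP_URL" // :6432 via pgbouncer, restricted role
	}
}

func main() {
	fmt.Println(dbURLFor("migrations")) // DATABASE_DIRECT_URL
	fmt.Println(dbURLFor("app-pool"))   // DATABASE_APP_URL
}
```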

Verify they're healthy:

```bash
docker compose ps
```

3. Environment variables

Core API

```bash
cd services/api
cp .env.example .env.local
```

Edit .env.local and fill in values. Here's what matters:

| Variable | Required? | Notes |
| --- | --- | --- |
| DATABASE_URL | Yes | Pre-filled. Owner role (restartix) — used by AdminPool (bypasses RLS). Powers auth middleware, superadmin requests, system queries. |
| DATABASE_APP_URL | Yes | Pre-filled. Restricted role (restartix_app) — used by AppPool (RLS enforced). Powers all org-scoped, patient, and public requests. Falls back to DATABASE_URL if omitted (not recommended). |
| REDIS_URL | Yes | Pre-filled, works with the default Docker setup |
| CLERK_SECRET_KEY | For auth | Get from dashboard.clerk.com. Without it, all authenticated endpoints return 401. Users are automatically provisioned in the database on first login (no webhooks needed). |
| ENCRYPTION_KEYS | Yes | 1:<hex> for a single key; generate the hex with openssl rand -hex 32. Multi-version form is 1:<hex>,2:<hex> during rotation. |
| ACTIVE_ENCRYPTION_VERSION | Yes | The version that Encrypt seals new data under (defaults to 1). |

Everything else in .env.example (DAILY_API_KEY, AWS_*, telemetry signing-secret env vars, etc.) is for future phases and can be left empty or as-is.

Frontend apps (Clinic, Portal & Console)

All three apps have similar env vars. Copy the examples:

```bash
cp apps/clinic/.env.example apps/clinic/.env.local
cp apps/portal/.env.example apps/portal/.env.local
cp apps/console/.env.example apps/console/.env.local
```

| Variable | Required? | Notes |
| --- | --- | --- |
| NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY | For auth | Get from the Clerk dashboard. Apps build and run without it, but auth won't work. |
| CLERK_SECRET_KEY | For auth | Same Clerk instance as the Core API. |
| CORE_API_URL | For API calls | Pre-filled as http://localhost:9000 |
| NEXT_PUBLIC_CLINIC_DOMAIN | For domain mapping | Clinic app: clinic.localhost:9100 |
| NEXT_PUBLIC_PORTAL_DOMAIN | For domain mapping | Portal app: portal.localhost:9200 |
| NEXT_PUBLIC_CONSOLE_DOMAIN | For domain mapping | Console app: console.localhost:9300 |

4. Run database migrations

```bash
cd services/api
make migrate-up
```

Verify tables exist:

```bash
psql "postgres://restartix:restartix@localhost:5432/restartix" -c "\dt"
```

You should see: audit_log, organizations, organization_domains, principals, humans, organization_memberships, platform_memberships (plus the schema_migrations tracking table; many more land as Layer 1+ migrations apply).


5. Run the services

Core API (port 9000)

```bash
cd services/api
make dev    # with hot reload (requires air)
# or
make run    # without hot reload
```

Health check:

```bash
curl http://localhost:9000/health
```

Frontend apps

From the project root:

```bash
pnpm dev
```

This starts all apps via Turborepo:

| App | URL | Purpose |
| --- | --- | --- |
| Clinic app | http://{slug}.clinic.localhost:9100 | Staff-facing (admin, specialists) |
| Patient Portal | http://{slug}.portal.localhost:9200 | Patient-facing (booking, consents) |
| Console | http://console.localhost:9300 | Superadmin (platform management) |

Subdomain mapping: Each organization is accessed via its slug-based subdomain. For example, if you have an org with slug demo, visit http://demo.clinic.localhost:9100. Most browsers resolve *.localhost to 127.0.0.1 automatically — no /etc/hosts changes needed.
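The subdomain mapping above can be sketched in Go. orgSlugFromHost is a hypothetical helper and the real proxy logic may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// orgSlugFromHost extracts the org slug from a platform subdomain such as
// "demo.clinic.localhost:9100". It returns ok=false for anything that is
// not a single-label subdomain of the app domain, which is the case where
// the proxy falls through to custom-domain resolution.
func orgSlugFromHost(host, appDomain string) (string, bool) {
	hostNoPort, _, _ := strings.Cut(host, ":")
	domNoPort, _, _ := strings.Cut(appDomain, ":")
	slug, ok := strings.CutSuffix(hostNoPort, "."+domNoPort)
	if !ok || slug == "" || strings.Contains(slug, ".") {
		return "", false
	}
	return slug, true
}

func main() {
	slug, ok := orgSlugFromHost("demo.clinic.localhost:9100", "clinic.localhost:9100")
	fmt.Println(slug, ok) // demo true
}
```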

Custom domain mapping: Organizations can also configure custom domains (e.g., clinic.myclinic.com). For local testing of custom domains, add entries to /etc/hosts:

```bash
# Add to /etc/hosts for custom domain testing
127.0.0.1  clinic.myclinic.localhost
127.0.0.1  portal.myclinic.localhost
```

Then insert a verified domain record in the database (look up the org's UUID first):

```sql
INSERT INTO organization_domains (organization_id, domain, domain_type, verification_token, status, verified_at)
SELECT id, 'clinic.myclinic.localhost:9100', 'clinic', 'local-test', 'verified', NOW()
FROM organizations WHERE slug = 'myclinic';
```

Visit http://clinic.myclinic.localhost:9100 — the proxy will detect it's not a platform subdomain and resolve via the ?domain= parameter instead.

Or run individually:

```bash
pnpm --filter @workspace/clinic dev     # just Clinic app
pnpm --filter @workspace/portal dev     # just Patient Portal
pnpm --filter @workspace/console dev    # just Console
```

6. What works right now

With everything running, these endpoints and flows are functional:

Core API endpoints:

| Method | Path | Auth | Description |
| --- | --- | --- | --- |
| GET | /health | No | Health check (postgres + redis status) |
| GET | /v1/public/system-info | No | Manufacturer + UDI / MDR labelling for the platform |
| GET | /v1/public/organizations/resolve | No | Resolve org by slug or custom domain |
| GET | /v1/me | Yes | Current user profile |
| PUT | /v1/me/switch-organization | Yes | Switch active organization (API path; primary switching is domain-based) |
| GET | /v1/organizations | Yes | List organizations the caller is a member of |
| POST | /v1/organizations | Yes (superadmin) | Create organization |
| GET | /v1/organizations/{id} | Yes | Get organization |
| PATCH | /v1/organizations/{id} | Yes (organizations.update) | Update organization |
| GET | /v1/organizations/{id}/members | Yes (organizations.manage_members) | List members |
| POST | /v1/organizations/{id}/members | Yes (organizations.manage_members) | Add or upsert a member by email |
| DELETE | /v1/organizations/{id}/members/{userId} | Yes (organizations.manage_members) | Remove a member |
| GET | /v1/organizations/{id}/roles | Yes (organizations.manage_members) | List roles defined for the org (system clones + custom) |
| GET | /v1/organizations/{id}/domains | Yes (organizations.manage_domains) | List custom domains |
| POST | /v1/organizations/{id}/domains | Yes (organizations.manage_domains) | Add custom domain |
| DELETE | /v1/organizations/{id}/domains/{domainId} | Yes (organizations.manage_domains) | Remove custom domain |
| POST | /v1/organizations/{id}/domains/{domainId}/verify | Yes (organizations.manage_domains) | Verify domain DNS |

Frontend apps:

  • Sign-in / sign-up pages (Clerk)
  • Protected dashboard layout
  • Organization switcher (Clinic app)
  • User menu with sign-out
  • Role-based UI visibility

7. Common Makefile commands

All commands run from services/api/:

| Command | What it does |
| --- | --- |
| make dev | Start Core API with hot reload (air) |
| make run | Start Core API without hot reload |
| make build | Compile binary to bin/api |
| make test | Run unit tests with race detector |
| make test-integration | Run testcontainers integration tests (RLS harness + S3 LocalStack); needs Docker running |
| make test-cover | Run tests with coverage report |
| make lint | Run golangci-lint |
| make check | Lint + vet + build |
| make openapi | Regenerate Go types from apps/docs/openapi.yaml (TypeScript types regenerate via pnpm openapi at the repo root) |
| make migrate-up | Apply all pending migrations |
| make migrate-down | Roll back the last migration |
| make migrate-reset | Drop + recreate the restartix database, re-run roles init, then migrate-up (dev wipe — pre-prod only) |
| make migrate-create name=add_foo | Create a new migration pair |
| make docker-up | Start PostgreSQL + Redis |
| make docker-down | Stop PostgreSQL + Redis |

Partition rollover (audit-partition-roll)

audit_log and audit_ai_provenance are range-partitioned monthly on created_at (P41). The migration seeds only the current month — a separate CLI provisions forward months and is the load-bearing piece in production:

bash
# Ensure current month + next 3 months exist (idempotent; safe to re-run).
set -a && source .env.local && set +a
go run ./cmd/audit-partition-roll -ahead=3

Verify the result with psql:

```bash
docker compose exec -T postgres psql -U restartix -d restartix \
  -c "SELECT inhrelid::regclass FROM pg_inherits WHERE inhparent = 'audit_log'::regclass ORDER BY 1;"
```

In staging/production this runs on a monthly scheduler (1A.15). Locally, run it manually whenever you want to test partition handoff or simulate the cron.


8. Database access

The Core API uses two PostgreSQL connection pools for defense-in-depth:

| Pool | Role | Connection String | Purpose |
| --- | --- | --- | --- |
| AdminPool | restartix (owner) | postgres://restartix:restartix@localhost:5432/restartix | Bypasses RLS. Auth middleware, superadmin requests, system queries. |
| AppPool | restartix_app (restricted) | postgres://restartix_app:restartix_app@localhost:5432/restartix | RLS enforced. Public endpoints, org-scoped staff/patient requests. |

```bash
# PostgreSQL (owner — full access, bypasses RLS)
psql "postgres://restartix:restartix@localhost:5432/restartix"

# PostgreSQL (restricted — RLS enforced, for testing RLS policies)
psql "postgres://restartix_app:restartix_app@localhost:5432/restartix"

# Redis
redis-cli
```

The restartix_app role is created by init-db/01-roles.sql (Docker) and by migration 000001_init (production). It has SELECT, INSERT, UPDATE, DELETE on all tables but does not own them — so PostgreSQL enforces RLS policies.
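The routing between the two pools can be sketched as a tiny Go function mirroring the table above; poolFor is a hypothetical name and the real routing lives in the Core API:

```go
package main

import "fmt"

// poolFor routes a request class to a connection pool. Illustrative only:
// anything not explicitly privileged defaults to the RLS-enforced pool,
// so a missed case fails closed rather than bypassing RLS.
func poolFor(requestClass string) string {
	switch requestClass {
	case "auth-middleware", "superadmin", "system":
		return "AdminPool" // restartix owner role, bypasses RLS
	default:
		return "AppPool" // restartix_app role, RLS enforced
	}
}

func main() {
	fmt.Println(poolFor("superadmin")) // AdminPool
	fmt.Println(poolFor("org-scoped")) // AppPool
}
```

Defaulting to AppPool is the important design choice here: the restricted role is the safe fallback because PostgreSQL still enforces RLS even if application-level checks are wrong.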


9. Resetting everything

Wipe all data and start fresh

```bash
cd services/api
make docker-down
docker compose down -v     # removes volumes (all data, including restartix_app role)
make docker-up             # re-creates volumes, runs init-db/01-roles.sql
make migrate-up
```

Note: docker compose down -v removes the PostgreSQL volume, which triggers init-db/01-roles.sql to re-run on next docker compose up. This re-creates the restartix_app role. Without -v, the init script does not re-run (Docker only runs init scripts on first volume creation).

Stop services (keep data)

```bash
cd services/api
docker compose stop
```

Start again

```bash
cd services/api
docker compose start
```

10. Troubleshooting

"connection refused" on database

Docker containers take a few seconds to start. Wait for health checks:

```bash
cd services/api && docker compose ps   # STATUS should show "healthy"
```

"role restartix does not exist"

Container hasn't initialized yet. Remove the volume and restart:

```bash
cd services/api
docker compose down -v && make docker-up
```

"migrate: no change"

All migrations are already applied. This is normal.

Clerk auth fails

  • Make sure you're using test keys (prefix sk_test_ and pk_test_)
  • Create a Clerk dev instance at dashboard.clerk.com if you haven't
  • The frontend apps build and run without Clerk keys, but auth flows won't work

Port already in use

```bash
lsof -i :9000   # find what's using the port
# Kill it or change PORT in .env
```