# Local Development Setup

Everything you need to run the RestartiX platform locally in its current state (Phase 0–1.6: Core API with auth & multi-tenancy, Clinic app, Patient Portal, Console).


## Prerequisites

| Tool | Version | Install |
| --- | --- | --- |
| Go | 1.24+ | go.dev/dl |
| Node.js | 20+ | nodejs.org |
| pnpm | 10.x | `npm install -g pnpm@latest` |
| Docker | 20+ | docs.docker.com |
| Docker Compose | v2+ | Included with Docker Desktop |
| golang-migrate | Latest | `go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest` |

Optional but recommended:

| Tool | Purpose | Install |
| --- | --- | --- |
| air | Hot reload for Go | `go install github.com/air-verse/air@latest` |
| golangci-lint | Go linter | `go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest` |
| psql | PostgreSQL CLI for manual queries | `brew install postgresql` / `apt install postgresql-client` |
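Before continuing, it can help to confirm the required CLIs are on your `PATH`. A minimal sketch (`migrate` is the binary name installed by golang-migrate; adjust the list to taste):

```shell
# Check that each prerequisite CLI from the tables above is on PATH.
missing=""
checked=0
for tool in go node pnpm docker migrate; do
  checked=$((checked+1))
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Install before continuing:$missing"
else
  echo "All prerequisites found"
fi
```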

## 1. Install frontend dependencies

From the project root:

```bash
pnpm install
```

## 2. Start infrastructure services

The `docker-compose.yml` lives in `services/api/`. It starts PostgreSQL and Redis — the only infrastructure needed right now.

```bash
cd services/api
make docker-up
```

This starts:

| Service | Port | Purpose |
| --- | --- | --- |
| PostgreSQL 17 | 5432 | Business data with Row-Level Security. Docker init creates two roles: `restartix` (owner) and `restartix_app` (restricted, RLS enforced). |
| Redis 7 | 6379 | Session data, caching |
| RedisInsight | 5540 | Redis GUI (localhost:5540) |

Verify they're healthy:

```bash
docker compose ps
```

## 3. Environment variables

### Core API

```bash
cd services/api
cp .env.example .env
```

Edit .env and fill in values. Here's what matters:

| Variable | Required? | Notes |
| --- | --- | --- |
| `DATABASE_URL` | Yes | Pre-filled. Owner role (`restartix`) — used by AdminPool (bypasses RLS). Powers auth middleware, superadmin requests, system queries. |
| `DATABASE_APP_URL` | Yes | Pre-filled. Restricted role (`restartix_app`) — used by AppPool (RLS enforced). Powers all org-scoped, patient, and public requests. Falls back to `DATABASE_URL` if omitted (not recommended). |
| `REDIS_URL` | Yes | Pre-filled, works with the default Docker setup |
| `CLERK_SECRET_KEY` | For auth | Get from dashboard.clerk.com. Without it, all authenticated endpoints return 401. Users are automatically provisioned in the database on first login (no webhooks needed). |
| `ENCRYPTION_KEY` | Yes | Generate with `openssl rand -hex 32` |

Everything else in `.env.example` (`DAILY_API_KEY`, `AWS_*`, `TELEMETRY_INTERNAL_URL`, etc.) is for future phases and can be left empty or as-is.
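The `ENCRYPTION_KEY` step can be scripted. A sketch, assuming `openssl` is available (as the table suggests) and that the key is the 64-character hex string that `openssl rand -hex 32` produces:

```shell
# Generate a 32-byte (64 hex characters) encryption key.
key=$(openssl rand -hex 32)
echo "ENCRYPTION_KEY=$key"   # paste this line into services/api/.env
```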

### Frontend apps (Clinic, Portal & Console)

All three apps have similar env vars. Copy the examples:

```bash
cp apps/clinic/.env.example apps/clinic/.env.local
cp apps/portal/.env.example apps/portal/.env.local
cp apps/console/.env.example apps/console/.env.local
```

| Variable | Required? | Notes |
| --- | --- | --- |
| `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` | For auth | Get from the Clerk dashboard. The apps build and run without it, but auth won't work. |
| `CLERK_SECRET_KEY` | For auth | Same Clerk instance as the Core API. |
| `CORE_API_URL` | For API calls | Pre-filled as `http://localhost:9000` |
| `NEXT_PUBLIC_CLINIC_DOMAIN` | For domain mapping | Clinic app: `clinic.localhost:9100` |
| `NEXT_PUBLIC_PORTAL_DOMAIN` | For domain mapping | Portal app: `portal.localhost:9200` |
| `NEXT_PUBLIC_CONSOLE_DOMAIN` | For domain mapping | Console app: `console.localhost:9300` |
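To see how a domain variable combines with an organization slug, here is a sketch that composes the dev URL for a hypothetical org with slug `demo` (the slug and variable value are illustrative):

```shell
# Compose a Clinic-app URL for an org from its slug and the domain env var.
NEXT_PUBLIC_CLINIC_DOMAIN="clinic.localhost:9100"
slug="demo"                                        # hypothetical org slug
clinic_url="http://$slug.$NEXT_PUBLIC_CLINIC_DOMAIN"
echo "$clinic_url"
```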

## 4. Run database migrations

```bash
cd services/api
make migrate-up
```

Verify tables exist:

```bash
psql "postgres://restartix:restartix@localhost:5432/restartix" -c "\dt"
```

You should see: `audit_log`, `organizations`, `organization_domains`, `users`, `user_organizations` (plus the `schema_migrations` tracking table).


## 5. Run the services

### Core API (port 9000)

```bash
cd services/api
make dev    # with hot reload (requires air)
# or
make run    # without hot reload
```

Health check:

```bash
curl http://localhost:9000/health
```

### Frontend apps

From the project root:

```bash
pnpm dev
```

This starts all apps via Turborepo:

| App | URL | Purpose |
| --- | --- | --- |
| Clinic app | `http://{slug}.clinic.localhost:9100` | Staff-facing (admin, specialists) |
| Patient Portal | `http://{slug}.portal.localhost:9200` | Patient-facing (booking, consents) |
| Console | `http://console.localhost:9300` | Superadmin (platform management) |

**Subdomain mapping:** Each organization is accessed via its slug-based subdomain. For example, if you have an org with slug `demo`, visit `http://demo.clinic.localhost:9100`. Most browsers resolve `*.localhost` to `127.0.0.1` automatically — no `/etc/hosts` changes needed.
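As an illustration only (the actual proxy code may differ), extracting the slug from a platform subdomain amounts to taking the left-most label of the host:

```shell
# Extract the org slug from a platform-subdomain host header.
host="demo.clinic.localhost:9100"   # hypothetical example host
slug="${host%%.*}"                  # keep everything before the first dot
echo "$slug"
```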

**Custom domain mapping:** Organizations can also configure custom domains (e.g., `clinic.myclinic.com`). For local testing of custom domains, add entries to `/etc/hosts`:

```bash
# Add to /etc/hosts for custom domain testing
127.0.0.1  clinic.myclinic.localhost
127.0.0.1  portal.myclinic.localhost
```

Then insert a verified domain record in the database:

```sql
INSERT INTO organization_domains (organization_id, domain, domain_type, verification_token, status, verified_at)
VALUES (1, 'clinic.myclinic.localhost:9100', 'clinic', 'local-test', 'verified', NOW());
```

Visit `http://clinic.myclinic.localhost:9100` — the proxy will detect it's not a platform subdomain and resolve via the `?domain=` parameter instead.
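For reference, a sketch of what that fallback resolution request could look like from the client side, built against the public resolve endpoint listed in section 6 (the exact query-string contract is an assumption here):

```shell
# Build a resolver URL for a custom domain (not a platform subdomain).
api="http://localhost:9000"
domain="clinic.myclinic.localhost:9100"
resolve_url="$api/v1/public/organizations/resolve?domain=$domain"
echo "$resolve_url"
```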

Or run individually:

```bash
pnpm --filter @workspace/clinic dev     # just Clinic app
pnpm --filter @workspace/portal dev     # just Patient Portal
pnpm --filter @workspace/console dev    # just Console
```

## 6. What works right now

With everything running, these endpoints and flows are functional:

Core API endpoints:

| Method | Path | Auth | Description |
| --- | --- | --- | --- |
| GET | `/health` | No | Health check (postgres + redis status) |
| GET | `/v1/public/organizations/resolve` | No | Resolve org by slug or custom domain |
| GET | `/v1/me` | Yes | Current user profile |
| PUT | `/v1/me/switch-organization` | Yes | Switch active organization (legacy) |
| GET | `/v1/organizations` | Yes | List organizations |
| POST | `/v1/organizations` | Yes (superadmin) | Create organization |
| GET | `/v1/organizations/{id}` | Yes | Get organization |
| PATCH | `/v1/organizations/{id}` | Yes (admin) | Update organization |
| GET | `/v1/organizations/{id}/domains` | Yes | List custom domains |
| POST | `/v1/organizations/{id}/domains` | Yes (admin) | Add custom domain |
| DELETE | `/v1/organizations/{id}/domains/{domainId}` | Yes (admin) | Remove custom domain |
| POST | `/v1/organizations/{id}/domains/{domainId}/verify` | Yes (admin) | Verify domain DNS |

Frontend apps:

- Sign-in / sign-up pages (Clerk)
- Protected dashboard layout
- Organization switcher (Clinic app)
- User menu with sign-out
- Role-based UI visibility

## 7. Common Makefile commands

All commands run from `services/api/`:

| Command | What it does |
| --- | --- |
| `make dev` | Start Core API with hot reload (air) |
| `make run` | Start Core API without hot reload |
| `make build` | Compile binary to `bin/api` |
| `make test` | Run tests with race detector |
| `make test-cover` | Run tests with coverage report |
| `make lint` | Run golangci-lint |
| `make check` | Lint + vet + build |
| `make migrate-up` | Apply all pending migrations |
| `make migrate-down` | Roll back the last migration |
| `make migrate-create name=add_foo` | Create a new migration pair |
| `make docker-up` | Start PostgreSQL + Redis |
| `make docker-down` | Stop PostgreSQL + Redis |

## 8. Database access

The Core API uses two PostgreSQL connection pools for defense-in-depth:

| Pool | Role | Connection String | Purpose |
| --- | --- | --- | --- |
| AdminPool | `restartix` (owner) | `postgres://restartix:restartix@localhost:5432/restartix` | Bypasses RLS. Auth middleware, superadmin requests, system queries. |
| AppPool | `restartix_app` (restricted) | `postgres://restartix_app:restartix_app@localhost:5432/restartix` | RLS enforced. Public endpoints, org-scoped staff/patient requests. |

```bash
# PostgreSQL (owner — full access, bypasses RLS)
psql "postgres://restartix:restartix@localhost:5432/restartix"

# PostgreSQL (restricted — RLS enforced, for testing RLS policies)
psql "postgres://restartix_app:restartix_app@localhost:5432/restartix"

# Redis
redis-cli
```

The `restartix_app` role is created by `init-db/01-roles.sql` (Docker) and by migration `000001_init` (production). It has SELECT, INSERT, UPDATE, DELETE on all tables but does not own them — so PostgreSQL enforces RLS policies.


## 9. Resetting everything

### Wipe all data and start fresh

```bash
cd services/api
make docker-down
docker compose down -v     # removes volumes (all data, including restartix_app role)
make docker-up             # re-creates volumes, runs init-db/01-roles.sql
make migrate-up
```

**Note:** `docker compose down -v` removes the PostgreSQL volume, which triggers `init-db/01-roles.sql` to re-run on the next `docker compose up`. This re-creates the `restartix_app` role. Without `-v`, the init script does not re-run (Docker only runs init scripts on first volume creation).

### Stop services (keep data)

```bash
cd services/api
docker compose stop
```

### Start again

```bash
cd services/api
docker compose start
```

## 10. Troubleshooting

### "connection refused" on database

Docker containers take a few seconds to start. Wait for health checks:

```bash
cd services/api && docker compose ps   # STATUS should show "healthy"
```

### "role restartix does not exist"

Container hasn't initialized yet. Remove the volume and restart:

```bash
cd services/api
docker compose down -v && make docker-up
```

### "migrate: no change"

All migrations are already applied. This is normal.

### Clerk auth fails

- Make sure you're using test keys (prefixes `sk_test_` and `pk_test_`)
- Create a Clerk dev instance at dashboard.clerk.com if you haven't
- The frontend apps build and run without Clerk keys, but auth flows won't work
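A small sketch to sanity-check the key prefixes before debugging further (the values shown are placeholders, not real keys):

```shell
# Verify that Clerk keys look like dev/test keys (sk_test_ / pk_test_ prefixes).
CLERK_SECRET_KEY="sk_test_placeholder"                  # hypothetical value
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="pk_test_placeholder" # hypothetical value
case "$CLERK_SECRET_KEY" in sk_test_*) sk_ok=1 ;; *) sk_ok=0 ;; esac
case "$NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY" in pk_test_*) pk_ok=1 ;; *) pk_ok=0 ;; esac
echo "secret key ok: $sk_ok, publishable key ok: $pk_ok"
```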

### Port already in use

```bash
lsof -i :9000   # find what's using the port
# kill it or change PORT in .env
```