Project Layout
Current-state layout of the Core API and the surrounding workspace. Forward-looking pieces (Telemetry API, additional domains) are listed at the end under Planned with explicit "not yet built" labels.
Naming convention is fixed in CLAUDE.md: RestartiX = brand only; Core API =
services/api/; Clinic / Portal / Console = the three Next.js apps. Don't use "desk" or "Desk".
Workspace Structure
The repo is a pnpm workspace with Go and Next.js code coexisting at the top level.
restartix-platform/
├── apps/
│ ├── clinic/ # Staff Next.js app (clinic.localhost:9100)
│ ├── portal/ # Patient Next.js app (portal.localhost:9200)
│ ├── console/ # Superadmin Next.js app (console.localhost:9300)
│ └── docs/ # VitePress documentation site (this site)
│
├── packages/
│ ├── ui/ # Shared shadcn/ui components
│ ├── api-client/ # Shared API client + generated OpenAPI types
│ ├── eslint-config/
│ └── typescript-config/
│
├── services/
│ └── api/ # Core API (Go) — see below
│
├── package.json # pnpm workspace root, Turborepo tasks
├── pnpm-workspace.yaml
├── turbo.json
└── CLAUDE.md               # Project conventions (authoritative)

There is exactly one Go service today: the Core API. The Telemetry API is planned (see Planned below); it does not exist yet.
Core API (services/api/)
services/api/
├── cmd/
│ └── api/
│ └── main.go # Single entrypoint, listens on :9000
│
├── deploy/
│ ├── Dockerfile.api # Multi-stage build, distroless runtime
│ ├── README-s3.md # Bucket policy + CORS deploy notes
│ └── s3-bucket-policy.json # Deny-unencrypted-uploads, deny-non-TLS, etc.
│
├── init-db/
│ └── 01-roles.sql # Bootstraps the `restartix` owner + `restartix_app` restricted role for local dev
│
├── migrations/
│ └── core/ # golang-migrate files (000001_init, 000002_tenancy_rbac)
│
├── internal/
│ ├── core/ # Application packages (allowed to depend on each other)
│ │ ├── activity/ # P35 — last_used_at bumper, throttled
│ │ ├── audit/ # P10 — recorder, middleware, diff, redact
│ │ ├── auth/ # Token verification — Verifier interface + auth/clerk/ impl
│ │ ├── principal/ # Actor model — Subject, SubjectLoader, Resolver, permission codes, drift test
│ │ ├── config/ # envconfig + Version constant + drift test
│ │ ├── crypto/ # P12 — AES-256-GCM, Keyring (InMemory + KMS stub)
│ │ ├── database/ # pgxpool wrappers, TxFromContext, tx helpers
│ │ ├── domain/ # Per-domain code (model, repository, service, handler)
│ │ │ ├── organization/ # Orgs, members, domains
│ │ │ └── human/ # /me, switch-organization
│ │ ├── events/ # P28 — in-process event bus + schedule registry
│ │ ├── gdpr/ # Anonymize() leaf operation (Layer 12 wires the orchestrator)
│ │ ├── idempotency/ # Redis-backed Idempotency-Key replay cache
│ │ ├── middleware/ # HTTP middleware constructors (see below)
│ │ └── server/ # Server bootstrap, routes.go, openapi/ generated types
│ │
│ ├── integration/ # External-service clients (one subdir per provider)
│ │ └── s3/ # P27 — org-scoped uploads, surface registry, presign
│ │
│ ├── shared/ # Cross-cutting helpers (no domain logic)
│ │ ├── apiquery/ # P34 — pagination + sort parsing
│ │ ├── httputil/ # JSON helpers, AppError envelope, HandleError
│ │ ├── pseudonym/ # SHA-256 user-ID hashing (telemetry pre-pseudonymization)
│ │ ├── redact/ # Sensitive-key matcher (slog + audit JSONB)
│ │ ├── requestctx/ # Request-ID generation + `X-Request-ID` propagation
│ │ └── softdelete/ # P13 — deleted_at filter + Restore helper
│ │
│ └── test/ # Test-only packages
│ └── rlstest/ # Layer 1.2 — testcontainers RLS harness
│
├── Makefile # `make check`, `make test`, `make test-integration`, `make migrate-*`, `make openapi`
├── go.mod # github.com/restartix/restartix-platform/services/api
├── go.sum
└── .env.example

Middleware (internal/core/middleware/)
Order matters; the pipeline is wired in server/routes.go.
| File | Purpose |
|---|---|
| auth.go | Authenticate(verifier, resolver, loader): composes the auth chain — verify token → resolve / first-sight provision principal → load Subject. Provider-agnostic; per-actor-type. |
| organization.go | OrganizationContext: picks AdminPool (superadmin) or AppPool (everyone else) and sets the app.current_* session variables for RLS. |
| rbac.go | RequirePermission(code) and RequireSuperadmin() route gates. |
| activity.go | ActivityTracker(): throttled bump of organization_memberships.last_used_at via internal/core/activity. |
| logging.go | Structured request/response slog with X-Request-ID. |
| recovery.go | Panic → 500 with a generic envelope; never leaks the panic message to the client. |
Cross-cutting helpers used by the middleware:
- Audit middleware lives at `internal/core/audit/middleware.go` — it needs the `audit.Recorder`, so it sits with the recorder, not in `middleware/`.
- Request ID is generated in `internal/shared/requestctx` and surfaced as the `X-Request-ID` response header — set even when the request panics, so error reports stay correlated.
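The panic-safe correlation trick is to set the header *before* calling the next handler. A minimal sketch with illustrative names (the real helpers live in internal/shared/requestctx):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"net/http"
)

// NewRequestID returns a random 16-byte hex ID (hypothetical helper;
// the real generator lives in internal/shared/requestctx).
func NewRequestID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err) // crypto/rand failure is unrecoverable
	}
	return hex.EncodeToString(b)
}

// WithRequestID sets X-Request-ID before invoking the next handler,
// so the header is already on the response even if the handler panics
// and the recovery middleware ends up writing a generic 500.
func WithRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Request-ID", NewRequestID())
		next.ServeHTTP(w, r)
	})
}
```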
Routes (internal/core/server/routes.go)
The full route table is generated and verified by internal/core/server/openapi/spec_test.go. Adding a route requires updating three places in lockstep (the route, apps/docs/openapi.yaml, and the expectedRoutes table); the drift test fails if any is out of sync.
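The drift check's core comparison reduces to a set difference. A sketch with illustrative names (the real test lives in spec_test.go and also checks the OpenAPI spec):

```go
package main

import "sort"

// RouteDrift compares routes registered in code against the expected
// table and reports what's missing and what's unexpected. A drift
// test fails if either slice is non-empty.
func RouteDrift(registered, expected []string) (missing, extra []string) {
	have := make(map[string]bool, len(registered))
	for _, r := range registered {
		have[r] = true
	}
	want := make(map[string]bool, len(expected))
	for _, e := range expected {
		want[e] = true
		if !have[e] {
			missing = append(missing, e)
		}
	}
	for _, r := range registered {
		if !want[r] {
			extra = append(extra, r)
		}
	}
	sort.Strings(missing)
	sort.Strings(extra)
	return missing, extra
}
```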
Today's routes (Layer 0 + Layer 1):
GET /health
GET /v1/public/system-info
GET /v1/public/organizations/resolve
# Authenticated /v1 group: Authenticate → OrganizationContext → ActivityTracker
GET /v1/me
PUT /v1/me/switch-organization
GET /v1/organizations
POST /v1/organizations # superadmin
GET /v1/organizations/{id}
PATCH /v1/organizations/{id} # organizations.update
GET /v1/organizations/{id}/members # organizations.manage_members
POST /v1/organizations/{id}/members # organizations.manage_members
DELETE /v1/organizations/{id}/members/{userId} # organizations.manage_members
GET /v1/organizations/{id}/roles # organizations.manage_members
GET /v1/organizations/{id}/domains # organizations.manage_domains
POST /v1/organizations/{id}/domains # organizations.manage_domains
DELETE /v1/organizations/{id}/domains/{domainId} # organizations.manage_domains
POST   /v1/organizations/{id}/domains/{domainId}/verify   # organizations.manage_domains

Layer 2+ routes (patients, specialists, services, calendars, appointments, forms, documents, automations, webhooks, segments, treatment plans) are described in implementation-plan.md and data-model.md. They do not exist in code yet.
Core Patterns
The patterns themselves are documented in patterns.md. The summary below shows where each pattern lives in code today.
Constructor-based DI
No DI framework. cmd/api/main.go instantiates pools, recorders, services, and handlers, then passes them down. Each package exposes a New*() constructor.
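The wiring style can be sketched in miniature (all names hypothetical — the real main.go constructs pools, recorders, services, and handlers the same way):

```go
package main

// Repo would hold a database pool in the real code.
type Repo struct{}

func NewRepo() *Repo { return &Repo{} }

// Service depends on the repo; the dependency is explicit in the constructor.
type Service struct{ repo *Repo }

func NewService(r *Repo) *Service { return &Service{repo: r} }

// Handler depends on the service, never on the repo directly.
type Handler struct{ svc *Service }

func NewHandler(s *Service) *Handler { return &Handler{svc: s} }

// Wire is main.go-style composition: construct once, pass down.
func Wire() *Handler {
	repo := NewRepo()
	svc := NewService(repo)
	return NewHandler(svc)
}
```

Because every dependency is a constructor argument, tests can substitute fakes at any layer without a container or global registry.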
Repository Pattern
Each domain owns its SQL in repository.go. Repositories use database.TxFromContext(ctx) to pick up the RLS-scoped pgx.Tx started by OrganizationContext (one transaction per request). They never reach for the pool directly.
Thin Handlers
Handlers do parse → call service → render. No business logic. All errors flow through httputil.HandleError, which preserves *AppError envelopes and converts unknown errors to a generic 500.
Error Envelope
internal/shared/httputil/errors.go defines *AppError with Code, Message, StatusCode, Fields. Constructors: NewNotFoundError, NewBadRequestError, NewConflictError, NewForbiddenError, NewUnauthorizedError, NewServiceError, NewValidationError. The envelope shape is documented in reference/error-envelope.md.
Two-Pool Architecture (P1, P2)
// internal/core/database/postgres.go
adminPool := database.MustConnect(ctx, cfg.DatabaseURL, cfg.DBPoolMin, cfg.DBPoolMax) // restartix → bypasses RLS
appPool := database.MustConnect(ctx, cfg.DatabaseAppURL, cfg.DBPoolMin, cfg.DBPoolMax) // restartix_app → RLS enforced

OrganizationContext picks the pool per request: superadmins get the admin pool, everyone else gets the app pool. Either way, the middleware acquires a connection, opens an explicit pgx.Tx, and binds the RLS GUCs (app.current_principal_id, app.current_actor_type, app.current_org_id, app.current_role) to the transaction via the SECURITY DEFINER wrappers (set_app_*_context in 000002 / 000006). Repositories pull the tx out via database.TxFromContext(ctx). Postgres wipes the GUCs at COMMIT/ROLLBACK — see the wrapper-block comment in 000002 for the lifetime model.
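Stripped of the connection machinery, the per-request decision is a small selector. A sketch (here `Pool` stands in for *pgxpool.Pool, and the boolean stands in for the Subject's actor type):

```go
package main

// Pool stands in for *pgxpool.Pool in this sketch.
type Pool struct{ Role string }

// Pools holds the two connections described above.
type Pools struct {
	Admin *Pool // restartix — bypasses RLS
	App   *Pool // restartix_app — RLS enforced
}

// For implements the per-request decision: superadmins get the admin
// pool, everyone else gets the RLS-enforced app pool.
func (p Pools) For(isSuperadmin bool) *Pool {
	if isSuperadmin {
		return p.Admin
	}
	return p.App
}
```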
Configuration
internal/core/config/config.go is the single source for env vars. Selected entries:
Port int `envconfig:"PORT" default:"9000"`
DatabaseURL string `envconfig:"DATABASE_URL" required:"true"`
DatabaseAppURL string `envconfig:"DATABASE_APP_URL"` // falls back to DatabaseURL
EncryptionKeys string `envconfig:"ENCRYPTION_KEYS"` // "1:hex,2:hex"
ActiveEncryptionVersion int `envconfig:"ACTIVE_ENCRYPTION_VERSION" default:"1"`
UseKMSEncryption bool `envconfig:"USE_KMS_ENCRYPTION" default:"false"` // production must be true
ClerkSecretKey string `envconfig:"CLERK_SECRET_KEY" required:"true"`
RedisURL                string `envconfig:"REDIS_URL" required:"true"`

Every change to Config is mirror-tested by version_drift_test.go so undocumented env vars don't sneak in.
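The DATABASE_APP_URL fallback noted above can be sketched independently of envconfig (illustrative helper, not the real config loader — which does this through struct tags and post-load defaulting):

```go
package main

// resolveAppURL mirrors the documented fallback: DATABASE_APP_URL is
// optional and falls back to DATABASE_URL when unset. Taking the
// lookup function as a parameter keeps the sketch testable without
// touching the process environment.
func resolveAppURL(getenv func(string) string) string {
	if v := getenv("DATABASE_APP_URL"); v != "" {
		return v
	}
	return getenv("DATABASE_URL")
}
```

In a real process this would be called as `resolveAppURL(os.Getenv)`.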
Verification Commands
Always use the Make / pnpm targets — never raw go vet, tsc, or eslint:
# Go (run from services/api/)
make check # lint + vet + build
make test # unit tests with -race
make test-integration # testcontainers RLS harness + S3 LocalStack
make openapi # regenerate Go types from apps/docs/openapi.yaml
# pnpm (run from repo root)
pnpm check # lint + typecheck + build across the workspace
pnpm openapi          # regenerate TS types into packages/api-client

See reference/local-development.md for the full developer setup.
Planned (Not Yet Built)
The pieces below appear in feature specs but do not exist in the codebase today. They live here as a forward-looking map; the implementation plan tracks who owns what.
Telemetry API (Layer 2 feature)
A second Go service that ingests patient exercise-engagement events and pose-tracking landmark batches from the Patient Portal, computes session aggregates server-side, and persists them via events.Bus into Core API's existing Postgres. Replay landmarks live as gzipped binary blobs in S3. No separate ClickHouse, no separate compliance Postgres — see decisions.md → Why telemetry is PG + S3, not ClickHouse.
When it lands it will sit beside the Core API:
services/
├── api/ # exists today
└── telemetry/ # Layer 2 — cmd/, internal/{handlers,buffer,aggregator,codec}/, deploy/

Telemetry has no Postgres migrations of its own — the aggregate tables (pose_session_metrics, pose_rep_metrics, media_session_metrics, media_buffering_events) ship as Core API migrations and are written by a Core API events.Bus subscriber. Telemetry talks back to Core API as a Cat F service-account principal. See /telemetry/index.md for the full design.
Until Layer 2 telemetry work begins, audit lives entirely in the Core API's local Postgres audit_log table — see internal/core/audit/. Telemetry does NOT consume audit_log (the earlier "audit forwarding" design is rejected).
Additional Domains
The Core API today contains only the organization and human domains. Layer 2+ adds (in the order driven by dependency-map.md): patient_person, patient, specialist, specialty, service, service_plan, product, custom_field, form_template, form, calendar, appointment, pdf_template, appointment_document, automation, webhook, segment, exercise, treatment_plan. Each will follow the same model.go / repository.go / service.go / handler.go / errors.go shape.
Additional Integrations
internal/integration/ will grow as Layer 2+ ships:
- dailyco/ (Layer 6.5 video calls)
- clerk/ (webhook receiver, when JIT provisioning is augmented with a webhook fallback — see Layer 1.13)
- transport providers (Layer 8 automations: AWS SES / SendGrid / Twilio — provider undecided)
Additional Middleware
- ratelimit.go (Layer 1.16 — Redis-backed, applied to auth + public endpoints first)
- securityheaders.go (1E.3 / production hardening — HSTS, CSP, X-Frame-Options for prod)
These are intentionally absent from internal/core/middleware/ today.