
Key Decisions

Every major technology choice has a reason. This page documents the thinking behind each one — useful for developers joining the project and for stakeholders evaluating the platform.


Why Go?

Go (also called Golang) is the programming language the platform is built in. It's used by companies like Uber, Cloudflare, Docker, and Google for backend systems that need to be fast and reliable.

How the old system (Strapi / Node.js) compares with the new system (Go), concern by concern:

  • Performance: the old system was single-threaded and slowed under load; Go has native concurrency and handles many requests simultaneously.
  • Memory: ~500MB per server instance vs. ~20MB.
  • Security: thousands of npm packages (supply chain risk) vs. minimal dependencies and a smaller attack surface.
  • Type safety: runtime errors discovered in production vs. errors caught at compile time, before deployment.
  • Deployment: a Node runtime plus node_modules (~500MB) vs. a single static binary (~20MB).
  • HIPAA/GDPR: hard to audit (hidden framework behaviors) vs. fully auditable, with no hidden code paths.
  • Talent: a large Node.js pool vs. a large Go pool used by major infrastructure companies.

Bottom line: Go is faster, cheaper to run, easier to audit, and more reliable for a healthcare platform at this scale.


Why PostgreSQL?

PostgreSQL is the primary database. It was chosen for one critical capability: Row-Level Security (RLS).

RLS is a PostgreSQL feature that enforces data isolation at the database level — not the application level. This means even if there were a bug in application code, the database itself would refuse to return one clinic's data to another clinic's user.

For a multi-tenant healthcare platform, this is not a nice-to-have — it's a requirement. RLS makes the compliance story much stronger because data isolation can be verified at the database level by auditors, not just taken on faith from application code.
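A sketch of how a request handler can pin the tenant before running any queries. The `Execer` interface, the `app.clinic_id` setting name, and the fake transaction are illustrative assumptions, not the platform's actual code; the PostgreSQL mechanism is real: `set_config(..., true)` scopes a value to the current transaction, and each RLS policy filters rows by comparing them against `current_setting('app.clinic_id')`.

```go
package main

import "fmt"

// Execer abstracts the subset of *sql.Tx used here, so the pattern
// can be shown without a live database connection.
type Execer interface {
	Exec(query string, args ...any) error
}

// setTenant pins the clinic ID for the current transaction. RLS
// policies then filter every row by current_setting('app.clinic_id'),
// so queries cannot return another clinic's data even if the
// application code forgets a WHERE clause.
func setTenant(tx Execer, clinicID string) error {
	// set_config with is_local=true resets automatically at COMMIT/ROLLBACK.
	return tx.Exec("SELECT set_config('app.clinic_id', $1, true)", clinicID)
}

// fakeTx records queries so the pattern is demonstrable without Postgres.
type fakeTx struct{ queries []string }

func (f *fakeTx) Exec(q string, args ...any) error {
	f.queries = append(f.queries, q)
	return nil
}

func main() {
	tx := &fakeTx{}
	_ = setTenant(tx, "clinic_123")
	fmt.Println(tx.queries[0])
}
```

The key property: the clinic ID travels with the transaction, not with each query, so every statement inside that transaction is automatically scoped to one tenant.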

The database runs on AWS RDS — a dedicated PostgreSQL instance in a private network (VPC), not exposed to the internet. The platform originally used Neon (a serverless PostgreSQL provider) during early development but migrated to RDS for production because: RDS includes a HIPAA BAA at no cost (Neon requires a $500+/month Enterprise plan), provides dedicated resources instead of shared multi-tenant compute, and avoids a provider migration later since the scaling plan already required RDS at Phase 2. See AWS Infrastructure for the full setup.


Why Clerk for authentication?

Clerk is a third-party authentication service. Authentication (login, MFA, password reset, session management) is a commodity problem — but getting it wrong has severe consequences for a healthcare platform.

Clerk was chosen because:

  • SOC 2 Type II certified
  • HIPAA Business Associate Agreement (BAA) available
  • Handles MFA, social login, magic links, passkeys out of the box
  • Reduces the attack surface — auth code is the most security-sensitive code in any system

The split: Clerk handles authentication (is this person who they say they are?). The platform's own database handles authorization (what are they allowed to do in which clinic?). These are separate concerns handled by the right tool for each.
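The split can be sketched as a pure function: Clerk has already answered "who is this?", and the platform's own data answers "what may they do, and where?". The table of memberships, the user and clinic IDs, and the role names below are all hypothetical stand-ins for the platform's database:

```go
package main

import (
	"errors"
	"fmt"
)

// membership keys a (user, clinic) pair; in the real system this
// lookup would hit the platform's own database tables.
type membership struct{ userID, clinicID string }

var roles = map[membership]string{
	{"user_1", "clinic_a"}: "clinician",
}

// authorize runs AFTER Clerk has authenticated the user. It answers
// the separate question of authorization: the same person may be a
// clinician in one clinic and have no access at all in another.
func authorize(userID, clinicID, requiredRole string) error {
	role, ok := roles[membership{userID, clinicID}]
	if !ok {
		return errors.New("not a member of this clinic")
	}
	if role != requiredRole {
		return fmt.Errorf("role %q cannot perform this action", role)
	}
	return nil
}

func main() {
	fmt.Println(authorize("user_1", "clinic_a", "clinician")) // allowed: <nil>
	fmt.Println(authorize("user_1", "clinic_b", "clinician")) // denied: wrong clinic
}
```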


Why one service (not microservices)?

The previous system had two separate services — a main backend and a separate scheduling microservice. That caused:

  • Duplicate authentication code
  • Data syncing between two databases (and sync bugs)
  • Two deployments to coordinate
  • More things to break

The new system merges all clinical operations into one service. The only separate service is Telemetry, which is separate for a legitimate reason: it handles high-frequency analytics data (video tracking, session metrics) that operates at a completely different scale and rhythm from clinical CRUD operations.

Rule: Separate services only when the operational profile genuinely differs. Don't split for the sake of "microservices architecture."


Why Redis?

Redis is an in-memory data store used for three specific things:

  1. Booking holds — When a patient selects a time slot, it's held in Redis for a few minutes while they complete the booking. This prevents two patients from booking the same slot simultaneously without locking the database.
  2. Rate limiting — Limits how many requests a user or IP can make per minute.
  3. Webhook idempotency — Tracks processed event IDs to prevent duplicate processing when Clerk or other services retry webhook deliveries.

Redis is not used as a general cache. The database is fast enough.


Why Daily.co for video?

Daily.co provides HIPAA-compliant video rooms via API. A video room is created automatically when an appointment is booked and expires after the appointment. The platform never stores video content — Daily.co handles recording if needed.

Alternatives were evaluated:

  • Twilio Video — more expensive, more complex
  • WebRTC DIY — requires STUN/TURN server infrastructure, significant ongoing maintenance
  • Whereby — less developer control over room lifecycle

Why AWS for deployment?

The platform originally launched on Railway (a Heroku-like PaaS) for its simplicity during early development. We migrated to AWS for production because:

  • Reliability: Railway had no published SLA and experienced frequent production issues. AWS App Runner provides a 99.99% SLA.
  • HIPAA compliance: Railway does not offer a Business Associate Agreement (BAA). AWS provides BAA at no additional cost — required for healthcare SaaS.
  • We were already on AWS: S3 for file uploads and backups, and the scaling plan called for AWS RDS. Running compute on AWS too means one provider, one bill, and private networking between services.
  • No migration later: The scaling plan requires AWS RDS at Phase 2, enterprise isolation at Phase 3, and multi-region at Phase 4. Starting on AWS means zero provider migrations.

We use AWS App Runner — a serverless container service that works like Railway (push a container, it runs, auto-scales, handles SSL). The developer experience is identical: git push to main triggers a build and deploy via GitHub Actions. See the full infrastructure plan in AWS Infrastructure.
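A pipeline like the one described could look like the sketch below. This is an illustrative workflow, not the platform's actual file: the secret names, region, and image tag scheme are placeholders, and it assumes App Runner is configured to deploy automatically when a new image lands in ECR.

```yaml
# .github/workflows/deploy.yml (hypothetical sketch)
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE }}  # assumed secret name
          aws-region: us-east-1                           # placeholder region
      - uses: aws-actions/amazon-ecr-login@v2
      - run: |
          docker build -t "$ECR_REPO:$GITHUB_SHA" .
          docker push "$ECR_REPO:$GITHUB_SHA"
        env:
          ECR_REPO: ${{ secrets.ECR_REPO }}               # assumed secret name
```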


Why a monorepo?

Both services (Core API and Telemetry API) live in the same Git repository. This means:

  • Shared types are defined once and imported by both
  • A single go.mod — no dependency drift between services
  • Changes that span both services are a single commit
  • One CI/CD pipeline to maintain

The tradeoff is that a monorepo requires discipline: each service's code stays in its own directory (internal/core/, internal/telemetry/), and the services never import from each other's internal packages.
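A layout consistent with that rule might look like the tree below. Only the two internal/ directories are named in this document; the cmd/ entry points and the shared package are assumptions for illustration:

```
.
├── go.mod                  # single module shared by both services
├── cmd/
│   ├── core/main.go        # Core API entry point (assumed path)
│   └── telemetry/main.go   # Telemetry API entry point (assumed path)
└── internal/
    ├── shared/             # types imported by both services (assumed)
    ├── core/               # Core API code; never imports telemetry
    └── telemetry/          # Telemetry code; never imports core
```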