
External Service Providers

Comprehensive inventory of all third-party services that the RestartiX Platform integrates with, organized by domain. Each entry includes purpose, data exchanged, compliance requirements, and where it's documented.


Authentication & Identity

Clerk

Purpose: User authentication, session management, organization switching
Used by: Auth feature, all authenticated endpoints
Data exchanged: User identity (email, name), session tokens, organization membership
Compliance: BAA required (handles user PII). HIPAA-eligible plan needed.
Environment variables: CLERK_SECRET_KEY, CLERK_PUBLISHABLE_KEY, CLERK_WEBHOOK_SECRET
Failure impact: Critical — no authentication = no API access
Docs: features/auth/

Integration pattern: Clerk issues JWTs → Core API validates via Clerk SDK → sets RLS session variables (app.current_user_id, app.current_org_id, app.current_role).
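The claims-to-RLS handoff can be sketched as follows — a minimal Go sketch, assuming hypothetical names (`rlsClaims`, `rlsSessionSettings`); the actual claim extraction happens in the Clerk SDK, and only the three `app.*` variable names come from this document:

```go
package main

import "fmt"

// rlsClaims holds the identity fields extracted from a validated Clerk JWT.
// Field names here are illustrative, not Clerk's actual claim names.
type rlsClaims struct {
	UserID string
	OrgID  string
	Role   string
}

// rlsSessionSettings maps validated claims onto the session variables the
// RLS policies read.
func rlsSessionSettings(c rlsClaims) map[string]string {
	return map[string]string{
		"app.current_user_id": c.UserID,
		"app.current_org_id":  c.OrgID,
		"app.current_role":    c.Role,
	}
}

// setConfigSQL renders one parameterized statement per variable. Passing
// true as the third set_config argument scopes the value to the current
// transaction, so pooled connections carry no identity between requests.
func setConfigSQL(name string) string {
	return fmt.Sprintf("SELECT set_config('%s', $1, true)", name)
}

func main() {
	settings := rlsSessionSettings(rlsClaims{UserID: "user_123", OrgID: "org_456", Role: "clinician"})
	for name, value := range settings {
		// The Core API would run tx.Exec(setConfigSQL(name), value) at the
		// start of each request transaction.
		fmt.Printf("%s = %s\n", name, value)
	}
}
```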


Video & Communication

Daily.co

Purpose: Video call rooms for telehealth appointments
Used by: Appointments feature (video appointments)
Data exchanged: Room creation, meeting tokens, participant joins/leaves
Compliance: BAA required (video sessions may contain PHI). HIPAA-eligible plan needed.
Environment variables: DAILY_API_KEY
Failure impact: High — video appointments fail, in-person appointments unaffected
Docs: features/appointments/, features/integrations/

Integration pattern: Core API creates Daily rooms on appointment creation → generates time-limited meeting tokens per participant → frontend joins via Daily SDK.

Bunny Stream

Purpose: Exercise video hosting, adaptive streaming (HLS), CDN delivery
Used by: Exercise library, treatment plan sessions
Data exchanged: Video files (upload), stream URLs (playback), video metadata
Compliance: No PHI in video content (exercise demonstrations only, not patient recordings).
Environment variables: BUNNY_STREAM_API_KEY, BUNNY_STREAM_LIBRARY_ID, BUNNY_STREAM_CDN_HOSTNAME
Failure impact: High — exercise videos don't play, treatment plan sessions degraded
Docs: features/exercise-library/

Integration pattern: Admin uploads video → Bunny Stream processes (transcoding, HLS packaging) → CDN URL stored in exercises.video_url → patient streams via HLS.js on frontend.
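The upload side begins with a "create video" call before the binary upload; a minimal sketch of building that request, assuming Bunny Stream's two-step flow (create an entry, then upload the file to the returned video ID) — verify the endpoint and `AccessKey` header against Bunny's current API docs:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newCreateVideoRequest builds the call that registers a new video in a
// Bunny Stream library. Bunny then transcodes and HLS-packages the file
// uploaded in the follow-up step, and the resulting CDN URL is what gets
// stored in exercises.video_url.
func newCreateVideoRequest(apiKey, libraryID, title string) (*http.Request, error) {
	body, err := json.Marshal(map[string]string{"title": title})
	if err != nil {
		return nil, err
	}
	url := fmt.Sprintf("https://video.bunnycdn.com/library/%s/videos", libraryID)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	// Bunny authenticates Stream API calls with an AccessKey header
	// rather than a Bearer token.
	req.Header.Set("AccessKey", apiKey)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newCreateVideoRequest("$BUNNY_STREAM_API_KEY", "12345", "Shoulder mobility, level 1")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```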


File Storage

AWS S3

Purpose: Document storage (appointment files, PDF reports, form attachments)
Used by: Documents, appointments, forms, PDF templates
Data exchanged: Files (upload/download), signed URLs
Compliance: BAA required (files may contain PHI — reports, prescriptions). S3 encryption at rest enabled.
Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_S3_BUCKET, AWS_REGION
Failure impact: High — file uploads/downloads fail, PDF generation fails
Docs: features/documents/, features/integrations/

Integration pattern: Core API generates presigned URLs for upload/download → files encrypted at rest (AES-256) → org-scoped key prefixes (org-{id}/...) for tenant isolation.
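The tenant-isolation side of that pattern can be sketched with two small helpers — the `org-{id}/` prefix comes from this document, while the category segment and function names are illustrative:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// orgScopedKey prefixes every object key with org-{id}/ so IAM policies
// and presigned URLs can be constrained per tenant.
func orgScopedKey(orgID int64, category, filename string) string {
	// path.Join also collapses any "../" a caller might sneak into the name.
	clean := path.Join("/", category, filename)
	return fmt.Sprintf("org-%d%s", orgID, clean)
}

// sameOrg is the guard a download handler would run before presigning:
// never sign a key outside the requester's org prefix.
func sameOrg(orgID int64, key string) bool {
	return strings.HasPrefix(key, fmt.Sprintf("org-%d/", orgID))
}

func main() {
	key := orgScopedKey(42, "reports", "visit-summary.pdf")
	fmt.Println(key)                // org-42/reports/visit-summary.pdf
	fmt.Println(sameOrg(42, key))   // true: requester's own org
	fmt.Println(sameOrg(7, key))    // false: cross-tenant access refused
	// Presigning itself would use the AWS SDK (e.g. s3.NewPresignClient
	// in aws-sdk-go-v2) with a short expiry on the signed URL.
}
```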

AWS CloudFront (Optional)

Purpose: CDN for S3-hosted files (faster delivery, signed URLs)
Used by: Document downloads
Data exchanged: Cached file delivery
Compliance: Same as S3 (pass-through).
Environment variables: CLOUDFRONT_DISTRIBUTION_ID, CLOUDFRONT_KEY_PAIR_ID, CLOUDFRONT_PRIVATE_KEY
Failure impact: Low — falls back to direct S3 access
Docs: features/integrations/

Databases

PostgreSQL (Main)

Purpose: Primary business database — all feature data, RLS-enforced multi-tenancy
Provider: AWS RDS PostgreSQL
Data stored: All business entities (55 tables): patients, appointments, forms, services, etc.
Compliance: BAA required (contains PHI). Encryption at rest. 7-year audit retention.
Environment variables: DATABASE_URL
Failure impact: Critical — entire API down
Docs: database-overview.md, scaling-architecture.md

PostgreSQL (Telemetry)

Purpose: Compliance-grade audit, security, and privacy data (separate from business DB)
Provider: AWS RDS PostgreSQL with TimescaleDB extension
Data stored: Audit logs, security events, CCPA exclusions, staff activity (6 tables across 4 schemas)
Compliance: HIPAA §164.312(b) audit controls. Immutable audit trail. 6-7 year retention.
Environment variables: TELEMETRY_DATABASE_URL
Failure impact: Medium — API continues working, audit logging fails (must queue and retry)
Docs: ../telemetry/postgres-schema.sql

ClickHouse

Purpose: High-frequency analytics, media performance tracking, pose data
Provider: ClickHouse Cloud
Data stored: Analytics events, media sessions, buffering events, error events, pose frames, API metrics (7 tables)
Compliance: No direct PHI (hashed actor IDs, truncated IPs). Consent-level gated. TTL-based retention (6mo–2yr).
Environment variables: CLICKHOUSE_URL
Failure impact: Low — analytics/media tracking stops, core features unaffected
Docs: ../telemetry/clickhouse-schema.sql

Redis

Purpose: Caching, scheduling hold system, worker queues (audit/analytics), real-time features
Provider: AWS ElastiCache Redis
Data stored: Scheduling holds (TTL), worker job queues, cached query results, SSE channel state
Compliance: No persistent PHI (transient data only, TTL-enforced).
Environment variables: REDIS_URL
Failure impact: High — scheduling holds fail, worker processing stops, caching disabled
Docs: features/scheduling/

Geolocation & Privacy

MaxMind GeoLite2

Purpose: IP geolocation (country, region) for compliance middleware
Used by: Telemetry compliance pipeline (geo enrichment on every request)
Data exchanged: IP addresses → country/region lookup (local database, no API calls per request)
Compliance: Database downloaded locally. No PII sent to MaxMind during lookups.
Environment variables: MAXMIND_ACCOUNT_ID, MAXMIND_LICENSE_KEY
Failure impact: Low — geo enrichment returns "unknown", everything else works
Docs: ../telemetry/README.md

Integration pattern: GeoLite2 database downloaded on startup (and updated via POST /v1/admin/geo/update) → stored in-memory → every request enriched with country/region.

MaxMind Privacy Exclusions API

Purpose: CCPA "Do Not Sell" IP network lists
Used by: Telemetry privacy module (compliance middleware)
Data exchanged: Weekly sync: MaxMind API → CIDR network blocks stored in Telemetry PG
Compliance: Legal obligation (CCPA). IP ranges from excluded networks get consent_level = 0.
Environment variables: Same as GeoLite2 (MAXMIND_ACCOUNT_ID, MAXMIND_LICENSE_KEY)
Failure impact: Low — stale exclusion list used until next successful sync
Docs: ../telemetry/postgres-schema.sql (privacy schema), ../telemetry/api.md

Infrastructure & Hosting

AWS

Purpose: Production infrastructure: App Runner, RDS, ElastiCache, S3, CloudFront, VPC, Secrets Manager, CloudWatch
Used by: All services (compute, databases, caching, file storage, monitoring)
Services hosted: Core API (App Runner), Telemetry API (App Runner), PostgreSQL (RDS), Redis (ElastiCache)
Compliance: BAA available. HIPAA-eligible services used.
Environment variables: Managed via AWS Secrets Manager
Failure impact: Critical — entire platform down
Docs: scaling-architecture.md, monitoring.md, AWS Infrastructure

Monitoring & Alerting (Infrastructure)

Not to be confused with Telemetry. These tools monitor the servers and infrastructure — CPU, memory, connection pools, request latency, uptime. Telemetry monitors the product and users — video performance, audit trails, analytics events, compliance data. They are complementary layers:

| | Telemetry (yours) | Monitoring (third-party) |
| --- | --- | --- |
| Question | "Why can't this patient watch videos?" | "Why is the server slow?" |
| Audience | Product team, support, compliance | DevOps, on-call engineer |
| Data | Frontend events, middleware enrichment | Server metrics, logs, traces |
| Storage | ClickHouse Cloud + Telemetry PG (AWS RDS) | Provider's SaaS cloud |
| Required? | Yes (HIPAA audit, product features) | Recommended (ops visibility) |

See ../telemetry/ for the product telemetry layer.

What You Need By Phase

| Phase | Infrastructure | Monitoring Stack | Cost |
| --- | --- | --- | --- |
| Phase 1 (1-10 orgs, AWS) | CloudWatch built-in metrics | UptimeRobot (free) + Slack alerts | Free |
| Phase 2 (10-50 orgs, AWS) | AWS RDS + App Runner + CloudWatch | Datadog or CloudWatch + PagerDuty | ~$100-200/mo |
| Phase 3+ (50+ orgs) | Multi-instance, read replicas | Full Datadog APM + PagerDuty on-call | ~$500-800/mo |

Phase 1 is enough to launch. AWS CloudWatch gives you CPU/memory/logs in the console. UptimeRobot pings /health every 5 minutes and alerts via Slack if it's down. That covers the basics.

UptimeRobot / Pingdom (Phase 1 — Start Here)

Purpose: External health check monitoring — "is the API up?"
Used by: Production availability monitoring
Data exchanged: HTTP GET to /health endpoint every 5 minutes
Cost: Free tier (50 monitors, 5-min interval)
Failure impact: None — monitoring only, platform unaffected
Docs: monitoring.md

Why start here: This gives you external uptime monitoring. If AWS itself is experiencing issues, you won't know from CloudWatch alone. UptimeRobot checks from outside and alerts you.

Slack

Purpose: Alert delivery channel for all phases
Used by: UptimeRobot alerts (Phase 1), Datadog warnings (Phase 2+), deployment notifications
Data exchanged: Alert messages via incoming webhooks (no PHI)
Cost: Free (existing workspace)
Failure impact: None — notifications delayed

Datadog (Phase 2+)

Purpose: Server metrics, structured logging, APM traces, custom dashboards
Used by: All services — Core API, databases, workers
Data exchanged: Application logs (PHI sanitized before shipping), CPU/memory/connection metrics, request traces
Compliance: No PHI in logs (sanitized at application level before export).
Environment variables: DD_API_KEY, DD_SITE
Cost: ~$31/mo (2 hosts, 50 metrics, 10GB logs) → scales with infrastructure
Failure impact: None — observability degraded, platform unaffected
Docs: monitoring.md

Why Phase 2: In Phase 1, CloudWatch + UptimeRobot cover the basics. When you scale to read replicas and more complex connection pool management, you may want deeper APM metrics — connection pool utilization, query latency percentiles, replication lag. That's when Datadog earns its cost.

What Datadog gives you that Telemetry doesn't:

  • Connection pool utilization trending toward exhaustion
  • p95/p99 API response times across all endpoints
  • Slow query detection and alerting
  • Server CPU/memory/goroutine monitoring
  • Correlated request traces (which query made this endpoint slow?)

PagerDuty (Phase 2+)

Purpose: Critical alert escalation — pages the on-call engineer at 3am
Used by: Datadog critical alerts → PagerDuty → phone call/SMS
Data exchanged: Alert metadata: "Connection pool exhaustion on core-api" (no PHI)
Cost: ~$21/user/mo
Failure impact: None — alerts delayed, not lost (Datadog retries)
Docs: monitoring.md

Why Phase 2: With 1-10 orgs, you can check Slack in the morning. With 50+ orgs and paying customers, you need someone woken up when the database runs out of connections.


Planned (Not Yet Integrated)

Payment Processor (Stripe / equivalent)

Purpose: Service plan billing, product order payments
Will be used by: Service plans, patient product orders
Data exchanged: Payment intents, subscription management, webhook events
Compliance: PCI DSS Level 1. No card data stored locally (tokenized).
Status: Not yet chosen. Required before billing features go live.
Docs: features/services/ (billing model defined, payment integration TBD)

Email Service (Resend / SendGrid / equivalent)

Purpose: Transactional emails (appointment confirmations, form requests, automation actions)
Will be used by: Automations (send_email action), appointment notifications
Data exchanged: Email addresses, email content (may reference patient names/appointments)
Compliance: BAA required if email content includes PHI.
Status: Not yet chosen. Required for automation send_email action.
Docs: features/automations/

SMS Service (Twilio / equivalent)

Purpose: Appointment reminders, session reminders
Will be used by: Automations (send_sms action)
Data exchanged: Phone numbers, message content
Compliance: BAA required if messages include PHI.
Status: Not yet chosen. Optional, email may suffice initially.
Docs: features/automations/

Summary

By Criticality

| Level | Services | Impact if Down |
| --- | --- | --- |
| Critical | PostgreSQL (Main), Clerk, AWS (App Runner + RDS) | API completely unavailable |
| High | Redis, AWS S3, Daily.co, Bunny Stream | Major features degraded |
| Medium | PostgreSQL (Telemetry) | Audit logging queued, compliance risk |
| Low | ClickHouse, MaxMind, CloudFront | Analytics/enrichment stops, core features fine |
| None | Datadog, PagerDuty, Slack, UptimeRobot | Observability only |

By Compliance Requirement

| Requirement | Services |
| --- | --- |
| BAA required | Clerk, Daily.co, AWS (S3/RDS), PostgreSQL (Main), PostgreSQL (Telemetry) |
| BAA required (when chosen) | Payment processor, Email service, SMS service |
| No BAA needed | Bunny Stream (no PHI), ClickHouse (hashed data), Redis (transient), MaxMind (local DB), Monitoring stack |

Environment Variables (Complete)

```env
# Auth
CLERK_SECRET_KEY=sk_live_...
CLERK_PUBLISHABLE_KEY=pk_live_...
CLERK_WEBHOOK_SECRET=whsec_...

# Databases
DATABASE_URL=postgresql://...                    # Main PG
TELEMETRY_DATABASE_URL=postgresql://...          # Telemetry PG (TimescaleDB)
CLICKHOUSE_URL=clickhouse://...                  # ClickHouse
REDIS_URL=redis://...                            # Redis

# File Storage
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_S3_BUCKET=restartix-files
AWS_REGION=eu-central-1

# CDN (optional)
CLOUDFRONT_DISTRIBUTION_ID=E...
CLOUDFRONT_KEY_PAIR_ID=K...
CLOUDFRONT_PRIVATE_KEY=-----BEGIN RSA PRIVATE KEY-----...

# Video Streaming
BUNNY_STREAM_API_KEY=...
BUNNY_STREAM_LIBRARY_ID=...
BUNNY_STREAM_CDN_HOSTNAME=...

# Video Calls
DAILY_API_KEY=...

# Telemetry
TELEMETRY_ENCRYPTION_KEY=...                     # PII encryption key
MAXMIND_ACCOUNT_ID=...                           # GeoIP + Privacy Exclusions
MAXMIND_LICENSE_KEY=...

# Monitoring
DD_API_KEY=...                                   # Datadog (Phase 2+)
DD_SITE=datadoghq.com

# Payments (planned)
# STRIPE_SECRET_KEY=sk_live_...
# STRIPE_WEBHOOK_SECRET=whsec_...

# Email (planned)
# RESEND_API_KEY=re_...

# SMS (planned)
# TWILIO_ACCOUNT_SID=AC...
# TWILIO_AUTH_TOKEN=...
```

Total Count

| Status | Count |
| --- | --- |
| Active | 13 services (Clerk, Daily.co, Bunny Stream, AWS S3, CloudFront, PG Main, PG Telemetry, ClickHouse, Redis, MaxMind ×2, AWS (compute/networking), Datadog) |
| Optional | 3 services (CloudFront, UptimeRobot/Pingdom, PagerDuty) |
| Planned | 3 services (Payment, Email, SMS) |