Backup and Disaster Recovery Strategy
Executive Summary
RestartiX processes state-funded insurance claims where exercise and therapy data serves as proof of service delivery. Loss of this data would:
- Prevent reimbursement claims from insurance providers
- Eliminate fraud prevention evidence
- Expose the organization to legal liability
- Violate HIPAA 6-year medical record retention requirements
- Undermine audit compliance for state funding
Therefore: Data backup is not optional. This document defines a defense-in-depth backup strategy with multiple independent layers.
Compliance and Legal Requirements
Data Retention Mandates
| Data Category | Retention Period | Legal Basis | Consequence of Loss |
|---|---|---|---|
| Exercise/therapy logs | 7 years | State insurance fraud prevention | Cannot prove services delivered, risk fraud accusations |
| Appointments | 6 years | HIPAA medical records | Cannot defend malpractice claims |
| Signed consent forms | 6 years | GDPR Art. 7 + HIPAA | Cannot prove patient consent |
| Prescriptions/reports | 6 years | HIPAA | Medical-legal liability |
| Audit logs | 6 years | HIPAA §164.312(b) | Cannot prove security compliance |
| Insurance claim metadata | 7 years | State financial audit requirements | Cannot reconcile payments |
Why 7 Years?
State financial audits can request records up to 7 years retroactively. If you cannot produce exercise logs proving services were delivered, you may be required to refund state payments and face fraud investigations.
Backup Architecture: The 3-2-1-1 Rule
We implement an enhanced version of the industry-standard 3-2-1 rule, adding a fourth layer for state compliance:
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 0: LIVE PRODUCTION DATABASE (NEON) │
│ - Primary serverless PostgreSQL (Neon) │
│ - RLS-enforced multi-tenant isolation │
│ - Real-time data, constantly changing │
│ - RPO: 0 seconds (no data loss tolerance during operation) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 1: NEON MANAGED BACKUPS (VENDOR-CONTROLLED) │
│ - Point-in-time recovery (PITR): 7-30 days │
│ - Automated continuous backup │
│ - Managed by Neon infrastructure │
│ - Fast restore (minutes) │
│ │
│ ⚠️ LIMITATION: Vendor lock-in. If Neon has catastrophic │
│ failure or goes out of business, this layer is lost. │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 2: DAILY LOGICAL BACKUPS (OUR CONTROL - PRIMARY SAFETY) │
│ - Daily pg_dump to object storage (S3/R2/Backblaze) │
│ - Different provider than Neon (different failure domain) │
│ - Encrypted at rest (AES-256) │
│ - Versioned and immutable (cannot be deleted/modified) │
│ - Retention: 7 years │
│ - Format: Custom PostgreSQL dump (compressed) │
│ │
│ ✅ GUARANTEES: │
│ - Restorable to any PostgreSQL instance │
│ - Independent of Neon (vendor independence) │
│ - Protected from ransomware (immutable storage) │
│ - Meets state audit requirements │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 3: WEEKLY CROSS-REGION REPLICATION (GEOGRAPHIC SAFETY) │
│ - Weekly copy of Layer 2 backups to different geographic zone │
│ - Example: Primary EU-Central → Replica US-East │
│ - Protection against regional disasters │
│ - Same retention: 7 years │
│ - Same immutability guarantees │
│ │
│ ✅ GUARANTEES: │
│ - Survives entire AWS region failure │
│ - Survives natural disasters (earthquakes, floods) │
│ - Survives geopolitical events (war, sanctions) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 4: QUARTERLY OFFLINE ARCHIVE (COLD STORAGE - OPTIONAL) │
│ - Quarterly snapshot exported to air-gapped storage │
│ - Stored offline (disconnected from network) │
│ - Physical media or encrypted external drive │
│ - Retention: 7 years │
│ │
│ ✅ GUARANTEES: │
│ - Survives complete cloud infrastructure compromise │
│ - Ultimate protection against ransomware │
│ - Physical possession for legal/audit purposes │
│ │
│ ⚠️ TRADE-OFF: Manual process, slower restore time │
│ 📋 RECOMMENDATION: Only if state audits explicitly require │
└─────────────────────────────────────────────────────────────────┘
Disaster Recovery Scenarios
Scenario Matrix
| Disaster | Layer 1 (Neon) | Layer 2 (Our S3) | Layer 3 (Cross-Region) | Layer 4 (Offline) |
|---|---|---|---|---|
| Accidental DELETE query | ✅ Restore via PITR (5 min) | ✅ Restore from yesterday (30 min) | ✅ Restore from last week (1 hour) | ✅ Restore from last quarter (4 hours) |
| Ransomware encrypts database | ❌ Encrypted | ✅ Immutable backup survives | ✅ Geographic copy survives | ✅ Offline copy survives |
| Neon regional outage | ⚠️ Degraded (failover 15 min) | ✅ Restore to new instance (1 hour) | ✅ Restore from replica (1 hour) | ✅ Restore from offline (4 hours) |
| Neon bankruptcy/shutdown | ❌ Lost after notice period | ✅ Full restore to any Postgres | ✅ Full restore to any Postgres | ✅ Full restore to any Postgres |
| AWS S3 regional failure | ✅ Neon still operational | ❌ Primary backups unavailable | ✅ Cross-region copy available | ✅ Offline copy available |
| Developer accidentally drops table | ✅ PITR restore (10 min) | ✅ Restore specific table (20 min) | ✅ Restore specific table (30 min) | ✅ Restore specific table (2 hours) |
| State audit requests 5-year-old data | ❌ Outside retention window | ✅ Retrieve from S3 archive | ✅ Retrieve from cross-region | ✅ Retrieve from offline archive |
| Hacker deletes all Neon backups | ❌ Vendor backups compromised | ✅ Separate credentials, survives | ✅ Separate credentials, survives | ✅ Air-gapped, survives |
| Complete internet/cloud collapse | ❌ Inaccessible | ❌ Inaccessible | ❌ Inaccessible | ✅ Physical possession |
Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO)
| Scenario | RTO (Max Downtime) | RPO (Max Data Loss) | Recovery Source |
|---|---|---|---|
| Minor data corruption | 15 minutes | 0 seconds | Neon PITR |
| Table accidentally dropped | 30 minutes | < 1 hour | Neon PITR or daily backup |
| Database-wide corruption | 2 hours | < 24 hours | Daily backup (Layer 2) |
| Regional disaster | 4 hours | < 24 hours | Cross-region backup (Layer 3) |
| Complete provider failure | 8 hours | < 7 days | Weekly cross-region + daily backups |
| Catastrophic global event | 24 hours | < 90 days | Offline archive (Layer 4) |
Implementation Details
Layer 1: Neon Managed Backups
What Neon Provides:
- Continuous backup (Write-Ahead Log streaming)
- Point-in-time recovery (PITR) to any second within retention window
- Retention: Varies by plan (7 days free, 30 days on paid plans)
- Automatic snapshots every 6 hours
Configuration:
Neon Project Settings:
compute_units: Auto-scale (0.25 - 4 CU)
storage: Unlimited (pay per GB)
backup_retention: 30 days (Business plan)
pitr_enabled: true
Estimated Cost (1000 orgs, 500GB DB):
- Compute: ~$200-400/month
- Storage: ~$75/month (500GB × $0.15/GB)
- Backups: Included in plan
Total: ~$275-475/month
Our Responsibility:
- Monitor Neon status page for outages
- Test PITR restore monthly (see Testing section)
- Understand restore procedure (document in runbook)
Layer 2: Daily Logical Backups (CRITICAL LAYER)
Why This Layer is Critical:
- Vendor independence: Can restore to any PostgreSQL provider
- Fraud defense: Immutable proof of historical data
- Audit compliance: Long-term retention (7 years)
- Ransomware protection: Write-once, read-many storage
Backup Schedule:
Frequency: Daily at 02:00 UTC (low-traffic window)
Method: pg_dump with custom format
Compression: gzip level 9
Encryption: AES-256 (separate key from application encryption)
Storage Strategy:
Provider: AWS S3 or Cloudflare R2 (different from Neon infrastructure)
Bucket Configuration:
- Versioning: Enabled
- Object Lock: COMPLIANCE mode (cannot delete for 7 years)
- Lifecycle Policy:
* Days 0-90: S3 Standard (hot, fast retrieval)
* Days 91-730: S3 Glacier Instant Retrieval (warm)
* Days 731+: S3 Glacier Deep Archive (cold, 12hr retrieval)
- Encryption: AES-256 (server-side, AWS-managed keys)
- Replication: Enable to Layer 3 (cross-region)
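The bucket settings above can be applied with the AWS CLI. The following is a configuration sketch, not a definitive script: the bucket name and region are this document's examples, and Object Lock must be enabled at bucket creation time (it cannot be added later).

```shell
# Sketch: create the primary backup bucket with versioning and a 7-year
# COMPLIANCE-mode Object Lock default. Bucket name and region are this
# document's examples; run with credentials for the backup service account.
aws s3api create-bucket \
  --bucket restartix-backups-primary \
  --region eu-central-1 \
  --create-bucket-configuration LocationConstraint=eu-central-1 \
  --object-lock-enabled-for-bucket

# Object Lock at creation also enables versioning; this makes it explicit
aws s3api put-bucket-versioning \
  --bucket restartix-backups-primary \
  --versioning-configuration Status=Enabled

# default retention: COMPLIANCE mode, 7 years -- no one can delete earlier
aws s3api put-object-lock-configuration \
  --bucket restartix-backups-primary \
  --object-lock-configuration \
    '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Years":7}}}'
```

The Glacier lifecycle transitions would be configured separately via `aws s3api put-bucket-lifecycle-configuration`.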
Naming Convention:
s3://restartix-backups-primary/
├── daily/
│ ├── 2026-02-15-core-full.pgdump.gz.enc
│ ├── 2026-02-14-core-full.pgdump.gz.enc
│ └── ...
├── weekly/ (Sunday snapshots, kept separately)
│ ├── 2026-02-09-core-full.pgdump.gz.enc
│ └── ...
└── monthly/ (First of month, kept separately)
├── 2026-02-01-core-full.pgdump.gz.enc
    └── ...
Backup Process (Automated):
- Export: pg_dump from Neon (read replica if available)
- Compress: gzip -9 (90% compression ratio typical)
- Encrypt: AES-256 with backup-specific encryption key
- Upload: S3 with metadata (DB size, row counts per org, checksum)
- Verify: Download and verify checksum
- Alert: Notify if backup fails, incomplete, or suspiciously small
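The compress/encrypt/checksum/verify steps above can be sketched end-to-end. This is a minimal, locally runnable sketch: a dummy file stands in for real `pg_dump` output, the key and filenames are illustrative placeholders, and in production the key would come from a secrets manager with an S3 upload between the checksum and verify steps.

```shell
# Backup pipeline sketch: compress -> encrypt -> checksum -> verify -> restore
# roundtrip. Dummy data stands in for the real pg_dump archive.
set -eu
cd "$(mktemp -d)"
BACKUP_KEY="example-backup-key"   # placeholder; production: secrets manager

printf 'pretend this is a pg_dump custom-format archive\n' > core-full.pgdump

# compress (keep the original so we can compare after the roundtrip)
gzip -9 -c core-full.pgdump > core-full.pgdump.gz

# encrypt with the backup-specific key (separate from application keys)
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in core-full.pgdump.gz -out core-full.pgdump.gz.enc -k "$BACKUP_KEY"

# checksum recorded in backup metadata, then re-verified (post-upload check)
sha256sum core-full.pgdump.gz.enc > core-full.sha256
sha256sum -c core-full.sha256

# restore path: decrypt + decompress must reproduce the original bytes
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in core-full.pgdump.gz.enc -out roundtrip.pgdump.gz -k "$BACKUP_KEY"
gunzip -f roundtrip.pgdump.gz
cmp core-full.pgdump roundtrip.pgdump && echo "roundtrip OK"
```

The roundtrip comparison at the end is exactly what the monthly restore test automates at full scale.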
Estimated Cost:
Database Size: 500 GB
Daily Growth: 1 GB
Compression Ratio: 90% (compressed: 50 GB per backup)
Storage Costs (7 years = 2,555 days):
Year 1 (365 days):
- Daily backups: 365 × 50 GB = 18.25 TB
- 90 days × $0.023/GB = $103/month (S3 Standard)
- 275 days × $0.004/GB = $55/month (Glacier Instant)
Years 2-7 (all in Glacier Deep Archive):
- Total: ~100 TB
- Cost: 100,000 GB × $0.00099/GB = $99/month
Total: ~$160/month (scales with DB size)
Per-org cost: $0.16/month (negligible)
Layer 3: Weekly Cross-Region Replication
Purpose:
- Geographic redundancy (survive regional disasters)
- Compliance with state requirements for off-site backups
- Defense against geopolitical/infrastructure risks
Configuration:
Source: eu-central-1 (Frankfurt) - Primary backup bucket
Destination: us-east-1 (Virginia) - Cross-region replica
Replication Rule:
- Frequency: Weekly (Sunday after daily backup completes)
- What to replicate: Weekly and monthly backups only (not all daily)
- Storage class: S3 Glacier Instant Retrieval (cheaper, same compliance)
- Retention: 7 years (same as primary)
- Encryption: Replicate with same AES-256
Estimated Cost:
- Storage: ~$50/month (subset of daily backups)
- Data transfer: ~$10/month (cross-region replication)
Total: ~$60/month
Layer 4: Quarterly Offline Archive (OPTIONAL)
When to Implement:
- State auditors explicitly request offline backups
- High-risk contracts with zero-tolerance data loss clauses
- Legal requirement for physical evidence custody
- Enhanced ransomware protection (air-gapped)
Implementation Options:
Option A: Encrypted External Drives
Hardware:
- 2TB enterprise-grade external SSD
- Hardware encryption (FIPS 140-2 certified)
Process:
1. Quarterly: Download latest monthly backup from S3
2. Verify checksum
3. Copy to encrypted drive
4. Store in physical safe (fireproof, waterproof)
5. Document in audit log (who, when, where)
Cost: ~€200/year (drive replacement every 3 years)
Option B: Tape Backup (Enterprise)
Hardware:
- LTO-9 tape drive (~€3,000)
- LTO-9 tapes (~€100/tape, 18TB capacity)
Process:
- Quarterly: Write backup to tape
- Store tapes in off-site vault service
- 30-year shelf life (exceeds 7-year requirement)
Cost: ~€500/year (vault service + tapes)
Recommendation: Only for enterprise/hospital deployments
Recommendation: Start without Layer 4. Add only if:
- State audit explicitly requires it
- Legal counsel advises it
- Insurance policy mandates it
Backup Testing and Validation
Critical Rule: Untested backups are not backups. They are "hopes."
Monthly Restore Test (Automated)
Schedule: 1st of every month, 03:00 UTC
Duration: ~2 hours
Environment: Isolated staging database (not production)
Test Procedure:
1. Select random daily backup from previous month
2. Download from S3
3. Decrypt
4. Decompress
5. Restore to temporary PostgreSQL instance
6. Run validation queries:
- Row count per organization
- Verify RLS policies functional
- Check foreign key integrity
- Sample data spot-checks (10 random appointments)
- Verify encryption keys can decrypt encrypted fields
7. Generate test report
8. Alert on-call if ANY validation fails
9. Destroy temporary instance
Success Criteria:
- Restore completes without errors
- All row counts match backup metadata
- All sampled data is readable and correct
- Time to restore < 2 hours
Quarterly Disaster Recovery Drill
Schedule: Last Saturday of quarter
Duration: 4 hours
Participants: Engineering team + CTO
Drill Scenarios (rotate each quarter):
Q1: Neon regional outage → restore from Layer 2
Q2: S3 bucket compromised → restore from Layer 3 (cross-region)
Q3: Complete provider failure → restore to different provider (e.g., Supabase)
Q4: Ransomware attack → restore from immutable backup
Success Criteria:
- Full production database restored to functional state
- Application can connect and serve requests
- RTO/RPO targets met
- All team members understand procedure
- Runbook updated with lessons learned
Annual Audit Compliance Test
Schedule: Before annual state audit
Duration: 1 day
Purpose: Prove 7-year retention and data integrity
Test Procedure:
1. Select 10 random patients from 5-7 years ago
2. Restore backup from that period (Layer 2 or 3)
3. Extract their exercise logs, appointments, consent forms
4. Verify data is complete and unmodified
5. Generate audit report with:
- Patient names (anonymized for test)
- Service dates
- Exercise/therapy session counts
- Proof of consent signatures
6. Present to auditor (if requested)
Success Criteria:
- All requested historical data retrievable
- Data matches original records (if cross-referenced)
- Restore time < 4 hours
- Data format is human-readable (for auditor review)
Data Integrity and Immutability
Cryptographic Verification
Every backup includes:
{
"backup_id": "2026-02-15-daily-001",
"timestamp": "2026-02-15T02:00:00Z",
"database_size_bytes": 524288000,
"compressed_size_bytes": 52428800,
"sha256_checksum": "a1b2c3d4e5f6...",
"encryption_key_version": 2,
"organization_count": 1000,
"row_counts": {
"appointments": 45000,
"patients": 12000,
"exercise_logs": 180000,
"forms": 30000
}
}
Verification Process:
- Before upload: Calculate SHA-256 checksum
- After upload: Download first 1MB and verify partial checksum
- Monthly test: Full download and checksum verification
- Before restore: Verify checksum matches metadata
Why? Detects:
- Silent data corruption during transfer
- Bitrot in storage media
- Tampering attempts
- Incomplete uploads
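The "first 1MB" partial-checksum check can be illustrated locally. A sketch with a deterministic stand-in file (in practice a ranged S3 GET would fetch the first megabyte); filenames and sizes are illustrative:

```shell
# Partial-checksum sketch: hash the first 1 MB at upload time, re-hash after
# upload, and show that a corrupted byte is detected. Deterministic dummy data.
set -eu
cd "$(mktemp -d)"
head -c 2097152 /dev/zero > uploaded.enc   # 2 MB stand-in for the backup file

# recorded in backup metadata at upload time
EXPECTED=$(head -c 1048576 uploaded.enc | sha256sum | awk '{print $1}')

# post-upload check: re-read the first 1 MB and compare
ACTUAL=$(head -c 1048576 uploaded.enc | sha256sum | awk '{print $1}')
[ "$EXPECTED" = "$ACTUAL" ] && echo "partial checksum OK"

# corrupt one byte in the first megabyte -- the check now fails loudly
printf 'X' | dd of=uploaded.enc bs=1 conv=notrunc 2>/dev/null
CORRUPT=$(head -c 1048576 uploaded.enc | sha256sum | awk '{print $1}')
[ "$EXPECTED" != "$CORRUPT" ] && echo "corruption detected"
```

The partial check is cheap enough to run after every upload; the full-file verification is reserved for the monthly restore test.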
Immutability Enforcement
S3 Object Lock (COMPLIANCE Mode):
Configuration:
Mode: COMPLIANCE
Retention: 7 years from creation date
Guarantees:
- Cannot be deleted by anyone (even AWS root account)
- Cannot be modified (append-only)
- Cannot shorten retention period
- Can only be deleted after 7 years expire
Legal Basis:
- HIPAA: 6-year medical record retention
- State: 7-year financial audit window
- GDPR: Allows retention for legal compliance (Art. 17(3))
Ransomware Protection: Even if an attacker:
- Compromises AWS credentials
- Deletes production database
- Deletes Neon backups
- Attempts to delete S3 backups
Result: Backups survive. Object Lock prevents deletion.
Backup Security
Access Control
Who Can Access Backups:
Production Database (Neon):
- Application service account (read/write)
- Database administrator (superadmin role)
Layer 2 Backups (S3):
- Automated backup job (write-only service account)
- Database administrator (read-only for restore)
- Security team (read-only for audit)
Layer 3 Backups (Cross-region):
- Replication service account (write-only)
- CTO only (read-only for disaster recovery)
Layer 4 Backups (Offline):
- Physical access: CTO + COO (dual-custody)
Principle: Minimum necessary access, separation of duties
Encryption Keys
Key Hierarchy:
Application Data Encryption:
- Purpose: Encrypt sensitive fields (phone, API keys)
- Storage: AWS Secrets Manager
- Rotation: Quarterly
Backup Encryption:
- Purpose: Encrypt backup files before S3 upload
- Storage: Separate from application keys (AWS Secrets Manager)
- Rotation: Annually
- Why separate? If app keys compromised, backups remain safe
S3 Server-Side Encryption:
- Purpose: Encryption at rest in S3
- Storage: AWS-managed keys (SSE-S3)
- Rotation: Automatic (AWS handles)
Key Backup: All encryption keys backed up to:
- Password manager (1Password/Bitwarden) - shared vault, restricted access
- Printed copy in physical safe (disaster recovery)
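The "separate backup key" principle can be sketched concretely. This is an illustration only: keys are written to local files here for demonstration, whereas production keys live in the secrets manager and never touch disk.

```shell
# Sketch of key separation: the backup key is generated and stored apart from
# application keys, so compromising one reveals nothing about the other.
set -eu
cd "$(mktemp -d)"
umask 077                                  # owner-only permissions on key files

openssl rand -hex 32 > backup-key-v1.hex   # 256-bit key for backup encryption
openssl rand -hex 32 > app-key-v1.hex      # application key, separate lifecycle

# independence check: a leaked application key does not expose backups
cmp -s backup-key-v1.hex app-key-v1.hex || echo "keys are independent"
```

Rotating the backup key annually (as specified above) means keeping each key version available for restores of backups encrypted under it; the metadata's `encryption_key_version` field records which version to use.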
State Audit Compliance
What Auditors Will Request
Based on typical state insurance audits:
| Request | How We Provide It | Source |
|---|---|---|
| "Prove services were delivered for Patient X in 2023" | Export exercise logs, appointments, signed forms | Layer 2/3 backup (historical) |
| "Show all payments received vs services delivered" | Cross-reference appointments with invoices | Audit log + backup |
| "Prove this data hasn't been tampered with" | SHA-256 checksums, immutable S3 Object Lock | Backup metadata |
| "How do you prevent data loss?" | This document + test reports | Documentation |
| "Show me a backup from 5 years ago" | Restore from Layer 2 (Glacier Deep Archive) | S3 lifecycle retrieval |
| "What if your cloud provider fails?" | Layer 3 cross-region backup | Alternative provider restore |
| "Prove patients consented to treatment" | Signed consent forms with timestamps | Forms backup (status='signed') |
Audit-Ready Documentation
Maintain in a physical binder (for in-person audits):
- This backup strategy document (printed)
- Monthly backup test reports (last 12 months)
- Quarterly DR drill reports (last 4 quarters)
- Backup retention policy (signed by CTO)
- Data processing agreement with Neon (DPA)
- Data processing agreement with AWS (DPA)
- Encryption key rotation logs (dates only, not keys)
- Incident response plan (see monitoring.md)
Operational Runbooks
Runbook 1: Restore from Neon PITR (Minor Issues)
When to Use: Accidental DELETE/UPDATE, recent data corruption
Steps:
- Identify exact timestamp of corruption (check audit log)
- Log into Neon console
- Navigate to project → Backups → Point-in-Time Recovery
- Select timestamp (up to second precision)
- Choose: "Create new branch" or "Restore to main" (use new branch for safety)
- Wait for restore (5-15 minutes typically)
- Verify restored data in new branch
- If correct: promote branch to main OR export and import
- Document incident in audit log
RTO: 15-30 minutes RPO: 0 seconds (for any point within the 30-day PITR window)
Runbook 2: Restore from Daily Backup (Database Corruption)
When to Use: Neon backups unavailable, major corruption, data older than PITR window
Steps:
- Identify target restore date
- Download backup from S3:
  aws s3 cp s3://restartix-backups-primary/daily/YYYY-MM-DD-core-full.pgdump.gz.enc ./
- Verify checksum:
  sha256sum YYYY-MM-DD-core-full.pgdump.gz.enc  # compare with metadata file
- Decrypt:
  openssl enc -d -aes-256-cbc -in backup.enc -out backup.pgdump.gz -k $BACKUP_ENCRYPTION_KEY
- Decompress:
  gunzip backup.pgdump.gz
- Provision new PostgreSQL instance (Neon, Supabase, or self-hosted)
- Restore:
  pg_restore -d restartix_platform -v backup.pgdump
- Verify:
- Row counts per organization
- Sample data spot-checks
- Application can connect
- Switch application connection string to restored instance
- Monitor for issues (check logs, error rates)
- Document incident
RTO: 1-2 hours RPO: < 24 hours
Runbook 3: Restore from Cross-Region Backup (Regional Disaster)
When to Use: AWS region failure, Neon regional outage, primary S3 bucket unavailable
Steps:
- Access cross-region backup bucket:
  aws s3 ls s3://restartix-backups-replica/weekly/
- Download most recent weekly backup
- Follow Runbook 2 steps 3-11 (same restore procedure)
- Provision instance in DIFFERENT region
- Update DNS / load balancer to point to new region
RTO: 2-4 hours RPO: < 7 days (weekly backup)
Runbook 4: Restore from Offline Archive (Catastrophic Scenario)
When to Use: All cloud infrastructure compromised/unavailable
Steps:
- Retrieve offline backup media from physical safe (requires dual-custody)
- Connect encrypted drive to secure workstation (air-gapped)
- Decrypt and extract backup
- Provision PostgreSQL instance (on-premises or different cloud provider)
- Follow Runbook 2 steps 6-11
- Manually configure application deployment to new infrastructure
RTO: 8-24 hours RPO: < 90 days (quarterly backup)
Monitoring and Alerting
Backup Health Metrics
| Metric | Alert Threshold | Severity | Action |
|---|---|---|---|
| Daily backup failed | 1 failure | Critical | Page on-call, investigate immediately |
| Backup size anomaly | ±50% from expected | High | Verify data integrity, check for corruption |
| Backup upload incomplete | Any incomplete | Critical | Retry upload, verify network |
| Checksum mismatch | Any mismatch | Critical | Re-run backup, investigate corruption |
| S3 bucket replication lag | > 24 hours | Medium | Check replication rules, AWS status |
| Monthly restore test failed | Any failure | High | Debug restore procedure, fix issues |
| Backup older than 25 hours | No new backup in 25h | High | Check backup job, Neon connectivity |
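The "backup older than 25 hours" alert in the table above reduces to a simple freshness check. A locally runnable sketch, assuming GNU `touch`/`find` and an illustrative local directory standing in for backup metadata:

```shell
# Freshness check sketch: alert if no backup file is newer than 25 hours.
set -eu
BACKUP_DIR=$(mktemp -d)
MAX_AGE_MIN=$((25 * 60))

# simulate one stale backup (26h old) and one fresh backup
touch -d '26 hours ago' "$BACKUP_DIR/2026-02-14-core-full.pgdump.gz.enc"
touch "$BACKUP_DIR/2026-02-15-core-full.pgdump.gz.enc"

# alert condition: zero backups within the freshness window
FRESH=$(find "$BACKUP_DIR" -name '*.pgdump.gz.enc' -mmin -"$MAX_AGE_MIN" | wc -l)
if [ "$FRESH" -eq 0 ]; then
  echo "ALERT: no backup in the last 25 hours"
else
  echo "OK: $FRESH fresh backup(s)"
fi
```

The 25-hour threshold (rather than 24) gives the daily 02:00 UTC job an hour of slack before paging.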
Dashboards
Grafana / Datadog:
- Backup job success rate (7-day trend)
- Backup file sizes (detect growth anomalies)
- Restore test results (monthly pass/fail)
- S3 storage costs (budget monitoring)
- Time to complete backup (performance trend)
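The size-anomaly signal on the dashboard (the ±50% threshold from the metrics table) can be sketched as a comparison against the previous backup. The byte counts here are illustrative stand-ins for sizes read from backup metadata:

```shell
# Size-anomaly sketch: flag a backup that deviates more than 50% from the
# previous one. Values are illustrative, not real metadata.
set -eu
PREV_BYTES=52428800      # yesterday's compressed backup (50 MB)
CURR_BYTES=20000000      # today's backup -- suspiciously small

LOW=$((PREV_BYTES / 2))          # -50% bound
HIGH=$((PREV_BYTES * 3 / 2))     # +50% bound
if [ "$CURR_BYTES" -lt "$LOW" ] || [ "$CURR_BYTES" -gt "$HIGH" ]; then
  STATUS="ALERT: backup size ${CURR_BYTES}B outside ${LOW}B-${HIGH}B"
else
  STATUS="OK"
fi
echo "$STATUS"
```

A sudden shrink often means a failed or partial dump; a sudden growth can indicate runaway data or a compression failure, so both directions alert.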
Cost Summary (1000 Organizations, 500GB Database)
| Layer | Provider | Monthly Cost | Annual Cost | Purpose |
|---|---|---|---|---|
| 0: Live DB | Neon | $275-475 | $3,300-5,700 | Production database |
| 1: Neon Backups | Neon | Included | Included | Fast PITR (30 days) |
| 2: Daily Backups | AWS S3 | $160 | $1,920 | Primary long-term safety |
| 3: Cross-Region | AWS S3 | $60 | $720 | Geographic redundancy |
| 4: Offline (optional) | External SSD | $17 | $200 | Audit compliance (if required) |
| Total | — | $512-712 | $6,140-8,540 | Full disaster recovery |
Per-Organization Cost: $0.51-0.71/month for comprehensive backup protection
ROI Calculation:
- Cost of data loss: Inability to claim insurance reimbursements + fraud liability + legal costs = Millions of euros
- Cost of backup: €6,000-9,000/year
- ROI: Infinite (prevents catastrophic loss)
Recommended Implementation Timeline
Phase 1: Immediate (Week 1)
- [ ] Enable Neon 30-day PITR (upgrade to paid plan if needed)
- [ ] Set up AWS S3 bucket with versioning
- [ ] Implement daily pg_dump backup job
- [ ] Test manual restore to staging
Phase 2: Short-term (Month 1)
- [ ] Enable S3 Object Lock (COMPLIANCE mode)
- [ ] Configure S3 lifecycle policies (Standard → Glacier)
- [ ] Implement automated backup verification (checksum)
- [ ] Set up monitoring and alerting
- [ ] Document restore procedures (runbooks)
Phase 3: Medium-term (Quarter 1)
- [ ] Set up cross-region replication (Layer 3)
- [ ] Implement monthly automated restore testing
- [ ] Run first quarterly DR drill
- [ ] Prepare audit compliance documentation
Phase 4: Ongoing
- [ ] Monthly restore tests (automated)
- [ ] Quarterly DR drills (team exercise)
- [ ] Annual audit preparation
- [ ] Review and update backup strategy yearly
Related Documentation
- Database Overview - All tables and multi-tenant architecture
- RLS Policies - Data isolation and security
- Encryption - Data protection at rest and in transit
- GDPR Compliance - Data retention and erasure
- Monitoring - Alerting and incident response
- Audit Log - Audit trail for compliance
Appendix: Fraud Prevention Evidence Requirements
What Data Proves Services Were Delivered?
For state insurance audits, the following data constitutes proof of service:
| Evidence Type | Data Source | Retention | Why It Matters |
|---|---|---|---|
| Appointment attendance | appointments.status = 'done' | 7 years | Proves patient attended session |
| Exercise/therapy logs | (Future feature - telemetry service) | 7 years | Proves exercises were performed |
| Video call metadata | appointments.daily_room_name + Daily.co logs | 7 years | Proves real-time interaction occurred |
| Specialist notes | appointment_documents (reports) | 7 years | Medical documentation of session |
| Patient consent | forms.status = 'signed' | 7 years | Proves patient authorized treatment |
| Prescription issuance | appointment_documents (prescriptions) | 7 years | Proves medical care provided |
| Payment records | (External billing system) | 7 years | Cross-reference with services |
Critical: Without backups, you cannot produce this evidence. Insurance claims can be retroactively denied up to 7 years later.
Example Audit Query
Auditor requests: "Prove services delivered for Patient ID 12345 in July 2023"
Our Response (from backup):
- Restore July 2023 backup
- Query:

```sql
SELECT
    a.started_at,
    a.ended_at,
    a.status,
    s.name AS specialist_name,
    (SELECT COUNT(*) FROM forms
      WHERE appointment_id = a.id AND status = 'signed') AS signed_forms,
    (SELECT COUNT(*) FROM appointment_documents
      WHERE appointment_id = a.id) AS documents_generated
FROM appointments a
JOIN specialists s ON a.specialist_id = s.id
WHERE a.patient_id = 12345
  AND a.started_at BETWEEN '2023-07-01' AND '2023-07-31'
  AND a.status = 'done';
```

- Export signed consent forms (PDF)
- Export prescription/report documents (PDF)
- Provide to auditor with checksums (proof of authenticity)
Result: Audit passed, no fraud accusations, insurance reimbursements validated.
Questions for Legal/Compliance Team
Before finalizing backup strategy, confirm with legal counsel:
- Retention period: Is 7 years sufficient, or does your state require longer?
- Offline backup: Does state audit explicitly require physical/offline backups?
- Geographic requirements: Must backups be stored within EU? Or can cross-region be US?
- Data sovereignty: Are there restrictions on cloud provider jurisdiction?
- Encryption standards: Are AES-256 and current key management procedures compliant?
- Audit frequency: How often should we expect state audits? (Affects test schedule)
- Evidence format: Do auditors require specific export formats (PDF, CSV, etc.)?
Action: Schedule meeting with legal team to review this document and confirm compliance requirements.
Document Version History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2026-02-15 | Engineering Team | Initial backup strategy for state-funded insurance compliance |
Next Steps: Review with CTO → Legal approval → Implementation (Phase 1)