Audit Compliance: HIPAA & GDPR Requirements
Status — partly shipped, mostly planned. Layer 1.1 has shipped: the synchronous local audit-log writer, immutability via RLS (no UPDATE/DELETE policies), and request-context capture. Not yet shipped: the read API (`GET /v1/audit-logs*`), the CSV export endpoint, the break-glass table and flow, the monthly archival job to S3, the 6-year warm-tier purge, the regular-review tooling, and the cross-tenant Telemetry pipeline. Those land progressively in Layers 1.13 (admin viewer), 11 (telemetry forwarding), and 12 (DSAR + retention pipeline). Treat the SQL examples and admin endpoints in this document as the target shape, not "you can call this today." See the implementation status of each item in the checklist at the bottom of this page.
Overview
This document details how the platform's audit system will meet HIPAA and GDPR compliance requirements. The audit trail is designed to be tamper-evident, comprehensive, and retained for the required periods.
HIPAA Requirements
164.312(b) - Audit Controls
Requirement: Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information (ePHI).
Our Implementation:
| Requirement | Implementation | Status |
|---|---|---|
| Record all access to ePHI | Audit middleware logs all mutations (POST/PUT/PATCH/DELETE). Read access is logged only inside a break-glass session (planned, Layer 12). | Mutations: shipped (Layer 1.1). Read-during-break-glass: planned (Layer 12). |
| Examine activity logs | Admin API endpoints for querying audit logs | Planned (Layer 1.13 admin viewer + Layer 11 telemetry pipeline) |
| Tamper-evident | No UPDATE or DELETE RLS policies on audit_log table; restartix_app has UPDATE/DELETE/TRUNCATE explicitly REVOKED | Shipped (Layer 1.1) |
| Retention | 6-year minimum (hot PostgreSQL + warm S3 archives) | Hot: shipped. Warm-tier S3 archive job: planned (Layer 12). 6-year purge: planned (Layer 12). |
What Gets Logged
Every mutation request captures:
- Who: `actor_id` (FK to `principals.id`) + `actor_type` (`'human' | 'agent' | 'service_account' | 'system'`)
- What: `entity_type`, `entity_id` (resource being modified)
- When: `created_at` (timestamp in UTC)
- How: `action` (CREATE/UPDATE/DELETE)
- Where: `ip_address` (client IP via Cloudflare or X-Forwarded-For)
- Result: `status_code` (HTTP response code)
- Context: `request_path` (e.g., `POST /v1/appointments`)
Failed requests are also logged (401 Unauthorized, 403 Forbidden, 500 Internal Server Error) — critical for security monitoring.
164.308(a)(1)(ii)(D) - Information System Activity Review
Requirement: Implement procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports.
Our Implementation:
| Procedure | Frequency | Owner | Tool |
|---|---|---|---|
| Monthly audit log review | Monthly | Security Officer | Admin dashboard |
| Failed access review (401/403) | Weekly | Security Officer | Datadog query |
| Break-glass session review | Within 24 hours of session | Two approvers | Manual report |
| Quarterly compliance report | Quarterly | Security Officer | Automated export |
Admin Dashboard Queries:
```sql
-- All failed access attempts in the last 7 days
SELECT * FROM audit_log
WHERE status_code IN (401, 403)
  AND created_at > NOW() - INTERVAL '7 days'
ORDER BY created_at DESC;

-- High-privilege actions (CREATE/DELETE on sensitive entities)
SELECT * FROM audit_log
WHERE action IN ('CREATE', 'DELETE')
  AND entity_type IN ('patient', 'human', 'organization')
  AND created_at > NOW() - INTERVAL '30 days'
ORDER BY created_at DESC;

-- All actions by a specific principal (actor activity audit)
SELECT * FROM audit_log
WHERE actor_id = $1
  AND organization_id = $2
ORDER BY created_at DESC;
```
164.308(a)(8) - Evaluation
Requirement: Perform a periodic technical and nontechnical evaluation, based initially upon the standards implemented under this rule and, subsequently, in response to environmental or operational changes affecting the security of ePHI, that establishes the extent to which an entity's security policies and procedures meet the requirements of this subpart.
Our Implementation:
| Evaluation | Frequency | Owner | Evidence |
|---|---|---|---|
| Audit log completeness check | Quarterly | Security Officer | SQL query: SELECT COUNT(*) FROM audit_log WHERE created_at > ... |
| Retention compliance check | Annually | Security Officer | Verify S3 archives exist for past 6 years |
| RLS policy audit | Annually | Engineering Lead | Review all RLS policies in migration files |
| Break-glass log review | Annually | Security Officer | Verify all sessions were reviewed and approved |
Automated Checks:
```yaml
# GitHub Actions workflow runs quarterly
# .github/workflows/hipaa-audit-check.yml
- name: Verify Audit Log Completeness
  run: |
    # Check that audit entries exist for the past 90 days
    go run cmd/tools/audit-check/main.go --days=90
- name: Verify S3 Archives
  run: |
    # Check that S3 archives exist for the past 6 years
    aws s3 ls s3://bucket/audit-archive/ --recursive | grep $(date -d '6 years ago' +%Y)
```
GDPR Requirements
Article 30 - Records of Processing Activities
Requirement: Each controller and, where applicable, the controller's representative, shall maintain a record of processing activities under its responsibility.
Our Implementation:
The audit_log table serves as the record of processing activities (ROPA) for all data mutations.
| ROPA Element | Audit Log Field | Example |
|---|---|---|
| Name and contact details of the controller | organization_id (FK to organizations.name) | "Restartix Clinic" |
| Purposes of the processing | action (CREATE/UPDATE/DELETE) | "Patient record update" |
| Categories of data subjects | entity_type (patient, specialist, user) | "patient" |
| Categories of personal data | entity_type + changes JSONB | "patient.name, patient.email" |
| Categories of recipients | actor_id joined to organization_memberships.role_id → roles.code | "specialist", "admin" |
| Transfers to third countries | Not applicable (all data stored in EU) | N/A |
| Time limits for erasure | 6 years (see retention section) | Per HIPAA requirement |
| Technical and organizational measures | RLS, encryption, audit logging | Documented in reference/rbac-permissions.md, reference/encryption.md, reference/rls-policies.md |
Article 32 - Security of Processing
Requirement: The controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including:
- (a) the pseudonymisation and encryption of personal data
- (b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services
- (c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident
- (d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing
Our Implementation:
| Measure | Implementation | Status |
|---|---|---|
| Pseudonymisation | Actor hashing helper (internal/shared/pseudonym.UserID) ready; will be applied by the Telemetry forwarder | Helper: shipped (Layer 1.5). Telemetry application: planned (Layer 11). |
| Encryption | App-level AES-256-GCM helper (internal/core/crypto/) | Helper: shipped (1A.3). Customer-managed CMK envelopes the SM-stored keyring: planned (1E.3). Direct kmsKeyring (per-data-key KMS calls + BYOK): Phase 2. First encrypted columns under live data: Layer 2 (patient_persons). |
| Confidentiality | RLS + RBAC permissions + audit logging | Shipped (Layers 0 + 1.1) |
| Integrity | Database constraints, immutable audit log, foreign keys | Shipped (Layers 0 + 1.1) |
| Availability | AWS multi-AZ + autoscaling + automated backups | Planned (Layer 12) |
| Resilience | Health checks, automatic restarts, connection pool monitoring | Health check: shipped. Production hardening: planned (Layer 12). |
| Restore capability | Point-in-time recovery (7 days), manual backups before migrations | Planned (Layer 12) |
| Regular testing | Quarterly HIPAA audit checks, annual penetration testing | Planned (Layer 12) |
Article 33 - Breach Notification
Requirement: In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 51.
Our Implementation:
See reference/gdpr-compliance.md for the complete breach notification procedure. Summary:
- Detection: Datadog alerts on unusual access patterns (Telemetry threat detection)
- Assessment: Security Officer determines severity within 12 hours
- Notification: 72-hour window to notify supervisory authority (ANSPDCP in Romania)
- Documentation: `breach_records` table (permanent, never deleted)
- User notification: Email to affected users within 72 hours
Audit log role in breach detection:
- All failed access attempts (401, 403) are logged
- Unusual access patterns trigger Telemetry threat detection
- Break-glass sessions are flagged for immediate review
- Mass data access (>100 records in 1 minute) triggers alert
Retention Policy
HIPAA Requirement: 6-Year Minimum
HIPAA 164.316(b)(2)(i) requires retaining documentation for 6 years from the date of its creation or the date when it last was in effect, whichever is later.
Our Three-Tier Retention Strategy
| Tier | Storage | Duration | Queryable | Cost | Purpose |
|---|---|---|---|---|---|
| Hot | PostgreSQL audit_log table | 0-12 months | Yes (via API) | High | Operational queries, admin dashboards |
| Warm | S3 JSONL archives | 12 months - 6 years | On request (download) | Low | Compliance retention, forensic investigations |
| Delete | Purged | After 6 years | No | None | HIPAA allows deletion after 6 years |
Archival Process
Cron job: Runs monthly (1st of each month)
```go
// internal/jobs/audit_archive.go
func (j *AuditArchiveJob) Run(ctx context.Context) error {
	// 1. Find audit entries older than 12 months
	cutoff := time.Now().AddDate(-1, 0, 0)

	// 2. Export to S3 as compressed JSONL, grouped by org and month:
	//    s3://bucket/audit-archive/{org_id}/{year}-{month}.jsonl.gz
	rows, err := j.db.Query(ctx,
		`SELECT * FROM audit_log WHERE created_at < $1 ORDER BY organization_id, created_at`,
		cutoff,
	)
	if err != nil {
		return fmt.Errorf("query archivable entries: %w", err)
	}
	defer rows.Close()

	// Stream rows to S3 grouped by (org, month); a production version would
	// also cap batches (e.g. 10,000 rows) to bound memory.
	var batch []AuditEntry
	var currentOrgID int64
	var currentMonth string
	for rows.Next() {
		var entry AuditEntry
		if err := rows.Scan(&entry.ID, &entry.OrganizationID /* ... */); err != nil {
			return fmt.Errorf("scan audit entry: %w", err)
		}
		month := entry.CreatedAt.Format("2006-01")
		if entry.OrganizationID != currentOrgID || month != currentMonth {
			// Flush the previous (org, month) batch before starting a new one
			if len(batch) > 0 {
				if err := j.uploadBatch(ctx, currentOrgID, currentMonth, batch); err != nil {
					return fmt.Errorf("upload batch: %w", err)
				}
			}
			batch = nil
			currentOrgID = entry.OrganizationID
			currentMonth = month
		}
		batch = append(batch, entry)
	}
	if err := rows.Err(); err != nil {
		return fmt.Errorf("iterate audit entries: %w", err)
	}
	// Flush the final batch
	if len(batch) > 0 {
		if err := j.uploadBatch(ctx, currentOrgID, currentMonth, batch); err != nil {
			return fmt.Errorf("upload final batch: %w", err)
		}
	}

	// 3. Delete archived entries from PostgreSQL
	if _, err := j.db.Exec(ctx, `DELETE FROM audit_log WHERE created_at < $1`, cutoff); err != nil {
		return fmt.Errorf("delete archived entries: %w", err)
	}

	// 4. Delete S3 archives older than 6 years
	sixYearCutoff := time.Now().AddDate(-6, 0, 0)
	return j.deleteOldArchives(ctx, sixYearCutoff)
}

func (j *AuditArchiveJob) uploadBatch(ctx context.Context, orgID int64, month string, batch []AuditEntry) error {
	// Convert to JSONL (one JSON object per line; Encode appends the newline)
	var buf bytes.Buffer
	gzw := gzip.NewWriter(&buf)
	enc := json.NewEncoder(gzw)
	for _, entry := range batch {
		if err := enc.Encode(entry); err != nil {
			return fmt.Errorf("encode audit entry: %w", err)
		}
	}
	if err := gzw.Close(); err != nil {
		return fmt.Errorf("close gzip writer: %w", err)
	}

	// Upload to S3
	key := fmt.Sprintf("audit-archive/%d/%s.jsonl.gz", orgID, month)
	return j.s3Client.Upload(ctx, key, &buf)
}

func (j *AuditArchiveJob) deleteOldArchives(ctx context.Context, cutoff time.Time) error {
	// List all archives older than cutoff and delete them
	objects, err := j.s3Client.ListObjects(ctx, "audit-archive/")
	if err != nil {
		return fmt.Errorf("list archives: %w", err)
	}
	for _, obj := range objects {
		// Parse date from key: "audit-archive/{org_id}/{year}-{month}.jsonl.gz"
		parts := strings.Split(obj.Key, "/")
		if len(parts) != 3 {
			continue
		}
		month := strings.TrimSuffix(parts[2], ".jsonl.gz")
		date, err := time.Parse("2006-01", month)
		if err != nil {
			continue
		}
		if date.Before(cutoff) {
			if err := j.s3Client.DeleteObject(ctx, obj.Key); err != nil {
				return fmt.Errorf("delete archive %s: %w", obj.Key, err)
			}
		}
	}
	return nil
}
```
Special Retention Rules
Never deleted:
- `break_glass_log` table (permanent records, no archival)
- GDPR operation audit entries (7-year retention per legal requirement)
Extended retention:
- Breach notification records: 7 years (per GDPR Article 33)
- Key rotation events: Permanent (compliance audit trail)
Downloading Warm Archives
Admins can download S3 archives via the admin API:
```
GET /v1/admin/audit/archive?organization_id={id}&month={YYYY-MM}
→ Returns a pre-signed S3 URL for the JSONL.gz file (15-minute expiry)
```
Example:
```bash
# Admin requests archive for January 2023 (URL quoted so the shell
# does not treat & as a command separator)
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.restartix.com/v1/admin/audit/archive?organization_id=1&month=2023-01"

# Response
{
  "download_url": "https://s3.amazonaws.com/bucket/audit-archive/1/2023-01.jsonl.gz?X-Amz-Signature=...",
  "expires_at": "2026-02-14T10:15:00Z"
}
```
What Gets Logged
Logged Entities
| Entity Type | Logged Actions | Notes |
|---|---|---|
| `patient` | CREATE, UPDATE, DELETE | All mutations logged (includes soft deletes) |
| `appointment` | CREATE, UPDATE, DELETE | Status changes (upcoming → confirmed → done) logged |
| `form` | CREATE, UPDATE | Form submission and signature events |
| `user` | CREATE, UPDATE, DELETE | User creation, role changes, blocking |
| `specialist` | CREATE, UPDATE, DELETE | Includes soft deletes |
| `organization` | UPDATE | Org settings changes (logo, legal templates) |
| `form_template` | CREATE, UPDATE | Template creation and publishing |
| `custom_field` | CREATE, UPDATE, DELETE | Custom field definition changes |
| `segment` | CREATE, UPDATE, DELETE | Segment rule changes |
Not Logged (Read-Only)
GET requests are not audited (they're read-only). This reduces audit log volume and focuses on mutations.
Exception: Break-glass sessions log all actions (including reads) because emergency access must be fully traceable.
Sensitive Data Masking
Before writing to audit_log, sensitive fields in the changes JSONB column are masked.
Masked Patterns
```go
var sensitivePatterns = []string{
	"password", "secret", "token", "api_key", "apikey",
	"authorization", "cookie", "session",
}
```
Example:
```jsonc
// Original request body
{
  "name": "John Doe",
  "email": "[email protected]",
  "password": "hunter2"
}

// Logged in audit_log.changes
{
  "name": {"old": "Jane Doe", "new": "John Doe"},
  "email": {"old": "[email protected]", "new": "[email protected]"},
  "password": "[REDACTED]"
}
```
PII Masking in Analytics
Telemetry applies additional masking for ClickHouse analytics:
- User IDs are hashed (SHA-256) — no direct user identifiers
- IP addresses are resolved to country/city but not stored in analytics
- Patient names/emails are never sent to ClickHouse
Querying Audit Logs
Admin API Endpoints
List audit logs:
```
GET /v1/audit-logs?organization_id={id}&start_date={date}&end_date={date}&entity_type={type}&actor_id={id}&actor_type={type}
```
Query parameters:
- `organization_id` (required): Tenant context
- `start_date` (optional): Filter by `created_at >= start_date`
- `end_date` (optional): Filter by `created_at <= end_date`
- `entity_type` (optional): Filter by entity_type (e.g., "patient")
- `actor_id` (optional): Filter by principal that performed the action
- `actor_type` (optional): Filter by `'human' | 'agent' | 'service_account' | 'system'`
- `page` (optional): Pagination (default 1)
- `limit` (optional): Page size (default 50, max 500)
Response:
```json
{
  "data": [
    {
      "id": "01938b27-7df1-7c8a-9d3a-1b2c3d4e5f60",
      "organization_id": "01938b27-7df1-7c8a-9d3a-aabbccddeeff",
      "actor_id": "01938b27-7df1-7c8a-9d3a-112233445566",
      "actor_type": "human",
      "action": "UPDATE",
      "entity_type": "patient",
      "entity_id": "01938b27-7df1-7c8a-9d3a-998877665544",
      "ip_address": "203.0.113.42",
      "user_agent": "Mozilla/5.0...",
      "request_path": "PATCH /v1/patients/789",
      "status_code": 200,
      "model_version": null,
      "inputs_hash": null,
      "confidence": null,
      "created_at": "2026-02-13T10:00:00Z"
    }
  ],
  "pagination": {
    "page": 1,
    "limit": 50,
    "total": 1234
  }
}
```
Export to CSV:
```
GET /v1/audit-logs/export?organization_id={id}&start_date={date}&end_date={date}
→ Returns CSV file
```
CSV format:
```csv
id,organization_id,actor_id,actor_type,action,entity_type,entity_id,ip_address,request_path,status_code,model_version,confidence,created_at
01938b27-...,01938b27-...,01938b27-...,human,UPDATE,patient,01938b27-...,203.0.113.42,PATCH /v1/patients/789,200,,,2026-02-13T10:00:00Z
```
SQL Queries (Direct Database Access)
Entity history:
```sql
-- All mutations on patient 789
SELECT * FROM audit_log
WHERE entity_type = 'patient'
  AND entity_id = 789
  AND organization_id = 1
ORDER BY created_at DESC;
```
Principal activity:
```sql
-- All actions by principal X in the last 30 days
SELECT * FROM audit_log
WHERE actor_id = '01938b27-7df1-7c8a-9d3a-112233445566'
  AND organization_id = '01938b27-7df1-7c8a-9d3a-aabbccddeeff'
  AND created_at > NOW() - INTERVAL '30 days'
ORDER BY created_at DESC;
```
Failed requests:
```sql
-- All 4xx/5xx responses in the last 7 days
SELECT * FROM audit_log
WHERE status_code >= 400
  AND organization_id = 1
  AND created_at > NOW() - INTERVAL '7 days'
ORDER BY created_at DESC;
```
Break-glass sessions:
```sql
-- All actions during break-glass session 123
SELECT * FROM audit_log
WHERE break_glass_id = 123
ORDER BY created_at;
```
Compliance Checklist
HIPAA
- [x] All mutations are logged (Layer 1.1 — every CREATE/UPDATE/DELETE goes through `internal/core/audit/`)
- [x] Audit logs are tamper-evident (no UPDATE/DELETE RLS policies; `restartix_app` UPDATE/DELETE/TRUNCATE explicitly revoked)
- [x] Failed access attempts are logged (Layer 1.1 audit middleware records 5xx, 403, and 401-with-bearer)
- [ ] Read access logged inside break-glass sessions (Layer 12 — break-glass flow not yet built)
- [ ] 6-year warm-tier retention (hot PostgreSQL exists; S3 archive job + purge job planned for Layer 12)
- [ ] Regular audit log reviews (monthly) — needs the admin viewer (Layer 1.13) and runbook (Layer 12)
- [ ] Break-glass sessions are logged and reviewed within 24 hours — break-glass flow planned (Layer 12)
- [ ] Archival process tested and documented — Layer 12
GDPR
- [x] Record of processing activities (ROPA) maintained in `audit_log` (Layer 1.1)
- [x] Pseudonymisation helper ready (`internal/shared/pseudonym.UserID`, Layer 1.5) — applied by the Telemetry forwarder when Layer 11 ships
- [x] App-level encryption helper ready (`internal/core/crypto/`, 1A.3) — first encrypted columns land in Layer 2
- [ ] Data subjects can request audit logs (via GDPR export). DSAR fulfilment runs inside the clinic's tenant — the clinic is GDPR controller, the platform is processor. See decisions.md → Why clinic is controller, platform is processor; endpoint planned (Layer 12.1).
- [ ] Breach detection via audit log monitoring — Layer 12 (alerting on `audit_log` patterns)
- [ ] Regular testing (quarterly HIPAA checks) — Layer 12
- [ ] 72-hour breach notification procedure documented + alerting — Layer 12
Related Documentation
- local-logging.md - Core API synchronous audit middleware (shipped, Layer 1.1) — single source of truth for compliance audit
- reference/gdpr-compliance.md - Full GDPR architecture
- reference/rbac-permissions.md, reference/rls-policies.md, reference/encryption.md - Auth, RLS, and encryption foundations