
Audit Compliance: HIPAA & GDPR Requirements

Overview

This document details how the platform's audit system meets HIPAA and GDPR compliance requirements. The audit trail is designed to be tamper-evident, comprehensive, and retained for the required periods.

HIPAA Requirements

164.312(b) - Audit Controls

Requirement: Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information (ePHI).

Our Implementation:

| Requirement | Implementation | Status |
|---|---|---|
| Record all access to ePHI | Audit middleware logs all mutations (POST/PUT/PATCH/DELETE) | ✅ Complete |
| Examine activity logs | Admin API endpoints for querying audit logs | ✅ Complete |
| Tamper-evident | No UPDATE or DELETE RLS policies on `audit_log` table | ✅ Complete |
| Retention | 6-year minimum (hot PostgreSQL + warm S3 archives) | ✅ Complete |

What Gets Logged

Every mutation request captures:

  • Who: user_id (actor performing the action)
  • What: entity_type, entity_id (resource being modified)
  • When: created_at (timestamp in UTC)
  • How: action (CREATE/UPDATE/DELETE)
  • Where: ip_address (client IP via Cloudflare or X-Forwarded-For)
  • Result: status_code (HTTP response code)
  • Context: request_path (e.g., POST /v1/appointments)

Failed requests are also logged (401 Unauthorized, 403 Forbidden, 500 Internal Server Error) — critical for security monitoring.
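The captured fields map onto a single record type. A minimal sketch in Go — the struct and helper names here are illustrative assumptions, not the actual `audit_log` schema:

```go
package main

import (
	"fmt"
	"time"
)

// AuditEntry mirrors the fields listed above. Field names are
// assumed; the real audit_log columns may be named differently.
type AuditEntry struct {
	UserID      int64     // Who: actor performing the action
	EntityType  string    // What: resource type
	EntityID    int64     // What: resource ID
	Action      string    // How: CREATE / UPDATE / DELETE
	IPAddress   string    // Where: client IP
	StatusCode  int       // Result: HTTP response code
	RequestPath string    // Context: method + path
	CreatedAt   time.Time // When: UTC timestamp
}

// requestPath builds the logged context string, e.g. "POST /v1/appointments".
func requestPath(method, path string) string {
	return method + " " + path
}

func main() {
	e := AuditEntry{
		UserID:      42,
		EntityType:  "appointment",
		EntityID:    789,
		Action:      "CREATE",
		IPAddress:   "203.0.113.42",
		StatusCode:  201,
		RequestPath: requestPath("POST", "/v1/appointments"),
		CreatedAt:   time.Now().UTC(),
	}
	fmt.Println(e.RequestPath) // POST /v1/appointments
}
```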

164.308(a)(1)(ii)(D) - Information System Activity Review

Requirement: Implement procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports.

Our Implementation:

| Procedure | Frequency | Owner | Tool |
|---|---|---|---|
| Monthly audit log review | Monthly | Security Officer | Admin dashboard |
| Failed access review (401/403) | Weekly | Security Officer | Datadog query |
| Break-glass session review | Within 24 hours of session | Two approvers | Manual report |
| Quarterly compliance report | Quarterly | Security Officer | Automated export |

Admin Dashboard Queries:

```sql
-- All failed access attempts in the last 7 days
SELECT * FROM audit_log
WHERE status_code IN (401, 403)
  AND created_at > NOW() - INTERVAL '7 days'
ORDER BY created_at DESC;

-- High-privilege actions (CREATE/DELETE on sensitive entities)
SELECT * FROM audit_log
WHERE action IN ('CREATE', 'DELETE')
  AND entity_type IN ('patient', 'user', 'organization')
  AND created_at > NOW() - INTERVAL '30 days'
ORDER BY created_at DESC;

-- All actions by a specific user (user activity audit)
SELECT * FROM audit_log
WHERE user_id = $1
  AND organization_id = $2
ORDER BY created_at DESC;
```

164.308(a)(8) - Evaluation

Requirement: Perform a periodic technical and nontechnical evaluation, based initially upon the standards implemented under this rule and, subsequently, in response to environmental or operational changes affecting the security of ePHI, that establishes the extent to which an entity's security policies and procedures meet the requirements of this subpart.

Our Implementation:

| Evaluation | Frequency | Owner | Evidence |
|---|---|---|---|
| Audit log completeness check | Quarterly | Security Officer | SQL query: `SELECT COUNT(*) FROM audit_log WHERE created_at > ...` |
| Retention compliance check | Annually | Security Officer | Verify S3 archives exist for past 6 years |
| RLS policy audit | Annually | Engineering Lead | Review all RLS policies in migration files |
| Break-glass log review | Annually | Security Officer | Verify all sessions were reviewed and approved |

Automated Checks:

```yaml
# GitHub Actions workflow runs quarterly
# .github/workflows/hipaa-audit-check.yml

- name: Verify Audit Log Completeness
  run: |
    # Check that audit entries exist for the past 90 days
    go run cmd/tools/audit-check/main.go --days=90

- name: Verify S3 Archives
  run: |
    # Check that archives exist for the year six years back
    aws s3 ls s3://bucket/audit-archive/ --recursive | grep $(date -d '6 years ago' +%Y)
```

GDPR Requirements

Article 30 - Records of Processing Activities

Requirement: Each controller and, where applicable, the controller's representative, shall maintain a record of processing activities under its responsibility.

Our Implementation:

The audit_log table serves as the record of processing activities (ROPA) for all data mutations.

| ROPA Element | Audit Log Field | Example |
|---|---|---|
| Name and contact details of the controller | `organization_id` (FK to `organizations.name`) | "Restartix Clinic" |
| Purposes of the processing | `action` (CREATE/UPDATE/DELETE) | "Patient record update" |
| Categories of data subjects | `entity_type` (patient, specialist, user) | "patient" |
| Categories of personal data | `entity_type` + `changes` JSONB | "patient.name, patient.email" |
| Categories of recipients | `user_id` (FK to `users.role`) | "specialist", "admin" |
| Transfers to third countries | Not applicable (all data stored in EU) | N/A |
| Time limits for erasure | 6 years (see retention section) | Per HIPAA requirement |
| Technical and organizational measures | RLS, encryption, audit logging | Documented in 04-auth-and-security.md |

Article 32 - Security of Processing

Requirement: The controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including:

  • (a) the pseudonymisation and encryption of personal data
  • (b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services
  • (c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident
  • (d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing

Our Implementation:

| Measure | Implementation | Status |
|---|---|---|
| Pseudonymisation | Actor hashing in Telemetry (SHA-256 user IDs) | ✅ Complete |
| Encryption | Infrastructure (AWS RDS) + app-level (AES-256-GCM for phone, API keys) | ✅ Complete |
| Confidentiality | RLS, RBAC, audit logging | ✅ Complete |
| Integrity | Database constraints, immutable audit log, foreign keys | ✅ Complete |
| Availability | AWS App Runner auto-scaling, RDS PostgreSQL replication, daily backups | ✅ Complete |
| Resilience | Health checks, automatic restarts, connection pool monitoring | ✅ Complete |
| Restore capability | Point-in-time recovery (7 days), manual backups before migrations | ✅ Complete |
| Regular testing | Quarterly HIPAA audit checks, annual penetration testing | ✅ Complete |

Article 33 - Breach Notification

Requirement: In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55.

Our Implementation:

See 10-gdpr-compliance.md for the complete breach notification procedure. Summary:

  1. Detection: Datadog alerts on unusual access patterns (Telemetry threat detection)
  2. Assessment: Security Officer determines severity within 12 hours
  3. Notification: 72-hour window to notify supervisory authority (ANSPDCP in Romania)
  4. Documentation: breach_records table (permanent, never deleted)
  5. User notification: Email to affected users within 72 hours

Audit log role in breach detection:

  • All failed access attempts (401, 403) are logged
  • Unusual access patterns trigger Telemetry threat detection
  • Break-glass sessions are flagged for immediate review
  • Mass data access (>100 records in 1 minute) triggers alert

Retention Policy

HIPAA Requirement: 6-Year Minimum

HIPAA 164.316(b)(2)(i) requires retaining documentation for 6 years from the date of its creation or the date when it last was in effect, whichever is later.

Our Three-Tier Retention Strategy

| Tier | Storage | Duration | Queryable | Cost | Purpose |
|---|---|---|---|---|---|
| Hot | PostgreSQL `audit_log` table | 0-12 months | Yes (via API) | High | Operational queries, admin dashboards |
| Warm | S3 JSONL archives | 12 months - 6 years | On request (download) | Low | Compliance retention, forensic investigations |
| Delete | Purged | After 6 years | No | None | HIPAA allows deletion after 6 years |

Archival Process

Cron job: Runs monthly (1st of each month)

```go
// internal/jobs/audit_archive.go

func (j *AuditArchiveJob) Run(ctx context.Context) error {
    // 1. Find audit entries older than 12 months
    cutoff := time.Now().AddDate(-1, 0, 0)

    // 2. Export to S3 as compressed JSONL, grouped by org and month:
    //    s3://bucket/audit-archive/{org_id}/{YYYY-MM}.jsonl.gz
    rows, err := j.db.Query(ctx,
        `SELECT * FROM audit_log WHERE created_at < $1 ORDER BY organization_id, created_at`,
        cutoff,
    )
    if err != nil {
        return fmt.Errorf("query archivable entries: %w", err)
    }
    defer rows.Close()

    // Stream to S3, flushing a batch whenever the org or month changes
    var batch []AuditEntry
    var currentOrgID int64
    var currentMonth string

    for rows.Next() {
        var entry AuditEntry
        if err := rows.Scan(&entry.ID, &entry.OrganizationID /* ... */); err != nil {
            return fmt.Errorf("scan audit entry: %w", err)
        }

        month := entry.CreatedAt.Format("2006-01")
        if entry.OrganizationID != currentOrgID || month != currentMonth {
            // Flush the completed batch to S3 (skip the empty first batch)
            if len(batch) > 0 {
                if err := j.uploadBatch(ctx, currentOrgID, currentMonth, batch); err != nil {
                    return fmt.Errorf("upload batch: %w", err)
                }
            }
            batch = nil
            currentOrgID = entry.OrganizationID
            currentMonth = month
        }

        batch = append(batch, entry)
    }
    if err := rows.Err(); err != nil {
        return fmt.Errorf("iterate audit entries: %w", err)
    }

    // Flush the final batch
    if len(batch) > 0 {
        if err := j.uploadBatch(ctx, currentOrgID, currentMonth, batch); err != nil {
            return fmt.Errorf("upload final batch: %w", err)
        }
    }

    // 3. Delete archived entries from PostgreSQL
    if _, err := j.db.Exec(ctx, `DELETE FROM audit_log WHERE created_at < $1`, cutoff); err != nil {
        return fmt.Errorf("delete archived entries: %w", err)
    }

    // 4. Delete S3 archives older than 6 years
    sixYearCutoff := time.Now().AddDate(-6, 0, 0)
    return j.deleteOldArchives(ctx, sixYearCutoff)
}

func (j *AuditArchiveJob) uploadBatch(ctx context.Context, orgID int64, month string, batch []AuditEntry) error {
    // Convert to gzip-compressed JSONL (one JSON object per line)
    var buf bytes.Buffer
    gzw := gzip.NewWriter(&buf)
    enc := json.NewEncoder(gzw)
    for _, entry := range batch {
        if err := enc.Encode(entry); err != nil {
            return fmt.Errorf("encode entry: %w", err)
        }
    }
    if err := gzw.Close(); err != nil {
        return fmt.Errorf("close gzip writer: %w", err)
    }

    // Upload to S3 under audit-archive/{org_id}/{YYYY-MM}.jsonl.gz
    key := fmt.Sprintf("audit-archive/%d/%s.jsonl.gz", orgID, month)
    return j.s3Client.Upload(ctx, key, &buf)
}

func (j *AuditArchiveJob) deleteOldArchives(ctx context.Context, cutoff time.Time) error {
    // List all archives and delete those whose month is older than the cutoff
    prefix := "audit-archive/"
    objects, err := j.s3Client.ListObjects(ctx, prefix)
    if err != nil {
        return fmt.Errorf("list archives: %w", err)
    }

    for _, obj := range objects {
        // Parse date from key: "audit-archive/{org_id}/{YYYY-MM}.jsonl.gz"
        parts := strings.Split(obj.Key, "/")
        if len(parts) != 3 {
            continue
        }
        month := strings.TrimSuffix(parts[2], ".jsonl.gz")
        date, err := time.Parse("2006-01", month)
        if err != nil {
            continue
        }

        if date.Before(cutoff) {
            if err := j.s3Client.DeleteObject(ctx, obj.Key); err != nil {
                return fmt.Errorf("delete archive %s: %w", obj.Key, err)
            }
        }
    }

    return nil
}
```

Special Retention Rules

Never deleted:

  • break_glass_log table (permanent records, no archival)
  • GDPR operation audit entries (7-year retention per legal requirement)

Extended retention:

  • Breach notification records: 7 years (GDPR Article 33(5) requires documenting all breaches; the 7-year period is set by internal policy)
  • Key rotation events: Permanent (compliance audit trail)

Downloading Warm Archives

Admins can download S3 archives via the admin API:

```
GET /v1/admin/audit/archive?organization_id={id}&month={YYYY-MM}
  → Returns a pre-signed S3 URL for the JSONL.gz file (15-minute expiry)
```

Example:

```bash
# Admin requests archive for January 2023
# (URL must be quoted so the shell doesn't interpret the "&")
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.restartix.com/v1/admin/audit/archive?organization_id=1&month=2023-01"

# Response
{
  "download_url": "https://s3.amazonaws.com/bucket/audit-archive/1/2023-01.jsonl.gz?X-Amz-Signature=...",
  "expires_at": "2026-02-14T10:15:00Z"
}
```

What Gets Logged

Logged Entities

| Entity Type | Logged Actions | Notes |
|---|---|---|
| patient | CREATE, UPDATE, DELETE | All mutations logged (includes soft deletes) |
| appointment | CREATE, UPDATE, DELETE | Status changes (upcoming → confirmed → done) logged |
| form | CREATE, UPDATE | Form submission and signature events |
| user | CREATE, UPDATE, DELETE | User creation, role changes, blocking |
| specialist | CREATE, UPDATE, DELETE | Includes soft deletes |
| organization | UPDATE | Org settings changes (logo, legal templates) |
| form_template | CREATE, UPDATE | Template creation and publishing |
| custom_field | CREATE, UPDATE, DELETE | Custom field definition changes |
| segment | CREATE, UPDATE, DELETE | Segment rule changes |

Not Logged (Read-Only)

GET requests are not audited (they're read-only). This reduces audit log volume and focuses on mutations.

Exception: Break-glass sessions log all actions (including reads) because emergency access must be fully traceable.

Sensitive Data Masking

Before writing to audit_log, sensitive fields in the changes JSONB column are masked.

Masked Patterns

```go
var sensitivePatterns = []string{
    "password", "secret", "token", "api_key", "apikey",
    "authorization", "cookie", "session",
}
```

Example:

```json
// Original request body
{
  "name": "John Doe",
  "email": "[email protected]",
  "password": "hunter2"
}

// Logged in audit_log.changes
{
  "name": {"old": "Jane Doe", "new": "John Doe"},
  "email": {"old": "[email protected]", "new": "[email protected]"},
  "password": "[REDACTED]"
}
```
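The masking step can be sketched as a small function over the patterns above. This is a simplified flat-map version — the actual middleware also walks nested JSON structures, which this omits:

```go
package main

import (
	"fmt"
	"strings"
)

var sensitivePatterns = []string{
	"password", "secret", "token", "api_key", "apikey",
	"authorization", "cookie", "session",
}

// maskSensitive replaces any value whose key contains a sensitive
// pattern (case-insensitive) with "[REDACTED]".
func maskSensitive(changes map[string]any) map[string]any {
	out := make(map[string]any, len(changes))
	for k, v := range changes {
		out[k] = v
		lk := strings.ToLower(k)
		for _, p := range sensitivePatterns {
			if strings.Contains(lk, p) {
				out[k] = "[REDACTED]"
				break
			}
		}
	}
	return out
}

func main() {
	masked := maskSensitive(map[string]any{"name": "John Doe", "password": "hunter2"})
	fmt.Println(masked["name"], masked["password"]) // John Doe [REDACTED]
}
```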

PII Masking in Analytics

Telemetry applies additional masking for ClickHouse analytics:

  • User IDs are hashed (SHA-256) — no direct user identifiers
  • IP addresses are resolved to country/city but not stored in analytics
  • Patient names/emails are never sent to ClickHouse
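The actor-hashing step is plain SHA-256 over the user ID. A minimal sketch — the function name is an assumption, and the real pipeline may additionally salt the input, which this omits:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
)

// hashActor pseudonymises a user ID before events reach ClickHouse:
// SHA-256 over the decimal string form, hex-encoded. The same ID
// always maps to the same 64-character digest, so per-actor analytics
// still work without storing the raw identifier.
func hashActor(userID int64) string {
	sum := sha256.Sum256([]byte(strconv.FormatInt(userID, 10)))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(hashActor(42)) // 64 hex characters, stable for the same ID
}
```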

Querying Audit Logs

Admin API Endpoints

List audit logs:

```
GET /v1/audit-logs?organization_id={id}&start_date={date}&end_date={date}&entity_type={type}&user_id={id}
```

Query parameters:

  • organization_id (required): Tenant context
  • start_date (optional): Filter by created_at >= start_date
  • end_date (optional): Filter by created_at <= end_date
  • entity_type (optional): Filter by entity_type (e.g., "patient")
  • user_id (optional): Filter by user_id (actor)
  • page (optional): Pagination (default 1)
  • limit (optional): Page size (default 50, max 500)
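The pagination defaults reduce to a small normalization step. An illustrative helper, assuming the handler validates parameters before querying:

```go
package main

import "fmt"

// clampPage applies the documented defaults and bounds:
// page defaults to 1; limit defaults to 50 and is capped at 500.
func clampPage(page, limit int) (int, int) {
	if page < 1 {
		page = 1
	}
	if limit <= 0 {
		limit = 50
	}
	if limit > 500 {
		limit = 500
	}
	return page, limit
}

func main() {
	p, l := clampPage(0, 0)
	fmt.Println(p, l) // 1 50
	p, l = clampPage(2, 1000)
	fmt.Println(p, l) // 2 500
}
```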

Response:

```json
{
  "data": [
    {
      "id": 12345,
      "organization_id": 1,
      "user_id": 42,
      "action": "UPDATE",
      "entity_type": "patient",
      "entity_id": 789,
      "ip_address": "203.0.113.42",
      "user_agent": "Mozilla/5.0...",
      "request_path": "PATCH /v1/patients/789",
      "status_code": 200,
      "created_at": "2026-02-13T10:00:00Z"
    }
  ],
  "pagination": {
    "page": 1,
    "limit": 50,
    "total": 1234
  }
}
```

Export to CSV:

```
GET /v1/audit-logs/export?organization_id={id}&start_date={date}&end_date={date}
  → Returns CSV file
```

CSV format:

```csv
id,organization_id,user_id,action,entity_type,entity_id,ip_address,request_path,status_code,created_at
12345,1,42,UPDATE,patient,789,203.0.113.42,PATCH /v1/patients/789,200,2026-02-13T10:00:00Z
```

SQL Queries (Direct Database Access)

Entity history:

```sql
-- All mutations on patient 789
SELECT * FROM audit_log
WHERE entity_type = 'patient'
  AND entity_id = 789
  AND organization_id = 1
ORDER BY created_at DESC;
```

User activity:

```sql
-- All actions by user 42 in the last 30 days
SELECT * FROM audit_log
WHERE user_id = 42
  AND organization_id = 1
  AND created_at > NOW() - INTERVAL '30 days'
ORDER BY created_at DESC;
```

Failed requests:

```sql
-- All 4xx/5xx responses in the last 7 days
SELECT * FROM audit_log
WHERE status_code >= 400
  AND organization_id = 1
  AND created_at > NOW() - INTERVAL '7 days'
ORDER BY created_at DESC;
```

Break-glass sessions:

```sql
-- All actions during break-glass session 123
SELECT * FROM audit_log
WHERE break_glass_id = 123
ORDER BY created_at;
```

Compliance Checklist

HIPAA

  • [x] All ePHI access/mutations are logged
  • [x] Audit logs are tamper-evident (no UPDATE/DELETE policies)
  • [x] 6-year retention (hot PostgreSQL + warm S3 archives)
  • [x] Regular audit log reviews (monthly)
  • [x] Failed access attempts are logged and reviewed
  • [x] Break-glass sessions are logged and reviewed within 24 hours
  • [x] Archival process tested and documented

GDPR

  • [x] Record of processing activities (ROPA) maintained in audit_log
  • [x] Data subjects can request audit logs (via GDPR export)
  • [x] Breach detection via audit log monitoring
  • [x] Pseudonymisation (actor hashing in Telemetry)
  • [x] Encryption (infrastructure + app-level)
  • [x] Regular testing (quarterly HIPAA checks)
  • [x] 72-hour breach notification procedure documented