# Activity Tracking
Best-effort "last seen" timestamps for humans and memberships. Implements P35.
## What this is for
UI surfaces like:
- Console "inactive users" view — surface accounts that haven't been seen in 30/60/90 days.
- Per-principal "memberships across orgs" — show last touch per (principal, org) pair so a Console superadmin can see which clinics a support engineer actually works in.
- Admin "last login"-style summaries in the clinic admin UI.
- Auto-cleanup jobs that prune stale memberships or flag idle accounts.
## What this is NOT
- Not a substitute for the audit log. Audit is synchronous, durable, and legally relevant; activity is display metadata. NEVER use `humans.last_activity` or `organization_memberships.last_used_at` for compliance decisions, breach investigations, or anything that has to be defensible after the fact.
- Not a real-time signal. The middleware throttles bumps to once per 60 seconds per key. A "last seen" view can show a value up to 60s stale even when the user is actively making requests.
- Not durable across crashes. UPDATEs are dispatched as background goroutines. A SIGKILL between request and UPDATE drops that bump.
## Columns
| Column | Type | When it's bumped |
|---|---|---|
| `humans.last_activity` | TIMESTAMPTZ | Every authenticated request, throttled per principal |
| `organization_memberships.last_used_at` | TIMESTAMPTZ (nullable) | Every org-scoped staff request, throttled per (principal, org) |
| `patients.last_used_at` | TIMESTAMPTZ (nullable) | Every org-scoped patient request, throttled per (principal, org) |
The org-scoped bump splits by session shape post-1.26: staff sessions write the membership column, patient sessions write the patient column. The middleware reads `Subject.IsPatientSession` (set by OrganizationContext when the org-scope grant came from a `patients` row rather than an `organization_memberships` row) to pick the path. The throttle caches for the two paths are independent: for a human who is staff at org A and a patient at org B, one path's throttle never hides the other's activity.
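The path pick can be sketched as a tiny helper. `Subject` and `targetTable` here are illustrative stand-ins, not the real middleware types:

```go
package main

import "fmt"

// Subject models only the field the sketch needs; the real Subject
// carries much more and is set by OrganizationContext.
type Subject struct {
	PrincipalID      string
	IsPatientSession bool
}

// targetTable picks which org-scoped column a bump writes, based on
// whether the org-scope grant came from a patients row or an
// organization_memberships row.
func targetTable(s Subject) string {
	if s.IsPatientSession {
		return "patients.last_used_at"
	}
	return "organization_memberships.last_used_at"
}

func main() {
	staff := Subject{PrincipalID: "h1", IsPatientSession: false}
	patient := Subject{PrincipalID: "h1", IsPatientSession: true}
	fmt.Println(targetTable(staff))   // staff session bumps the membership column
	fmt.Println(targetTable(patient)) // patient session bumps the patient column
}
```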
`organizations.last_activity_at` is not a stored column. When needed, derive it as `MAX(organization_memberships.last_used_at) WHERE organization_id = ?`; the `idx_org_memberships_org` index makes the aggregation cheap. If a Console list view ever surfaces a real performance problem from per-row aggregation, we add the column then; preemptive denormalization is the failure mode this avoids.
`organization_memberships.last_used_at` is nullable on purpose. `NULL` means "this membership exists but the principal has never made an org-scoped request as part of it": e.g., an invitation that was accepted but never used. A `DEFAULT NOW()` would lie about that case.
## Write strategy
The path through one request:
```
Authenticate → OrganizationContext → ActivityTracker (middleware)
                      │
                      ▼
  activity.Track(ctx, principalID, orgID, isPatient)
                      │
                      ├─ principal cache hit (within 60s) → no-op
                      │
                      └─ principal cache miss → goroutine →
                             admin pool → UPDATE humans SET last_activity = NOW()
                             + if isPatient: UPDATE patients ... (joined via patient_profiles.human_id)
                               else:         UPDATE organization_memberships ...
```

Throttle: 60 seconds per key (`internal/core/activity/tracker.go:ThrottleInterval`). Keys are `principalID` for the human bump and `principalID|orgID` for the org-scoped bumps. Membership and patient bumps live in separate caches, so a human who is staff at org A and a patient at org B has independent throttle state per path.
Background dispatch: a cache miss kicks off a tracked goroutine that runs the UPDATE under a fresh 5-second context derived from `context.Background()`, so a cancelled request context doesn't abort the write. The middleware never blocks on the write. Errors are logged and dropped; activity is best-effort.
GC sweep: a background ticker runs every hour (`gcInterval`) and purges cache entries older than 24 hours (`ttlPurgeAge`). This bounds memory on long-uptime processes; a re-bump for a user who returns after eviction is harmless.
Multi-instance behavior: each instance throttles locally. With N instances, the worst-case rate per principal is N updates/minute, still negligible at any sane fleet size. If a deployment ever sees write pressure on `humans` / `organization_memberships` / `patients` from these UPDATEs, the next escalation is a Redis-buffered batch (single coordinator) or a PG advisory-lock batch worker. Until then this package is sufficient.
Pool choice: bumps run against the admin pool. The bump goroutine outlives the request, so the request-scoped connection on the app pool is already released by the time the bump fires. Running on the admin pool also bypasses RLS, which is the right call for a platform-internal write.
## Configuration
None. The thresholds are constants in `internal/core/activity/tracker.go`. A future Layer 11 (telemetry) might surface a `DroppedCount`-style metric for these bumps; today the `slog.Warn` on bump failure is the only signal.
## Graceful shutdown
`activity.Shutdown(ctx)` runs from `main.go` after `srv.Shutdown` and `events.Shutdown`. It closes the GC ticker and waits for in-flight goroutines to finish their UPDATEs, capped by the shutdown context's deadline. A timeout is logged but not fatal; losing a few last bumps on a hard-cancelled drain is acceptable for best-effort data.
## Tests
`make test` covers the throttle (cache hit / miss), per-key independence (distinct users / distinct memberships bumped separately), behavior across the throttle window (advances a fake clock past `ThrottleInterval` and asserts a second bump fires), GC sweep (entries older than `ttlPurgeAge` are evicted), bumper-error swallowing (`Track` never propagates DB errors), and post-shutdown no-op behavior. All of it is race-detector clean.
A real-DB integration test isn't needed at this layer: the persistence boundary is a single `pgxBumper` running two well-known UPDATEs against existing columns. If a regression in the SQL ever surfaces, the `rlstest` harness can grow a small smoke test that asserts the bumps land.
## Where the values appear today
Nowhere yet. The middleware writes the values; no UI consumes them as of Layer 1.11. Console "memberships across orgs" (Layer 1.13) is the first scheduled consumer.
The convention from P35 holds: wire writes before any UI surfaces the values. This document, and the middleware, are the writes.