Kodori's audit log is hash-chained per tenant — every event's prev_hash is the SHA-256 of the previous event in the same tenant's stream. On-demand verification has been available since /audit shipped its "Verify chain integrity" button. The weekly cron is the proactive companion.
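For concreteness, here is a minimal sketch of the per-event check in TypeScript. The event shape, field names (`prevHash`, `payload`), and the assumption that the hash covers a canonical serialization of the whole previous event are illustrative, not Kodori's actual schema:

```typescript
import { createHash } from 'node:crypto';

// Illustrative event shape, not Kodori's actual schema.
interface AuditEvent {
  id: string;
  prevHash: string; // SHA-256 hex of the previous event in this tenant's stream
  payload: string;  // canonical serialization that the hash covers (assumed)
}

const sha256 = (s: string) => createHash('sha256').update(s).digest('hex');

// Walk oldest-to-newest: each event must carry the hash of its predecessor.
function verifyChain(events: AuditEvent[]): { ok: true } | { ok: false; firstMismatch: string } {
  for (let i = 1; i < events.length; i++) {
    if (events[i].prevHash !== sha256(events[i - 1].payload)) {
      return { ok: false, firstMismatch: events[i].id };
    }
  }
  return { ok: true };
}
```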
**What it does**
Every Sunday 02:00 UTC, an Inngest function:
1. Lists every tenant in the workspace.
2. For each tenant, lists the tenant's chain partitions (D287 — calendar-quarter chain-of-chains).
3. For each (tenant, partition), walks the partition's chain via prev_hash links (NOT by createdAt — same-transaction events share a timestamp; the hash chain itself is the only order-stable traversal). Same code path as the on-demand button.
4. Emits one audit.verification.completed event per (tenant, partition) onto the per-tenant verification stream (audit-verification/<tenantId>) with the result payload — ok, eventsChecked, through, verifiedAt, truncated, partitionKey, and firstMismatch on failure (D288).
5. On failure, broadcasts an alert email to every member of the affected tenant with the first-mismatch detail (event id, type, stream, version, expected vs actual hash) plus the partition that broke. A sketch of the whole function follows this list.
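The sketch below uses Inngest's v3 `createFunction` API. The function id and the helpers (`listTenants`, `listPartitions`, `verifyPartition`, `alertTenantMembers`) are assumptions standing in for Kodori internals; only the overall structure (single-flight cron, per-partition verify, event emit, failure alert) mirrors the list above:

```typescript
import { Inngest } from 'inngest';

const inngest = new Inngest({ id: 'kodori' });

// Hypothetical helpers standing in for Kodori internals:
declare function listTenants(): Promise<{ id: string }[]>;
declare function listPartitions(tenantId: string): Promise<string[]>;
declare function verifyPartition(tenantId: string, partitionKey: string): Promise<{
  ok: boolean;             // payload also carries eventsChecked, through,
  firstMismatch?: unknown; // verifiedAt, truncated; firstMismatch on failure (D288)
}>;
declare function alertTenantMembers(tenantId: string, partitionKey: string, mismatch: unknown): Promise<void>;

export const weeklyChainVerify = inngest.createFunction(
  { id: 'audit-chain-verify-weekly', concurrency: { limit: 1 } }, // single-flight (D279, below)
  { cron: 'TZ=UTC 0 2 * * 0' },                                   // every Sunday 02:00 UTC
  async ({ step }) => {
    const tenants = await step.run('list-tenants', () => listTenants());
    for (const tenant of tenants) {
      const partitions = await step.run(`partitions-${tenant.id}`, () => listPartitions(tenant.id));
      for (const key of partitions) {
        // Same walk as the on-demand "Verify chain integrity" button.
        const result = await step.run(`verify-${tenant.id}-${key}`, () => verifyPartition(tenant.id, key));
        await step.sendEvent(`emit-${tenant.id}-${key}`, {
          name: 'audit.verification.completed',
          data: { tenantId: tenant.id, partitionKey: key, ...result },
        });
        if (!result.ok) {
          await step.run(`alert-${tenant.id}-${key}`, () => alertTenantMembers(tenant.id, key, result.firstMismatch));
        }
      }
    }
  }
);
```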
**Why this matters for SOC 2 / 21 CFR Part 11**
The on-demand button is "we verify when asked"; auditors will ask "do you verify even when no one's asking?" The cron is the answer. Compliance teams filter /audit?stream=audit-verification/<tenantId> for the verification history — every weekly proof going back arbitrarily far, plus the email log on the rare failure.
**Why weekly, not daily**
SOC 2 evidence cadence is monthly at most; weekly is more than enough granularity to claim continuous verification. Daily would 7× the compute cost and incur the verifier's per-tenant memory spike seven times as often (the verifier holds every event in memory while walking the chain) with no compliance payoff. Sunday 02:00 UTC is a low-traffic window across both the US and EU.
**Failure response**
The audit event records the failure regardless of email delivery. The alert email is on top of that — it's a security notification, transactional in nature, and ignores the onboarding-tip unsubscribe flag (you can't opt out of integrity alerts). If verification fails:
1. Capture the event id from the email or from /audit?stream=audit-verification/<tenantId>.
2. Email security@kumokodo.ai with the event id; we engage forensic triage.
3. The chain itself is the system of record — the failure event is hash-chained like any other event, and every subsequent event links back to it through its prev_hash, so a tampered chain wouldn't credibly hide its own failure events.
**Per-tenant cadence control**
Today the cron runs weekly for every tenant. If you need daily proof for a specific compliance posture, email security@kumokodo.ai — tenant-level scheduling overrides land when a customer asks.
**How the chain stays verifiable at 100M-doc-tenant scale (D287)**
The hash chain is partitioned by calendar quarter (e.g. `2026-Q2`). Within a partition, every event's prev_hash links to the previous event IN THE SAME PARTITION. The first event of a new partition has prev_hash equal to SHA-256 of the prior partition's tip — that event is the inter-partition link. Verification can run per-partition, and the chain of inter-partition links collectively proves "the entire history is intact." At 100M-doc-tenant volumes a tenant accumulates 500M-2B events; verifying one quarter at a time keeps each verification run inside its wallclock budget while the chain-of-chains semantics preserve the end-to-end tamper-evidence claim.
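A sketch of the boundary check that ties partitions together. `PartitionSummary` and its fields are hypothetical names for whatever the verifier actually materializes per partition; the invariant itself (first event of partition N links to the SHA-256 of partition N−1's tip) is the one described above:

```typescript
import { createHash } from 'node:crypto';

const sha256 = (s: string) => createHash('sha256').update(s).digest('hex');

// Hypothetical shapes; "tipPayload" stands in for the canonical form Kodori hashes.
interface PartitionSummary {
  key: string;           // e.g. '2026-Q2'
  firstPrevHash: string; // prev_hash of the partition's first event
  tipPayload: string;    // canonical form of the partition's last event
}

// Each partition's first event must link to the prior partition's tip. Proving every
// boundary link, plus each partition's internal walk, proves the whole history is intact.
function verifyPartitionLinks(partitions: PartitionSummary[]): string | null {
  for (let i = 1; i < partitions.length; i++) {
    if (partitions[i].firstPrevHash !== sha256(partitions[i - 1].tipPayload)) {
      return partitions[i].key; // first broken boundary
    }
  }
  return null; // all inter-partition links hold
}
```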
The cutover is soft: events created before D287 keep a NULL chain_partition_key and are walked as one big "pre-partition" chain by the verifier. Events created after D287 get their YYYY-Q<n> key populated automatically by the append path. The very first post-D287 event for a tenant becomes the inter-partition link from the pre-D287 chain to the new partition.
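The key derivation is simple enough to show inline. This sketch assumes the key is computed from the event's creation timestamp in UTC; the function name is illustrative:

```typescript
// Calendar-quarter key for the append path (a sketch; the real derivation may differ).
function chainPartitionKey(createdAt: Date): string {
  const quarter = Math.floor(createdAt.getUTCMonth() / 3) + 1; // months 0-11 map to quarters 1-4
  return `${createdAt.getUTCFullYear()}-Q${quarter}`;
}

chainPartitionKey(new Date('2026-05-14T09:00:00Z')); // => '2026-Q2'
// Pre-D287 rows keep chain_partition_key = NULL and are walked as the one pre-partition chain.
```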
**How concurrent runs are prevented (and what isolates tenants under load)**
Every Kodori cron declares `concurrency: { limit: 1 }` so two runs of the same cron can't overlap — an essential property for prune crons and verify crons where two parallel walks would double-emit events or apply state twice. Companion event-driven Inngest functions (auto-classify, extract, embed, webhook deliver/fanout, digest send, alert dispatchers, automations dispatcher, legal-hold Object Lock) all declare `concurrency: { key: 'event.data.tenantId', limit: N }` so a busy tenant can't starve the slot pool for everyone else — each tenant gets its own per-function concurrency budget. This is what makes the platform tenant-isolated under load: a customer batch-importing 100K documents won't slow another tenant's classification, extraction, or webhook delivery. Both protections shipped together (D279, 2026-04-30) as the sister-fix to D270's stripe-events-prune single-flight.
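Side by side, the two concurrency shapes look like this in Inngest config. The function ids, schedule, event name, and N = 5 are illustrative; the two `concurrency` objects are the point:

```typescript
import { Inngest } from 'inngest';

const inngest = new Inngest({ id: 'kodori' });

// Cron shape: one global slot, so two runs of the same cron can never overlap.
export const pruneCron = inngest.createFunction(
  { id: 'stripe-events-prune', concurrency: { limit: 1 } },
  { cron: 'TZ=UTC 0 3 * * *' }, // schedule is illustrative
  async () => { /* prune work */ }
);

// Event-driven shape: the key partitions the limit per tenant, so each tenant
// gets its own budget of N slots and a busy tenant can't starve the rest.
export const autoClassify = inngest.createFunction(
  { id: 'auto-classify', concurrency: { key: 'event.data.tenantId', limit: 5 } }, // N = 5 illustrative
  { event: 'document/uploaded' }, // event name is an assumption
  async ({ event }) => { /* classify event.data.documentId */ }
);
```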
**Operator visibility into the work the crons + workflows are doing**
Two admin-only surfaces split cleanly: `/admin/cron-status` answers "did the cron run? what did it return?" (D278), and `/admin/queue-depth` answers "is the work backing up?" (D280). Queue-depth surfaces per-tenant extraction + auto-classify pipeline counts (pending / running / stuck / recently-failed) plus last-hour throughput rates. Tone-coded health rollup at the top — green when work is keeping up, amber when the backlog is growing, red when documents are stuck or the failure rate is elevated. Both surfaces are owner / admin gated. The cron-status query itself is index-backed via the (tenant_id, type, created_at) index added in D284's enterprise-volume index audit — "latest event of type T for tenant X" is O(log n) regardless of how many cron-emit events the tenant has accumulated.
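A sketch of why that lookup stays O(log n). Table and column names are assumptions; the index DDL mirrors the (tenant_id, type, created_at) composite described above:

```typescript
// Hypothetical table/column names. The composite index (per D284's audit):
//   CREATE INDEX audit_events_tenant_type_created_idx
//     ON audit_events (tenant_id, type, created_at);

// "Latest event of type T for tenant X": Postgres can satisfy the ORDER BY + LIMIT 1
// with a backward scan of that index, so the lookup stays O(log n) however many
// cron-emit rows the tenant has accumulated.
const latestCronEventSql = `
  SELECT *
    FROM audit_events
   WHERE tenant_id = $1
     AND type      = $2
   ORDER BY created_at DESC
   LIMIT 1
`;
```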