Compliance · AI transparency

AI disclosure

Last updated 2026-05-02. Documentation, logging, human oversight, and transparency disclosures for the AI features in Kodori, produced in conformance with Articles 11, 12, 14, and 50 of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689). This disclosure also addresses the UK's pro-innovation AI framework, US Executive Order 14110 and successor frameworks, and the Colorado AI Act (CO SB24-205).

Risk classification under the EU AI Act

Kodori is a general-purpose document management platform that uses AI for retrieval, classification, metadata extraction, and operator-supervised task execution. None of these uses fall into the Annex III high-risk categories (employment / education / law enforcement / critical infrastructure / migration / justice / democratic process). Kodori is therefore not classified as a "high-risk AI system" under Chapter III. We voluntarily document the Article 11 / 12 / 14 controls below because regulated customers (legal, AEC, QMS, accounting) frequently require this transparency in their security review even when not legally compelled.

Article 11 — Technical documentation

The AI features Kodori ships, the model providers behind them, and the purpose of each feature are documented below:

AI agent (operator-supervised task execution)

  • Model: Anthropic Claude Opus 4.6 (reasoning), Claude Haiku 4.5 (classification, extraction, routing). Provider documentation: anthropic.com/transparency.
  • Routing layer: Vercel AI SDK + Vercel AI Gateway. We do not train, fine-tune, or otherwise modify the base model. We do not retain prompt or completion data beyond the request lifecycle (Anthropic zero-data-retention is enabled on Kodori’s API key).
  • Capability scope: the agent invokes a typed MCP tool catalog (124 tools as of 2026-05-01). Every tool has a Zod input schema, an authorization gate, and a consequential-action confirmation gate (delete, restore, sensitivity-change, retention-change, hold-change, bulk operations larger than 10 items).
  • Data inputs: the agent receives only the content the requesting user already has read access to. The same `canReadDocument` gate the UI uses applies to every agent retrieval. There is no privileged "agent role."
  • Outputs: natural-language responses + tool invocations. Tool invocations are reversible per the event-sourced architecture.
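The consequential-action confirmation gate described above can be sketched as follows. This is an illustrative model, not Kodori's actual implementation; the type and function names (`ToolCall`, `gate`) are hypothetical, and the real gate additionally validates inputs against each tool's Zod schema and authorization rules.

```typescript
// Hypothetical sketch of the consequential-action gate. A tool call that is on
// the consequential list, or a bulk operation larger than 10 items, is held
// until the user explicitly confirms in the same conversation turn.
type ToolCall = { name: string; itemCount?: number; confirmed?: boolean };

const CONSEQUENTIAL = new Set([
  "delete", "restore", "sensitivity-change", "retention-change", "hold-change",
]);
const BULK_THRESHOLD = 10;

function requiresConfirmation(call: ToolCall): boolean {
  if (CONSEQUENTIAL.has(call.name)) return true;
  return (call.itemCount ?? 0) > BULK_THRESHOLD; // bulk ops larger than 10 items
}

function gate(call: ToolCall): "execute" | "await-user-confirmation" {
  if (requiresConfirmation(call) && !call.confirmed) {
    return "await-user-confirmation"; // the agent cannot set `confirmed` itself
  }
  return "execute";
}
```

Note that `confirmed` is only ever set by the UI layer after an explicit user action, which is what prevents the agent from self-confirming by inferring intent.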

Auto-classification

  • Model: Claude Haiku 4.5.
  • Purpose: propose sensitivity tier (public / internal / confidential / restricted / regulated) + collection assignment + suggested keywords + AI summary on ingest.
  • Decision binding: proposals only. Human-in-the-loop confirmation is required before persistence (see Article 14 below).

Hybrid search (semantic component)

  • Model: OpenAI text-embedding-3-small (1536 dimensions). Embedding-only — text is not sent to OpenAI for completions or summarization.
  • Purpose: generate embeddings for documents and queries. Reciprocal Rank Fusion (k=60) combines semantic ranking with full-text-search ranking.
  • Personal data exposure: chunked document text sent at embed time only; OpenAI does not retain server-side beyond the request.
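The Reciprocal Rank Fusion step can be sketched as below. This is a generic RRF implementation under the stated k=60, not Kodori's production ranking code; each result list contributes 1 / (k + rank) per document, and the fused score is the sum across lists.

```typescript
// Illustrative Reciprocal Rank Fusion (k = 60). Inputs are two arrays of
// document ids ordered best-first (semantic ranking and full-text ranking);
// output is the ids ordered by fused score, highest first.
function rrfFuse(semantic: string[], fullText: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of [semantic, fullText]) {
    ranking.forEach((id, i) => {
      // rank is 1-based, so position i contributes 1 / (k + i + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A document that appears in both rankings accumulates score from both lists, which is why RRF rewards agreement between the semantic and full-text components.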

DLP scanning

  • Method: deterministic pattern + checksum detectors (no AI). US SSN, Luhn-validated CC, ABA-validated routing, MRN-prefixed identifiers, AWS access keys, GitHub tokens, PEM private-key blocks, JWTs, key=value secrets.
  • Decision binding: high-confidence findings auto-escalate sensitivity to "regulated"; the matched value is never stored — only a pre-redacted preview.
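The Luhn validation mentioned above is a standard deterministic checksum; a minimal sketch of the kind of detector involved (not Kodori's actual detector code):

```typescript
// Illustrative Luhn checksum validator, used to confirm that a candidate
// digit string is plausibly a payment-card number before escalating
// sensitivity. Deterministic: no AI model is involved.
function luhnValid(candidate: string): boolean {
  const digits = candidate.replace(/[\s-]/g, "");
  if (!/^\d{12,19}$/.test(digits)) return false; // typical card-number lengths
  let sum = 0;
  // Walk right-to-left, doubling every second digit; subtract 9 when the
  // doubled digit exceeds 9 (equivalent to summing its decimal digits).
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}
```

Because the check is a pure function of the digits, findings are reproducible and auditable, unlike probabilistic classifier output.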

Article 12 — Logging and traceability

Every consequential AI action is logged on Kodori’s hash-chained per-tenant audit log:

  • Every agent tool invocation emits an event with `actorKind: 'agent'`, `actorId: <agent-session-id>`, `subjectId: <doc-id|collection-id|...>`, `eventType`, and `payload` (before/after where relevant).
  • Auto-classification proposals emit `document.classified` events with the proposed values; human acceptance emits `document.metadata-changed` with `acceptedFromAi: true`.
  • Anomaly detection automatically writes deny-rules on high-severity AGENT signals (high-volume regulated reads, tool-call ceiling exceeded, off-hours spike against 7-day baseline). Every state transition lands on the audit chain.
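The baseline comparison behind the off-hours spike signal can be sketched as a simple statistical test. The three-standard-deviation heuristic below is an assumption for illustration; the production detector's thresholds are not specified here.

```typescript
// Illustrative spike check against a 7-day baseline (assumed heuristic).
// Flags the current window's agent activity when it exceeds the baseline
// mean by more than three standard deviations.
function isSpike(currentCount: number, baselineCounts: number[]): boolean {
  const n = baselineCounts.length;
  if (n === 0) return false; // no baseline yet: nothing to compare against
  const mean = baselineCounts.reduce((a, b) => a + b, 0) / n;
  const variance = baselineCounts.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  return currentCount > mean + 3 * Math.sqrt(variance);
}
```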

Customers can answer "what did the AI do this week?" with a single audit-log filter (`actorKind=agent`) and "what data did the AI see?" by joining events on `subjectId`. Logs are retained for the customer-configured retention class period (default 7 years for regulated, 3 years for confidential, 1 year for internal).
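The tamper-evidence property of the hash-chained log can be sketched as follows. The field names mirror the events described above; the specific chaining scheme (SHA-256 over the previous hash plus a canonical field string) is an assumption for illustration, not Kodori's wire format.

```typescript
import { createHash } from "node:crypto";

// Illustrative hash-chained audit event. Each event commits to its
// predecessor via prevHash, so altering any past event invalidates
// every later hash in the chain.
type AuditEvent = {
  actorKind: "user" | "agent" | "system";
  actorId: string;
  subjectId: string;
  eventType: string;
  prevHash: string; // hash of the previous event; genesis uses ""
  hash: string;     // sha256 over prevHash + canonical fields
};

function eventHash(e: Omit<AuditEvent, "hash">): string {
  const canonical = [e.prevHash, e.actorKind, e.actorId, e.subjectId, e.eventType].join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

// Appends an event, linking it to the current chain head.
function appendEvent(
  chain: AuditEvent[],
  fields: Pick<AuditEvent, "actorKind" | "actorId" | "subjectId" | "eventType">,
): AuditEvent[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  const base = { ...fields, prevHash };
  return [...chain, { ...base, hash: eventHash(base) }];
}

// Recomputes every hash and checks each link; returns false on any tampering.
function verifyChain(events: AuditEvent[]): boolean {
  let prev = "";
  for (const e of events) {
    if (e.prevHash !== prev || e.hash !== eventHash(e)) return false;
    prev = e.hash;
  }
  return true;
}
```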

Article 14 — Human oversight

Kodori implements operator-supervised AI throughout. The oversight controls in concrete terms:

  • Confirmation before consequential actions. The agent will not delete, restore, change sensitivity, change retention, change legal-hold state, or run bulk operations larger than 10 items without explicit user confirmation in the same conversation turn. Confirmation requires the user to type a yes-equivalent or click an explicit confirm button; the agent cannot self-confirm by inferring intent.
  • No invented reasons. When the agent invokes a tool that requires a `reason` parameter, the agent must obtain the reason from the user — it is forbidden by system prompt from synthesizing one.
  • Reversibility window. Every consequential action is reversible within the retention window. Deletes are tombstones. The /agent-activity surface lets a user see the sequence of actions taken on their behalf and revert them.
  • Workspace pause. Workspace owners can disable specific MCP tool categories or pause the agent entirely from /admin/settings. Agent-deny rules at /admin/permissions block specific principals.
  • Tool-call ceiling. The agent runner enforces a per-conversation tool-call ceiling; runs that exceed the ceiling halt and request human review. The ceiling is configurable per workspace.
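The tool-call ceiling in the last bullet can be sketched as a per-conversation budget. The class and method names below are hypothetical; the real runner's API is not shown here.

```typescript
// Illustrative per-conversation tool-call budget. Once the configured
// ceiling is exceeded, the run halts and is flagged for human review
// rather than continuing autonomously.
class ToolCallBudget {
  private calls = 0;
  constructor(private readonly ceiling: number) {}

  // Called before each tool invocation. Returns "proceed" while at or
  // under the ceiling, "halt-for-review" once the ceiling is exceeded.
  next(): "proceed" | "halt-for-review" {
    this.calls += 1;
    return this.calls > this.ceiling ? "halt-for-review" : "proceed";
  }
}
```

Making the ceiling per-workspace configuration (rather than a global constant) lets tightly regulated tenants trade autonomy for oversight.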

Article 50 — Transparency obligations

  • AI-system disclosure. The agent surface is labeled "Agent" / "AI" throughout the UI; users always know they are interacting with an AI system. No AI feature is disguised as a non-AI feature.
  • AI-generated content marking. AI-generated summaries on the document detail page and the dashboard recent-docs cards are visually distinguished (ochre callout block) and labelled "AI summary." AI-proposed metadata displays a "Suggested by AI" tag until accepted by a human.
  • No deepfake / impersonation use. Kodori does not generate synthetic media (image, audio, video) and is not used for biometric identification. Article 50(4) disclosures are not applicable.

Provider information (Article 16(a))

KumoKodo, Inc. is the provider of Kodori within the meaning of Article 3(3) of the AI Act. Contact:

  • Provider: KumoKodo, Inc., Wyoming, USA.
  • EU representative: To be appointed when an EU sub-processor relationship is finalized. Until then, requests via dpo@kumokodo.ai.
  • AI compliance contact: ai-compliance@kumokodo.ai.

Update commitment

This disclosure is reviewed at every model swap (e.g. Anthropic model version change, embedding model change) and at every AI feature addition. The "Last updated" date at the top of the page reflects the most recent material change. Substantive changes are also announced on /changelog.

Cross-references