| EU AI Act Art. 26(1) & (6) | Deployers of high-risk AI systems shall take appropriate technical and organizational measures to ensure they use such systems in accordance with the instructions for use, and shall keep the logs automatically generated by the high-risk AI system. | Per-prompt event log with policy_id, timestamp, redaction spans, user identity, AI system (provider + surface), model context where detectable, output hash, and risk category. Append-only events.jsonl retained for 12+ months; an example record is sketched after this table. |
| EU AI Act Art. 26(2) | Deployers shall assign human oversight to natural persons who have the necessary competence, training, and authority. | Human-oversight policy per use case (customer support drafting, financial analysis, code generation, etc.) with reviewer identity, review cadence, and an oversight tag applied and logged on every consequential AI interaction. |
| EU AI Act Art. 26(4) | Where deployers exercise control over input data, they shall ensure that input data is relevant and sufficiently representative in view of the intended purpose. | Data-minimization evidence per prompt: raw-vs-redacted delta showing that only necessary content crossed the border, GDPR Article 6 lawful-basis tag, and a GDPR Article 5 purpose-limitation check against the use-case taxonomy. |
| EU AI Act Art. 26(7) | Deployers who are employers shall inform workers' representatives and affected workers before putting a high-risk AI system into service or use at the workplace. | Transparency-notice acknowledgment log — per-employee first-use notice delivery, acknowledgment timestamp, and works-council consultation reference where applicable. |
| EU AI Act Art. 50 | Deployers of AI systems that generate or manipulate content shall disclose that the content has been artificially generated. | AI-output-disclosure event log — when an employee uses AI-generated content in a deliverable to an affected person, the disclosure event is logged with the output hash, recipient category, and disclosure-notice reference. |
| EU AI Act Art. 6 (risk classification) | Classify each AI system in use according to the high-risk list (Annex III) and the general-purpose AI criteria. | AI-system inventory artifact — every AI system discovered in the estate, with provider, intended purpose, Annex III classification tag, and general-purpose AI flag. Updated on each new surface detection; an inventory entry is sketched after this table. |
| EU AI Act Art. 26(5) & Art. 73 | Deployers shall monitor the operation of high-risk AI systems on the basis of the instructions for use and report any serious incident. | Post-market monitoring dashboard — redaction rate trends, false-positive sampling, policy-version drift, and incident flags. Incident-report template with a 15-day reporting clock anchored to the timestamp at which the incident became known; the deadline calculation is sketched after this table. |
| EU AI Act Annex IV (technical documentation) | Maintain technical documentation that includes intended purpose, persons and processes involved, hardware used, validation and testing. | Annex IV appendix to the quarterly pack — Veladon-generated technical documentation covering detection models, policy version history, validation metrics, and integration surface map, ready for hand-off to a notified body. |
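
To make the evidence artifacts above concrete, here is a minimal sketch of one append-only `events.jsonl` record for the Art. 26(1) & (6) row. The field names mirror the table; the helper name `append_event`, the exact schema, and the example values are assumptions for illustration, not Veladon's documented format.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_event(path: str, *, policy_id: str, user_id: str, provider: str,
                 surface: str, model: str, output_text: str,
                 redaction_spans: list[tuple[int, int]], risk_category: str) -> None:
    """Append one per-prompt event to the append-only events.jsonl log."""
    event = {
        "policy_id": policy_id,                       # policy version in force for this prompt
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                           # identity of the employee issuing the prompt
        "ai_system": {"provider": provider, "surface": surface},
        "model": model,                               # model context, where detectable
        "redaction_spans": redaction_spans,           # character offsets of redacted content
        "output_hash": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
        "risk_category": risk_category,               # e.g. an Annex III classification tag
    }
    with open(path, "a", encoding="utf-8") as log:    # append-only: prior lines are never rewritten
        log.write(json.dumps(event, ensure_ascii=False) + "\n")

# Example (hypothetical values):
append_event(
    "events.jsonl",
    policy_id="pol-2025-03",
    user_id="u-4821",
    provider="OpenAI",
    surface="chatgpt-web",
    model="gpt-4o",
    output_text="Draft reply to customer ticket #1182 ...",
    redaction_spans=[(14, 31)],
    risk_category="annex-iii:employment",
)
```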
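
Similarly, a sketch of how an AI-system inventory entry for the Art. 6 row might be upserted when a new surface is detected, keyed by provider and surface. The file name `ai_inventory.json`, the function name, and the field names are assumptions.

```python
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

def upsert_inventory(path: str, *, provider: str, surface: str, intended_purpose: str,
                     annex_iii_tag: Optional[str], gpai: bool) -> None:
    """Add or refresh one AI-system inventory entry when a new surface is detected."""
    file = Path(path)
    inventory = json.loads(file.read_text(encoding="utf-8")) if file.exists() else {}
    key = f"{provider}/{surface}"                     # one entry per provider + surface
    inventory[key] = {
        "provider": provider,
        "surface": surface,
        "intended_purpose": intended_purpose,
        "annex_iii_tag": annex_iii_tag,               # None when the use case is not on the high-risk list
        "general_purpose_ai": gpai,
        "last_seen": datetime.now(timezone.utc).isoformat(),
    }
    file.write_text(json.dumps(inventory, indent=2, ensure_ascii=False), encoding="utf-8")

# Example (hypothetical values):
upsert_inventory(
    "ai_inventory.json",
    provider="Anthropic",
    surface="claude-web",
    intended_purpose="customer support drafting",
    annex_iii_tag=None,
    gpai=True,
)
```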
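
And for the Art. 73 row, a sketch of the 15-day reporting clock, anchored to the moment the incident became known rather than to the incident itself. Function and variable names are illustrative.

```python
from datetime import datetime, timedelta, timezone

# Art. 73 allows at most 15 days from the moment the deployer becomes aware
# of a serious incident.
REPORTING_WINDOW = timedelta(days=15)

def report_deadline(aware_at: datetime) -> datetime:
    """Latest filing date for the serious-incident report."""
    return aware_at + REPORTING_WINDOW

aware_at = datetime(2025, 3, 3, 9, 30, tzinfo=timezone.utc)    # hypothetical awareness timestamp
print(report_deadline(aware_at).isoformat())                   # 2025-03-18T09:30:00+00:00
```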