| ISO 42001 control | Requirement | Veladon evidence artifact |
| --- | --- | --- |
| ISO 42001 A.4.1 | AI system life cycle — the organization shall determine the life cycle stages for each AI system and integrate risk controls into each stage. | AI-system inventory artifact with lifecycle status per system (pilot, production, retired), owner, retirement conditions, and dependency map. Updated on every new-system detection or status change. |
| ISO 42001 A.5.2 | AI policy — the organization shall establish an AI policy that is appropriate to the purpose of the organization. | Policy template and version-history log — Veladon's policy configuration feeds the AI Acceptable Use Policy document authored by your Head of GRC; every policy change is versioned with timestamp, editor, and rationale. |
| ISO 42001 A.6.1.4 | AI system risk assessment — risk assessments shall be conducted at planned intervals. | Automated quarterly risk assessment feed — per-system risk categorization under EU AI Act Article 6 crosswalk, aggregated exposure by use case, trend analysis across quarters. |
| ISO 42001 A.6.2.3 | Operational planning and control — usage and operations monitoring. The organization shall plan, implement and control processes to meet AI management system requirements. | Prompt-level usage log with A.6.2.3 control ID inline on every event — employee, timestamp, AI system, prompt hash, redaction spans, policy_id, oversight tag. 12+ month retention. |
| ISO 42001 A.7.3 | Data quality for AI systems — organizations using or developing AI systems shall ensure data quality appropriate to the AI system's intended purpose. | Data-minimization evidence per prompt — redacted-vs-raw delta, dictionary version at prompt time, false-positive sampling for detection-model quality. |
| ISO 42001 A.8.3 | Human oversight and intervention — human oversight shall be planned, designed, implemented and maintained. | Per-use-case human-oversight policy registry, per-prompt oversight-tag application log, and exception-handling evidence showing that where policy required review, the reviewer signed off. |
| ISO 42001 A.9.2 | AI system performance monitoring — monitoring the operation of an AI system in relation to the intended purpose. | Performance dashboard artifacts — redaction-rate trends, false-positive / false-negative sampling, policy-version drift, and correlation with user-experience feedback. |
| ISO 42001 A.10.2 | Third-party AI systems — governance of AI systems provided by third parties. | Third-party provider registry — classification of OpenAI, Anthropic, Google, Microsoft, and Perplexity with contract references (ChatGPT Enterprise DPA, Claude Team BAA, Gemini for Workspace SCC), data-residency evidence, and retention-policy links. |
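The A.4.1 row describes a per-system inventory record with lifecycle status, owner, retirement conditions, and dependencies. A minimal sketch of one such entry follows; every field name and value here is illustrative, not Veladon's actual schema:

```python
# Hypothetical A.4.1 inventory entry for one AI system.
# Field names are illustrative, not the product's real schema.
inventory_entry = {
    "system_id": "ai-sys-0042",
    "name": "ChatGPT Enterprise",
    "lifecycle_status": "production",   # one of: pilot | production | retired
    "owner": "head-of-grc@example.com",
    "retirement_conditions": "contract expiry or replacement approved",
    "dependencies": ["okta-sso", "dlp-gateway"],
    "last_status_change": "2025-01-15T09:30:00Z",
}

VALID_STATUSES = {"pilot", "production", "retired"}

def validate_entry(entry: dict) -> bool:
    """Check that an entry carries the fields the A.4.1 evidence row names
    and that its lifecycle status is one of the three allowed values."""
    required = {"system_id", "lifecycle_status", "owner",
                "retirement_conditions", "dependencies"}
    return required <= entry.keys() and entry["lifecycle_status"] in VALID_STATUSES
```

A validator like this is what turns the inventory from a spreadsheet into an auditable artifact: a status change to anything outside the three allowed values fails validation instead of silently drifting.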
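The A.6.2.3 row lists the fields carried on every prompt-level usage event (employee, timestamp, AI system, prompt hash, redaction spans, policy_id, oversight tag, plus the control ID inline). A sketch of how such an event could be assembled, assuming field names that are not confirmed by the source — note the event stores a hash of the prompt rather than the raw text:

```python
import hashlib
from datetime import datetime, timezone

def usage_event(employee: str, ai_system: str, prompt: str,
                policy_id: str, oversight_tag: str,
                redaction_spans: list) -> dict:
    """Build one prompt-level usage-log event with the A.6.2.3 control ID
    inline, as the evidence row describes. All field names are hypothetical."""
    return {
        "control_id": "ISO42001-A.6.2.3",
        "employee": employee,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_system": ai_system,
        # SHA-256 of the prompt, so the log itself stays data-minimized.
        "prompt_hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "redaction_spans": redaction_spans,  # e.g. [(start, end, category), ...]
        "policy_id": policy_id,
        "oversight_tag": oversight_tag,
    }
```

Hashing rather than storing the prompt keeps the 12+ month retention requirement compatible with the A.7.3 data-minimization evidence: the log proves a specific prompt occurred without retaining its content.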