US federal baseline · Referenced by OMB M-24-10 + SEC cybersecurity rule

Veladon for NIST AI RMF: GOVERN / MAP / MEASURE / MANAGE Evidence

The NIST AI Risk Management Framework (AI RMF 1.0) is the US federal guidance for managing AI risk across the lifecycle. It organizes controls into four functions — GOVERN, MAP, MEASURE, MANAGE — with an extension profile for Generative AI (NIST AI 600-1, July 2024). It is voluntary but referenced by federal procurement (OMB M-24-10), SEC cybersecurity disclosure (2023), and state-level AI legislation (Colorado AI Act, California AI provisions).

Full name
NIST AI Risk Management Framework 1.0 + Generative AI Profile (NIST AI 600-1)
Effective
January 2023 (RMF 1.0); July 2024 (Generative AI Profile)
Jurisdiction
United States (federal guidance; de facto baseline for enterprise)
Primary regulator
NIST (non-regulatory); referenced by OMB, CISA, SEC, and federal procurement

Executive summary · for CISOs + Compliance Officers

Why this matters for the 500–2,500 employee mid-market.

The NIST AI Risk Management Framework (AI RMF 1.0, January 2023) and the Generative AI Profile (NIST AI 600-1, July 2024) are the US federal baseline for AI risk management. The four functions — GOVERN, MAP, MEASURE, MANAGE — organize controls across the AI lifecycle. NIST AI RMF is voluntary, but it is referenced by OMB M-24-10 for federal-agency AI use, by the SEC's 2023 cybersecurity disclosure rule for material AI risk, and by state-level AI legislation (Colorado AI Act, California AI provisions). For a private-sector mid-market company of 500–2,500 employees, NIST AI RMF alignment is the de facto risk-management baseline that US enterprise procurement and audit increasingly expect.

Veladon is engineered for the Generative AI Profile specifically: per-prompt GenAI Profile metadata covering provider, model context, oversight tag, and output disposition; GOVERN / MAP / MEASURE / MANAGE crosswalk indexed on every log event; and quarterly evidence packs pre-assembled against the 19 highest-value sub-categories for mid-market. For a CISO preparing SEC Form 10-K Item 106 disclosure on AI-related cybersecurity risk, or a Head of GRC mapping NIST AI RMF alignment for enterprise customer procurement, the evidence pack ships out of the box. No services engagement to build the crosswalk; no customer-side mapping effort.

Which NIST AI RMF controls matter for employees using public LLMs?

These are the specific articles, controls, or sections that govern the moment an employee pastes data into ChatGPT, Claude, or Gemini. A general-DLP retrofit rarely maps to these by default — Veladon's evidence pack carries the references inline on every log line.

  • GOVERN 1 — cultures, risk tolerance, accountability structures
  • GOVERN 4 — teams empowered to prevent harm (includes human oversight)
  • MAP 3 — AI system context and impact assessment
  • MAP 5 — impact on individuals, groups, communities
  • MEASURE 2 — quantitative measurement of risk and impact
  • MEASURE 2.8 — transparency and accountability risks examined and documented (the logging control)
  • MANAGE 1 — risk treatment based on measure outputs
  • MANAGE 4 — risk response, including incident reporting and remediation

Control-by-control mapping · 8 controls

What Veladon evidences for each NIST AI RMF control.

The concrete control-ID to evidence mapping auditors request during fieldwork. Every NIST AI RMF control below is indexed inline on every log line Veladon generates — so the quarterly evidence pack ships pre-sampled for each control.

Control ID · Requirement · Veladon evidence

NIST AI RMF GOVERN 1.1
Requirement: Legal and regulatory requirements involving AI are understood, managed, and documented.
Veladon evidence: Framework crosswalk artifact — every log event tagged with applicable framework references (EU AI Act Article, ISO 42001 Annex A, NIST AI RMF sub-category, state-level AI law where in scope).

NIST AI RMF GOVERN 4.1
Requirement: Organizational teams are committed to a culture that considers and communicates AI risk.
Veladon evidence: Employee training acknowledgment log, policy-notice delivery evidence, incident-response tabletop documentation, and use-case-specific human-oversight policy registry.

NIST AI RMF MAP 3.1
Requirement: Potential benefits of intended AI system functionality and performance are identified.
Veladon evidence: Use-case taxonomy registry — each AI use case with intended benefit, affected stakeholders, and oversight policy. Feeds the MAP function's context-establishment requirement.

NIST AI RMF MAP 5.1
Requirement: Likelihood and magnitude of each identified impact are determined.
Veladon evidence: Per-use-case impact assessment with exposure counts (# prompts, # redactions, # oversight tags), severity scoring, and affected-data-category taxonomy. Quarterly aggregation in the evidence pack.

NIST AI RMF MEASURE 2.8
Requirement: Risks associated with transparency and accountability — as identified in the MAP function — are examined and documented.
Veladon evidence: Per-prompt test/evaluation evidence — redaction rates, false-positive sampling, policy-version drift, user-feedback coupling. 12+ month retention of raw log for audit replay.

NIST AI RMF MEASURE 3.3 (GenAI Profile)
Requirement: Feedback processes are established for end users and impacted parties to report AI system issues or concerns.
Veladon evidence: In-browser feedback capture on every policy hit, user-reported false-positive queue, GRC-reviewable feedback log mapped to the triggering prompt event.

NIST AI RMF MANAGE 1.3
Requirement: Responses to the AI risks deemed high priority are developed, planned, and documented.
Veladon evidence: Risk-treatment action log — blocked prompts, confirmation-prompt mode, policy escalation, novel-category flagging. Per-event treatment decision recorded.

NIST AI RMF MANAGE 4.1
Requirement: Post-deployment AI system monitoring plans are implemented.
Veladon evidence: Post-market monitoring dashboard (shared with EU AI Act Article 72), drift detection, anomaly flagging, incident-response runbook with timestamp evidence.
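To make the inline indexing concrete, the sketch below shows what a crosswalk-tagged log event and a per-control sampling filter could look like. All field names are hypothetical illustrations, not Veladon's published schema.

```python
# Hypothetical prompt-level log event carrying the NIST AI RMF crosswalk
# inline, alongside EU AI Act and ISO 42001 references. Illustrative only.
event = {
    "timestamp": "2025-03-14T09:21:07Z",
    "provider": "openai",
    "model_context": "gpt-4",
    "oversight_tag": "human-reviewed",
    "disposition": "edited",
    "redactions": 2,
    "crosswalk": {
        "nist_ai_rmf": ["MEASURE 2.8", "MANAGE 1.3"],
        "eu_ai_act": ["Article 26(1)"],
        "iso_42001": ["A.6.2.3"],
    },
}

def events_for_control(events, control):
    """Sample the events that evidence a given NIST AI RMF sub-category."""
    return [e for e in events if control in e["crosswalk"]["nist_ai_rmf"]]

# An auditor doing fieldwork on MEASURE 2.8 filters on the inline index:
print(len(events_for_control([event], "MEASURE 2.8")))  # 1
```

Because every event carries its own references, pre-sampling the quarterly pack per control is a filter over the raw log rather than a mapping exercise.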

What lands in your quarterly evidence pack for NIST AI RMF.

Veladon's quarterly evidence pack is structured around the exact artifacts a Big 4 auditor or regulator asks for. The list below is what lands in your /quarterly-exports/ folder 30 days after deployment.

  1. GOVERN 4 — human oversight per AI use case, documented and evidenced per prompt
  2. MAP 3 — AI system context captured via the inventory: provider, purpose, deployment surface, employee cohort
  3. MEASURE 2.8 — prompt-level logs with test/evaluation evidence (redaction rates, false-positive sampling, policy version drift)
  4. MANAGE 1 — risk treatment actions (blocked prompts, confirmation-prompt mode, policy escalation) evidenced per event
  5. MANAGE 4 — incident reporting artifacts for serious events (exfiltration attempts, novel-category detection)
  6. Generative AI Profile (NIST AI 600-1) — specific controls for GenAI use evidenced via prompt / output hash, redaction taxonomy, and model-provider metadata
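A minimal sketch of how the quarterly roll-up behind items like MEASURE 2.8 could be computed from prompt-level events — field names are illustrative, not Veladon's actual export format:

```python
from collections import Counter

def quarterly_summary(events):
    """Roll prompt-level events up into the metrics a quarterly pack reports.

    Each event is assumed to carry a redaction count and the list of
    NIST AI RMF sub-categories it evidences (hypothetical schema).
    """
    total = len(events)
    redacted = sum(1 for e in events if e["redactions"] > 0)
    by_control = Counter(c for e in events for c in e["controls"])
    return {
        "prompt_count": total,
        "redaction_rate": redacted / total if total else 0.0,
        "events_per_control": dict(by_control),
    }

events = [
    {"redactions": 2, "controls": ["MEASURE 2.8", "MANAGE 1.3"]},
    {"redactions": 0, "controls": ["MEASURE 2.8"]},
]
summary = quarterly_summary(events)
print(summary["redaction_rate"])  # 0.5
```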

Implementation playbook · 5 phases · 500 employees in 5–10 business days

How to deploy Veladon for NIST AI RMF in a compressed timeline.

  1. Phase 01

    GOVERN setup

    Week 1 · Days 1–5

    Activities

    • Head of GRC authors AI Policy with NIST AI RMF GOVERN function references
    • Define risk tolerance statement per use case
    • Establish roles and accountabilities (CISO + Compliance Officer + Head of GRC)
    • Deploy Veladon pilot for discovery

    Artifacts produced

    • AI Policy v1 (5–10 pages)
    • Risk tolerance statement
    • Roles and accountabilities doc
    • Pilot deployment evidence
  2. Phase 02

    MAP execution

    Week 2 · Days 6–10

    Activities

    • Produce AI-system inventory with context per system (MAP 3)
    • Document use cases and impact assessments (MAP 5)
    • Cross-walk inventory to EU AI Act Article 6 where applicable
    • Configure Veladon use-case taxonomy

    Artifacts produced

    • AI-system inventory (MAP 3 artifact)
    • Use-case impact assessment matrix (MAP 5 artifact)
    • Context-establishment doc per use case
  3. Phase 03

    MEASURE operationalization

    Week 2–3 · Days 10–20

    Activities

    • Full Veladon rollout — redaction + monitoring active
    • Establish 90-day measurement window for baseline
    • Configure sampling protocols for false-positive / false-negative evaluation
    • Enable user feedback capture for MEASURE 3.3

    Artifacts produced

    • MEASURE 2.8 sampling protocol
    • 90-day baseline metrics set
    • User feedback queue live
  4. Phase 04

    MANAGE closure

    Month 2–3

    Activities

    • Define risk-treatment policies per use case (MANAGE 1)
    • Establish incident-response playbook (MANAGE 4)
    • Run first tabletop exercise
    • Generate first quarterly NIST AI RMF crosswalk pack

    Artifacts produced

    • Risk-treatment policy registry
    • Incident-response playbook
    • Tabletop exercise results
    • Quarterly NIST AI RMF crosswalk pack v1
  5. Phase 05

    Governance cadence

    Quarterly

    Activities

    • Quarterly evidence-pack review
    • Annual risk tolerance update
    • Annual GOVERN function review
    • Continuous MEASURE metric collection

    Artifacts produced

    • Quarterly crosswalk packs
    • Annual governance review minutes
    • Risk tolerance updates

Concrete use cases · how NIST AI RMF obligations show up in practice

The specific scenarios Veladon covers for NIST AI RMF.

SEC Form 10-K Item 106 disclosure on AI risk

A pre-IPO fintech preparing its first Form 10-K under the 2023 SEC cybersecurity disclosure rule needs to describe AI-related cybersecurity risk management, strategy, and governance. Veladon's NIST AI RMF crosswalk pack provides the evidence spine: GOVERN (policy + roles), MAP (AI-system inventory + context), MEASURE (90-day metrics), MANAGE (incident log + treatment actions). Drafting counsel pulls specific evidence into the 10-K language; the board audit-committee deck pulls the same evidence into governance oversight reporting.

Federal-procurement RFP citing OMB M-24-10

A SaaS vendor responding to a federal RFP finds a new section: AI Risk Management per OMB M-24-10, which references NIST AI RMF as the baseline. The RFP asks for documented NIST AI RMF alignment across the four functions. Veladon's crosswalk pack is direct evidence: GOVERN policy + roles, MAP inventory + context, MEASURE metrics + sampling, MANAGE treatment + incident log. The vendor attaches the pack as Exhibit B to the RFP response — no custom documentation effort.

Generative AI Profile (NIST AI 600-1) adoption

A healthtech organization expands from base NIST AI RMF coverage to the GenAI Profile (July 2024) specifically for employee use of ChatGPT, Claude, Gemini. The GenAI Profile adds controls for provenance tracking, content authenticity, human oversight on consequential outputs, and logging that captures model provider and prompt context. Veladon's default log captures all five dimensions: provider (OpenAI/Anthropic/Google), model context where detectable, oversight tag, output hash, disposition (used as-is / edited / discarded). GenAI Profile alignment is automatic.

Colorado AI Act impact-assessment input

A fintech selling into Colorado needs to comply with the Colorado AI Act's impact-assessment requirements for high-risk AI systems. The framework closely tracks NIST AI RMF MAP function. Veladon's MAP 3 and MAP 5 artifacts (AI-system inventory with context, use-case impact assessment matrix) feed directly into the Colorado-mandated impact assessment template. The same artifacts serve California ADMT requirements and NIST AI RMF MAP evidence — one data set, three regulatory consumers.

Board audit-committee AI governance review

Quarterly board audit committee reviews cybersecurity and AI risk. Veladon's quarterly crosswalk pack provides the agenda: GOVERN (policy + roles update), MAP (new AI systems in inventory), MEASURE (metrics trend vs prior quarter), MANAGE (incidents closed, treatment actions). Audit-committee members get a 2-page executive summary plus appendix pack for deeper review. Over 4 quarters the pack establishes governance cadence visible to external auditors and institutional investors.

Enterprise customer AI vendor-risk questionnaire

An enterprise customer sends a 120-question AI vendor-risk questionnaire, with 40 questions mapped to NIST AI RMF sub-categories. Veladon's crosswalk pack is direct evidence for 30+ of the 40 questions (MAP 3 inventory, MAP 5 impact, MEASURE 2.8 logs, MANAGE 1 treatment, MANAGE 4 monitoring). The vendor's GRC team answers the remaining questions with policy documents. Time-to-complete: 6 hours vs 40+ hours without the pack.

Deadline calendar

NIST AI RMF deadlines + audit milestones.

Framework deadline

Rolling (voluntary US federal framework)

  1. Annual

    Annual governance review (GOVERN function)

    Review AI Policy, roles, risk tolerance. Update for new regulations, new AI systems, organizational changes.

  2. Quarterly

    Quarterly MEASURE metric review

    Review 90-day metrics — redaction rates, false-positive sampling, incident counts, drift. Feed into MANAGE adjustments.

  3. Annual filing

    SEC 10-K Item 106 annual disclosure

    For public companies, annual disclosure of cybersecurity risk management, strategy, governance — including material AI risk.

  4. February 1, 2026

    Colorado AI Act effective date

    Colorado AI Act high-risk AI system duty-of-care and impact-assessment requirements become enforceable. NIST AI RMF MAP artifacts serve as impact-assessment inputs.

Why a general DLP retrofit is insufficient for NIST AI RMF evidence.

NIST AI RMF, especially the Generative AI Profile, is explicit that logging must capture AI-specific metadata: the model provider, the AI system identifier, the prompt context, the human oversight step, and the output handling. General DLP logs capture file movement and data patterns — not AI system identity or oversight per use case. The crosswalk exists but the mapping is lossy: a classic DLP log does not tell an auditor which AI system was used, whether oversight was applied, or which MEASURE 2.8 test result applies. Veladon's logs are GenAI-Profile-aware by default.

Questions CISOs ask about NIST AI RMF

Common questions about NIST AI RMF and employee AI use.

Is NIST AI RMF required for private-sector companies, or only federal agencies?

NIST AI RMF is not legally required for private-sector companies. It is referenced by OMB M-24-10 for federal-agency AI use, by the SEC's 2023 cybersecurity disclosure rule for material AI risk, and by state legislation (Colorado AI Act, California AI provisions). In practice, it is the de facto risk-management baseline that US enterprise procurement and audit increasingly expect. For a 500–2,500 employee mid-market company selling into regulated US markets, NIST AI RMF alignment is the pragmatic minimum.

How do NIST AI RMF functions map to EU AI Act articles and ISO 42001 controls?

The three frameworks align at the conceptual level, with different artifact requirements. GOVERN ↔ EU AI Act Article 29 (AI literacy) + Article 26 accountability ↔ ISO 42001 A.5 (leadership). MAP ↔ EU AI Act Article 6 (risk classification) ↔ ISO 42001 A.4 (lifecycle). MEASURE ↔ EU AI Act Article 26(1) (usage logs) + Article 72 (monitoring) ↔ ISO 42001 A.6.2.3 + A.9. MANAGE ↔ EU AI Act Article 79 (corrective actions) ↔ ISO 42001 A.10 (third-party governance). Veladon's evidence pack carries all three mappings inline so one log line satisfies all three audits.
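The conceptual mapping above can be expressed as a simple lookup table — a navigation aid reproducing the crosswalk as stated here, not legal advice, and with hypothetical structure rather than Veladon's actual data model:

```python
# Crosswalk from NIST AI RMF functions to the EU AI Act and ISO 42001
# references given in the text above. Illustrative lookup only.
CROSSWALK = {
    "GOVERN":  {"eu_ai_act": ["Article 29", "Article 26"],
                "iso_42001": ["A.5"]},
    "MAP":     {"eu_ai_act": ["Article 6"],
                "iso_42001": ["A.4"]},
    "MEASURE": {"eu_ai_act": ["Article 26(1)", "Article 72"],
                "iso_42001": ["A.6.2.3", "A.9"]},
    "MANAGE":  {"eu_ai_act": ["Article 79"],
                "iso_42001": ["A.10"]},
}

def frameworks_for(function):
    """Return the EU AI Act and ISO 42001 references for a NIST function."""
    return CROSSWALK[function]

print(frameworks_for("MEASURE")["eu_ai_act"])  # ['Article 26(1)', 'Article 72']
```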

What does the NIST AI RMF Generative AI Profile require specifically for employee ChatGPT use?

The Generative AI Profile (NIST AI 600-1, July 2024) adds specific controls for generative AI systems: provenance tracking for outputs used in business decisions, content authenticity verification, human oversight on consequential outputs, and logging that captures the model provider and the prompt context. For employee ChatGPT use, this translates to: per-prompt logs with provider identification (OpenAI), model context (GPT-4, GPT-4-mini, etc. where identifiable), oversight tag, and output disposition (used as-is, edited, discarded). Veladon captures all five dimensions in the default usage log.
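A minimal sketch of a record capturing those five dimensions — field names are hypothetical, and the output is stored only as a hash, never as raw text:

```python
from dataclasses import dataclass, asdict
import hashlib

@dataclass
class GenAIPromptRecord:
    """Illustrative per-prompt record for the five GenAI Profile dimensions."""
    provider: str       # e.g. "OpenAI"
    model_context: str  # e.g. "GPT-4", where identifiable
    oversight_tag: str  # e.g. "human-reviewed" / "unreviewed"
    output_hash: str    # SHA-256 of the model output, not the output itself
    disposition: str    # "used-as-is" | "edited" | "discarded"

def make_record(provider, model, oversight, output_text, disposition):
    digest = hashlib.sha256(output_text.encode()).hexdigest()
    return GenAIPromptRecord(provider, model, oversight, digest, disposition)

rec = make_record("OpenAI", "GPT-4", "human-reviewed", "draft reply...", "edited")
print(asdict(rec)["disposition"])  # edited
```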

Does SEC cybersecurity disclosure (Item 106) cover AI risk?

Yes. The SEC's 2023 cybersecurity disclosure rule requires public companies to disclose material cybersecurity incidents (Form 8-K Item 1.05) and describe their cybersecurity risk management, strategy, and governance (Form 10-K Item 106). Material AI-related incidents — data exfiltration via employee prompts, regulatory action under EU AI Act, significant AI-driven business disruption — fall under this rule when material. NIST AI RMF alignment is the practical baseline the SEC references for risk-management governance. For pre-IPO mid-market, this is increasingly relevant to institutional-investor due diligence.

What does a minimum-viable NIST AI RMF evidence pack look like for a 500-employee company?

A minimum-viable pack covers the four functions with these artifacts: GOVERN — AI policy document, roles and accountabilities, risk tolerance statement (5–10 pages, authored by Head of GRC); MAP — AI system inventory with context per system, impact assessment per use case (generated from the tool's inventory); MEASURE — 90 days of prompt-level logs, redaction rate dashboards, false-positive sampling (from the tool); MANAGE — risk treatment log, incident response playbook, corrective action tracking (half policy, half tool-generated). Quarterly export, indexed by NIST function and sub-category.

Tailored FAQ · NIST AI RMF-specific

Additional NIST AI RMF questions Veladon buyers ask.

Is NIST AI RMF legally required for US mid-market companies?

No. NIST AI RMF is a voluntary framework. It is referenced by OMB M-24-10 for federal-agency use, by the SEC's 2023 cybersecurity disclosure rule for material AI risk, and by state-level AI legislation (Colorado AI Act, California AI provisions). In practice, NIST AI RMF alignment is the de facto risk-management baseline that US enterprise procurement and audit increasingly expect. For a 500–2,500 employee US mid-market company selling into regulated markets, NIST AI RMF alignment is the pragmatic minimum.

How does NIST AI RMF crosswalk to EU AI Act Article 26?

The four functions map conceptually: GOVERN ↔ Article 29 AI literacy + Article 26 accountability; MAP ↔ Article 6 risk classification; MEASURE ↔ Article 26(1) usage logs + Article 72 monitoring; MANAGE ↔ Article 79 corrective actions. Veladon's evidence pack carries both mappings inline — one log line satisfies both the NIST sub-category and the EU AI Act article. For organizations with both US and EU exposure, the pack supports both audits without duplicate evidence collection.

What's the GenAI Profile (NIST AI 600-1) and how does Veladon support it?

The Generative AI Profile (July 2024) adds specific controls for generative AI: provenance tracking for outputs used in business decisions, content authenticity verification, human oversight on consequential outputs, logging that captures model provider and prompt context. Veladon captures all five dimensions in the default log: provider (OpenAI/Anthropic/Google/Microsoft/Perplexity), model context where detectable (GPT-4, Claude 3.5 Sonnet, etc.), oversight tag, output hash, disposition. GenAI Profile alignment is automatic, no extra configuration.

Does SEC Form 10-K Item 106 require AI-specific evidence?

Yes when AI risk is material. The 2023 rule requires public companies to describe cybersecurity risk management, strategy, and governance. Material AI-related cybersecurity incidents fall under Form 8-K Item 1.05. Material AI risk management falls under Form 10-K Item 106. NIST AI RMF alignment is the practical evidence baseline. Veladon's crosswalk pack provides the evidence spine for Item 106 language covering AI risk governance and management processes.

Can Veladon output feed California ADMT and Colorado AI Act requirements?

Yes. Veladon's MAP 3 (AI-system inventory with context) and MAP 5 (per-use-case impact assessment) artifacts serve both California ADMT (automated decisionmaking technology) requirements and Colorado AI Act impact-assessment requirements. The same data set indexes to NIST AI RMF MAP function, CCPA/CPRA records, and Colorado AI Act impact-assessment templates. One collection, three regulatory consumers.

What's the minimum viable NIST AI RMF evidence set for a 500-employee company?

GOVERN: AI policy (5–10 pages), roles and accountabilities, risk tolerance statement. MAP: AI-system inventory, use-case context per system, impact-assessment matrix. MEASURE: 90-day prompt-level log, redaction rate dashboard, false-positive sampling. MANAGE: risk-treatment action log, incident-response playbook, corrective-action tracking. Veladon generates MAP 3, MAP 5, MEASURE 2.8, MANAGE 1, MANAGE 4 artifacts out of the box; GOVERN artifacts are GRC-authored policy documents that persist.

Pricing context · 500–2,500 employee deployments

What Veladon typically costs for NIST AI RMF coverage.

At 500–2,500 employees, NIST AI RMF coverage with Veladon lands at $22–32k ACV (mid-market tier) or $45–90k (enterprise tier), with quarterly crosswalk packs (GOVERN / MAP / MEASURE / MANAGE, plus GenAI Profile metadata) included. For pre-IPO companies preparing SEC 10-K Item 106 disclosure, the pack's evidence spine is directly usable in counsel-drafted disclosure language. Compare that with building the crosswalk manually: a general DLP plus 60–120 hours of GRC work per quarter for the MEASURE / MANAGE / MAP evidence. For a fintech, healthtech, or SaaS mid-market company with both US-federal and state-level AI regulation exposure, the bundled pack supports all crosswalks from one data set.

Need NIST AI RMF evidence on a compressed timeline?

Veladon deploys via MDM in 30 minutes and generates the first evidence pack at day 30. Get the Veladon early-access brief — detailed architecture, detection taxonomy, and NIST AI RMF crosswalk.

Get the NIST AI RMF crosswalk