Deadline: August 2, 2026 · Deployer obligations

Veladon for EU AI Act: Article 26 Deployer Evidence for the 500–2,500-Employee Mid-Market

The EU AI Act is the world's first comprehensive AI regulation; it entered into force on August 1, 2024 and reaches general application on August 2, 2026. It applies to providers (model developers) and deployers (any company whose employees use AI in a professional capacity) — and it catches any non-EU company with even one EU customer, employee, or output consumer.

Full name
European Union Artificial Intelligence Act (Regulation 2024/1689)
Effective
August 2, 2026 (general application of deployer obligations)
Jurisdiction
European Union (extraterritorial reach)
Primary regulator
European AI Office + national competent authorities per member state

Executive summary · for CISOs + Compliance Officers

Why this matters for the 500–2,500 employee mid-market.

The EU AI Act (Regulation 2024/1689) is the first comprehensive AI regulation anywhere. For 500–2,500 employee regulated mid-market companies with any EU exposure — EU customers, EU-based employees, AI output consumed by EU persons — Article 26 deployer obligations apply as of August 2, 2026, and Article 99 enforcement carries penalty tiers up to €35M or 7% of global turnover. The practical threat is not the headline penalty ceiling; it is the Big 4 audit firms now writing EU AI Act readiness into 2026 SOC 2 fieldwork, the enterprise customers writing compliance into renewal contracts, and the vendor-risk-management questionnaires that have added 40–80 new questions about AI inventory, usage logging, and human oversight.

Veladon is engineered to close the 2026 audit gap specifically for the mid-market. Deployment via MDM takes 5–10 business days. The first quarterly evidence pack generates on day 30, pre-indexed to Article 26(1)/(6) usage logs, Article 26(2) human oversight, Article 26(4) data-governance alignment, Article 26(7) worker notification, Article 50 transparency, and Annex IV technical documentation references. No services engagement required, no services SKU, no 90-day program. A Compliance Officer and CISO dyad operating the 2–8 person GRC team at a regulated SaaS, fintech, healthtech, or legaltech mid-market has the artifacts ready 60–120 days before the August 2, 2026 deadline — with margin for audit iteration.

Which EU AI Act controls matter for employees using public LLMs?

These are the specific articles, controls, or sections that govern the moment an employee pastes data into ChatGPT, Claude, or Gemini. A general-DLP retrofit rarely maps to these by default — Veladon's evidence pack carries the references inline on every log line.

  • Article 5 — prohibited AI practices (social scoring, manipulative techniques, real-time biometric ID in public spaces)
  • Article 6 — high-risk AI system classification; deployers must classify every AI system in use
  • Article 26 — deployer obligations: usage logs, human oversight, informing affected persons, cooperation with authorities
  • Article 50 — transparency obligations for generative AI (disclosure when content is AI-generated, watermarking where applicable)
  • Annex IV — technical documentation requirements (primarily provider-facing; deployers need evidence of provider-supplied docs)
  • Article 72 — post-market monitoring of high-risk AI systems in operation
  • Article 73 — serious-incident reporting, within 15 days of becoming aware of a serious incident
  • Article 99 — administrative fines up to €35M or 7% of global annual turnover

Control-by-control mapping · 8 controls

What Veladon evidences for each EU AI Act control.

The concrete control-ID to evidence mapping auditors request during fieldwork. Every EU AI Act control below is indexed inline on every log line Veladon generates — so the quarterly evidence pack ships pre-sampled for each control. A sample log-line sketch follows the mapping.

Control ID · Requirement · Veladon evidence

  • EU AI Act Art. 26(1) + (6)
    Requirement: Deployers of high-risk AI systems shall take appropriate technical and organizational measures to ensure they use such systems in accordance with the instructions for use (26(1)), and shall keep the logs automatically generated by the high-risk AI system (26(6)).
    Veladon evidence: Per-prompt event log with policy_id, timestamp, redaction spans, user identity, AI system (provider + surface), model context where detectable, output hash, and risk category. Append-only events.jsonl retained 12+ months (beyond the six-month minimum in 26(6)).

  • EU AI Act Art. 26(2)
    Requirement: Deployers shall assign human oversight to natural persons who have the necessary competence, training, and authority.
    Veladon evidence: Human-oversight policy per use case (customer support drafting, financial analysis, code generation, etc.) with reviewer identity, cadence, and per-prompt oversight-tag application logged on every consequential AI interaction.

  • EU AI Act Art. 26(4)
    Requirement: Where deployers exercise control over input data, they shall ensure that input data is relevant and sufficiently representative in view of the intended purpose.
    Veladon evidence: Data-minimization evidence per prompt: raw-vs-redacted delta showing only necessary content crossed the border, GDPR Article 5/6 lawful-basis tag, and a purpose-limitation check against the use-case taxonomy.

  • EU AI Act Art. 26(7)
    Requirement: Deployers using high-risk AI systems at the workplace shall inform workers' representatives and affected workers before putting the system into service or use.
    Veladon evidence: Transparency-notice acknowledgment log — per-employee first-use notice delivery, acknowledgment timestamp, and works-council consultation reference where applicable.

  • EU AI Act Art. 50
    Requirement: Deployers of AI systems that generate or manipulate content shall disclose that the content has been artificially generated.
    Veladon evidence: AI-output-disclosure event log — when an employee uses AI-generated content in a deliverable to an affected person, the disclosure event is logged with the output hash, recipient category, and disclosure-notice reference.

  • EU AI Act Art. 6 (risk classification)
    Requirement: Classify each AI system in use according to the high-risk list (Annex III) and the general-purpose AI criteria.
    Veladon evidence: AI-system inventory artifact — every AI system discovered in the estate, with provider, intended purpose, Annex III classification tag, and general-purpose AI flag. Updated on each new surface detection.

  • EU AI Act Art. 72 + 73 (post-market monitoring, serious incidents)
    Requirement: Monitor the operation of high-risk AI systems (Art. 72) and report any serious incident within 15 days of becoming aware of it (Art. 73).
    Veladon evidence: Post-market monitoring dashboard — redaction rate trends, false-positive sampling, policy-version drift, incident flags. Incident report template with the Article 73 15-day clock anchored to the incident timestamp.

  • EU AI Act Annex IV (technical documentation)
    Requirement: Maintain technical documentation that includes intended purpose, persons and processes involved, hardware used, and validation and testing.
    Veladon evidence: Annex IV appendix to the quarterly pack — Veladon-generated technical documentation covering detection models, policy version history, validation metrics, and integration surface map, ready for hand-off to a notified body.
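
To make the inline indexing concrete, here is a minimal sketch of what one events.jsonl line could look like. The field names (event_id, policy_id, redaction_spans, eu_ai_act) are illustrative assumptions, not Veladon's published schema.

    import json
    from datetime import datetime, timezone

    # One append-only events.jsonl line; the eu_ai_act field carries the
    # clause tags inline so the quarterly pack can be pre-sampled per control.
    event = {
        "event_id": "evt-000042",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": "alice@example.com",
        "ai_system": {"provider": "openai", "surface": "chatgpt-web", "tenant": "org"},
        "policy_id": "pol-v3.2",
        "redaction_spans": [[112, 131], [245, 262]],  # character offsets in the raw prompt
        "prompt_hash": "sha256:1f9b2c51",
        "output_hash": "sha256:77aa40de",
        "risk_category": "pii",
        "eu_ai_act": ["26(1)", "26(6)", "26(2)"],
    }
    print(json.dumps(event))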

What lands in your quarterly evidence pack for EU AI Act.

Veladon's quarterly evidence pack is structured around the exact artifacts a Big 4 auditor or regulator asks for. The list below is what lands in your /quarterly-exports/ folder 30 days after deployment; a completeness-check sketch follows the list.

  1. AI system inventory — every public LLM (ChatGPT, Claude, Gemini, Copilot, Perplexity, 50+ others) an employee has used, with deployer classification under Article 6
  2. Usage logs — 90+ day retention of prompt-level events with employee ID, timestamp, AI system, prompt hash, output hash, redaction category, and risk score (Article 26(1) + (6))
  3. Human oversight mapping — per-use-case documentation of who reviews AI outputs before consequential decisions (Article 26(2) + Article 14)
  4. Transparency notice evidence — when employees use AI-generated content in deliverables to customers, Veladon logs the disclosure event for Article 50 evidence
  5. Data-handling alignment — redaction events mapped to GDPR Article 6 / 9 lawful bases and EU AI Act Article 26(4) data-governance obligations
  6. Quarterly evidence pack — JSON + signed PDF summary, pre-indexed against Article 26 clauses, ready to submit to a notified body or hand to a Big 4 auditor
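
As a sketch of how a GRC team might gate the hand-off, the snippet below checks that a /quarterly-exports/ folder contains all six artifacts before it goes to an auditor. The file names are hypothetical; Veladon's actual export layout may differ.

    from pathlib import Path

    # Hypothetical artifact file names for the six items listed above.
    REQUIRED = [
        "ai-system-inventory.json",
        "usage-logs.jsonl",
        "human-oversight-mapping.json",
        "transparency-notice-evidence.jsonl",
        "data-handling-alignment.json",
        "evidence-pack-summary.pdf",
    ]

    def missing_artifacts(folder: str) -> list[str]:
        """Return the names of any artifacts missing from the export folder."""
        root = Path(folder)
        return [name for name in REQUIRED if not (root / name).exists()]

    missing = missing_artifacts("/quarterly-exports/2026-Q1")
    if missing:
        raise SystemExit(f"evidence pack incomplete, missing: {missing}")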

Implementation playbook · 5 phases · 500 employees in 5–10 business days

How to deploy Veladon for EU AI Act in a compressed timeline.

  1. Phase 01

    Discover

    Week 1 · Days 1–5

    Activities

    • Push Veladon browser extension via MDM (Intune / Jamf / Kandji / Chrome Enterprise) to pilot cohort of 50 employees (a Chrome Enterprise force-install sketch follows this playbook)
    • Enable SaaS connectors for Slack, Google Workspace, Microsoft 365, Notion
    • Run discovery-only mode — no redaction, just inventory of AI systems in use
    • Draft first-pass AI-system inventory artifact for EU AI Act Article 6 classification

    Artifacts produced

    • AI-system inventory v1 (50–200 entries typical)
    • Pilot deployment runbook with MDM configuration reference
    • Discovery-mode 7-day log baseline for false-positive calibration
  2. Phase 02

    Policy

    Week 2 · Days 6–10

    Activities

    • Configure 7 default detection categories (PII, government IDs, payment, PHI, customer identifiers, source code + secrets, internal codenames)
    • Add custom-dictionary entries specific to the organization (customer names, product codenames, internal project identifiers)
    • Define use-case taxonomy and human-oversight policy per use case
    • Head of GRC authors AI Acceptable Use Policy referencing Article 26 clauses

    Artifacts produced

    • Policy v1 live in production
    • Custom-dictionary registry (typically 40–120 entries)
    • AI Acceptable Use Policy document (5–8 pages)
    • Human-oversight policy mapping per use case
  3. Phase 03

    Rollout

    Week 2–3 · Days 10–15

    Activities

    • Expand MDM push to full employee base in staged rings (25% → 50% → 100%)
    • Switch from discovery-only to active redaction at ring 100%
    • Deliver transparency notice under Article 26(7) to affected workers
    • Monitor first-week redaction rates and false-positive sampling

    Artifacts produced

    • Full-deployment evidence — 100% of in-scope employee endpoints covered
    • Transparency-notice acknowledgment log
    • Week-1 redaction dashboard (typical: 3–8% of prompts trigger redaction)
  4. Phase 04

    Evidence

    Month 1 · Day 30

    Activities

    • Generate first quarterly evidence pack — JSON + signed PDF
    • Cross-walk pack against Article 26(1)/(2)/(4)/(6)/(7), Article 50, Annex IV
    • Internal GRC review for completeness
    • Optional: pre-review with Big 4 audit partner for EU AI Act readiness

    Artifacts produced

    • Quarterly evidence pack v1 (Article 26(1)/(6) usage log, Article 26(2) oversight, Article 26(4) data alignment, Article 26(7) worker notification, Article 50 transparency, Annex IV technical doc)
    • Internal GRC review memo
  5. Phase 05

    Audit

    Month 2–3

    Activities

    • Iterate on pack based on GRC or audit partner feedback
    • Finalize policy documents (AI Acceptable Use, human-oversight mapping, incident-response runbook)
    • Run tabletop exercise on the Article 73 incident-reporting 15-day clock
    • Lock the pack as the August 2, 2026 readiness baseline

    Artifacts produced

    • Quarterly pack v2 (audit-iteration)
    • Incident-response runbook for AI-related events
    • Locked readiness baseline for August 2026
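
For the Phase 01 MDM push, a Chrome Enterprise deployment typically force-installs the extension via the standard ExtensionInstallForcelist policy. The sketch below generates that policy JSON; the 32-character extension ID is a placeholder, not Veladon's real ID.

    import json

    EXTENSION_ID = "abcdefghijklmnopabcdefghijklmnop"  # placeholder, not the real ID
    UPDATE_URL = "https://clients2.google.com/service/update2/crx"

    # Chrome Enterprise force-install entries use the "id;update_url" format.
    policy = {"ExtensionInstallForcelist": [f"{EXTENSION_ID};{UPDATE_URL}"]}

    print(json.dumps(policy, indent=2))  # push as a managed policy via Intune / Jamf / Kandji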

Concrete use cases · how EU AI Act obligations show up in practice

The specific scenarios Veladon covers for EU AI Act.

PII leakage in narrative prompt

A sales rep pastes a 400-word customer escalation email into ChatGPT to get a draft response. The email contains the customer name, email, phone, account number, and a DPIA-relevant mention of a minor dependant. Veladon's narrative-aware detector identifies all five PII patterns inline, redacts them to tokenized placeholders, and logs the event with the Article 26(1)/(6) usage-log structure. The sales rep gets the ChatGPT draft without PII leaving the browser. The GRC team sees the event in the dashboard and can replay the redaction spans. The incident that would have been a GDPR Article 5 minimization breach becomes a routine redaction event in the quarterly pack.
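
A minimal sketch of the redaction mechanics, assuming simple regex detectors (Veladon's narrative-aware models are not public): matches become tokenized placeholders, and the spans are kept so the GRC team can replay the redaction.

    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
        "ACCOUNT": re.compile(r"\bACCT-\d{6,}\b"),  # hypothetical org account format
    }

    def redact(text: str):
        spans = [
            {"label": label, "start": m.start(), "end": m.end()}
            for label, rx in PATTERNS.items()
            for m in rx.finditer(text)
        ]
        # Replace right-to-left so earlier offsets stay valid (assumes no overlaps).
        for s in sorted(spans, key=lambda s: s["start"], reverse=True):
            text = text[: s["start"]] + f"[{s['label']}]" + text[s["end"] :]
        return text, spans

    clean, spans = redact("Reach Ana at ana@example.com / +1 415 555 0100 re ACCT-884201.")
    print(clean)  # Reach Ana at [EMAIL] / [PHONE] re [ACCOUNT].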

Intellectual-property exfiltration to consumer ChatGPT

An engineer pastes 300 lines of internal source code into consumer ChatGPT to debug a stack trace. The code contains an API key in a comment, a proprietary algorithm, and internal project codenames. Veladon detects the API key (secrets category), the algorithm (source-code category), and the codenames (custom-dictionary category), and blocks the prompt with a one-line explanation. The engineer redirects to the organization's ChatGPT Team tenant (covered by a data processing agreement) or removes the secret before re-submitting. The Article 26(1)/(6) log captures the block event with policy_id and rule triggers; the Annex IV technical documentation references the detection models that fired.
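
The block-versus-redact decision can be sketched as a severity rule over detection hits: secrets block outright, other categories redact and proceed. The category and rule names below are illustrative.

    BLOCKING_CATEGORIES = {"secrets"}  # API keys, tokens, credentials

    def decide(hits: list[dict]) -> dict:
        """hits: e.g. [{"category": "secrets", "rule": "api-key-in-comment"}]"""
        categories = {h["category"] for h in hits}
        rules = [h["rule"] for h in hits]
        if categories & BLOCKING_CATEGORIES:
            return {"action": "block", "rules": rules,
                    "reason": "secret detected; remove it or use the org tenant"}
        if categories:
            return {"action": "redact", "rules": rules}
        return {"action": "allow", "rules": []}

    print(decide([{"category": "secrets", "rule": "api-key-in-comment"},
                  {"category": "source_code", "rule": "proprietary-symbol"}]))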

ChatGPT shadow tenant (personal account)

A finance analyst opens a personal ChatGPT Plus account (not the org's ChatGPT Team tenant) and pastes a 50-row customer contract extract. The personal tenant has no DPA, no SCCs, and no organizational oversight. Veladon detects the non-org OAuth context (via session fingerprint and cookie-domain inspection), flags the prompt as shadow-tenant use under the AI-system inventory's tenant-classification field, redacts the customer data, and notifies the GRC team in near-real-time. The Article 26(4) data-governance log captures the event; Annex IV technical documentation references the shadow-tenant detection heuristics.
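
A sketch of the tenant-classification heuristic under simplifying assumptions: the production detector inspects session fingerprints and cookie domains, but the core decision reduces to checking the authenticated account against org-managed tenants. The session keys below are hypothetical.

    ORG_TENANT_DOMAINS = {"example.com"}  # org-managed ChatGPT Team / Enterprise domains

    def classify_tenant(session: dict) -> str:
        """session: e.g. {"account_email": ..., "workspace_id": ...}; keys assumed."""
        account = session.get("account_email", "")
        domain = account.rsplit("@", 1)[-1].lower() if "@" in account else ""
        if domain in ORG_TENANT_DOMAINS and session.get("workspace_id"):
            return "org"
        return "shadow"  # personal Plus/Free account: flag, redact, notify GRC

    print(classify_tenant({"account_email": "analyst@gmail.com"}))  # -> shadow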

Code exfiltration via desktop ChatGPT app

A developer uses the native ChatGPT macOS app (not a browser tab) to ask about a complex codebase refactor. The app submits a 1,200-character snippet containing function signatures from a proprietary library. Veladon's desktop-app loopback listener observes the outbound request, redacts the proprietary function names via custom dictionary, and logs the event. Without loopback coverage, the native-app surface would be invisible to browser-only DLP — a known gap for competitors. The Article 26(1)/(6) log captures the full event; the AI-system inventory flags the native app as a separate deployment surface.

Cross-border transfer to US LLM provider

An EU-based employee at a US-headquartered SaaS pastes a European customer's personal data into Claude (Anthropic US). The cross-border transfer triggers GDPR Articles 44–46 considerations. Veladon logs the event with transfer-mechanism metadata (Anthropic SCCs v2021 reference), data-minimization evidence (what was redacted), and DPIA-input data for the Privacy Officer's Article 35 DPIA. The Article 26(4) EU AI Act log and the GDPR Article 30 records align on the same event ID, so the quarterly pack serves both audits without duplicate evidence collection.
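
The shared-event-ID alignment can be sketched as two framework views rendered from one logged event, so both audits sample the same record. Field names are assumptions.

    # One logged cross-border transfer event.
    event = {
        "event_id": "evt-000871",
        "destination": {"provider": "anthropic", "region": "US"},
        "transfer_mechanism": "SCCs (2021 modules)",
        "redacted_categories": ["name", "email", "account_number"],
        "purpose": "customer escalation drafting",
    }

    # EU AI Act Article 26(4) view: data-governance / minimization evidence.
    ai_act_view = {"event_id": event["event_id"], "clause": "26(4)",
                   "minimized": event["redacted_categories"]}

    # GDPR Article 30 record view: recipient, safeguard, purpose.
    gdpr_view = {"event_id": event["event_id"],
                 "recipient": event["destination"],
                 "transfer_safeguard": event["transfer_mechanism"],
                 "purpose": event["purpose"]}

    assert ai_act_view["event_id"] == gdpr_view["event_id"]  # same underlying evidence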

AI-generated content disclosure under Article 50

A customer-support agent uses ChatGPT to draft a response to an EU customer, copies the output, and sends it as the agent's own reply. Under Article 50, AI-generated content used in an interaction with an affected person requires disclosure. Veladon's AI-output-handling feature prompts the agent to include the disclosure boilerplate (configurable per use case) and logs the disclosure event with output hash and recipient category. If the agent bypasses the disclosure, the event is flagged for GRC review and appears in the Article 50 transparency-events view of the quarterly pack.

Deadline calendar

EU AI Act deadlines + audit milestones.

Framework deadline

August 2, 2026

  1. August 2, 2026

    Article 26 deployer obligations general application

    All deployer obligations (usage logs, human oversight, transparency, cooperation) become enforceable. National competent authorities begin enforcement.

  2. Q1–Q2 2026

    Big 4 audit readiness window

    Big 4 audit firms open EU AI Act readiness engagements. Organizations with an August 2026 deadline should have the first evidence pack ready for partner pre-review.

  3. Q2–Q4 2026

    Enterprise customer contract renewals citing EU AI Act

    Enterprise buyers add EU AI Act compliance requirements to renewal contracts. Vendor-risk-management questionnaires expand by 40–80 AI-specific questions.

  4. Q3 2026+

    First Article 73 serious-incident reporting cycles

    With the 15-day reporting clock under Article 73, the first serious AI incidents (data exfiltration, unauthorized high-risk use) trigger mandatory reports to national competent authorities.

Why a general DLP retrofit is insufficient for EU AI Act evidence.

Classic DLP platforms (Nightfall, Polymer, Cyberhaven, Microsoft Purview) were built for GDPR and SOC 2 — their audit trails are structured for those frameworks. They log data exfiltration events, but they do not classify by AI system, they do not tag prompt-level human oversight, and they do not map evidence to EU AI Act Article 26 clauses. A Big 4 auditor opening an EU AI Act readiness review asks for an AI system inventory and prompt-level usage logs — a structure classic DLP does not produce. Veladon's logs are AI-native: every event carries the provider, the AI system, the policy version, the redaction taxonomy, and the Article 26 clause mapping by default.

Questions CISOs ask about EU AI Act

Common questions about EU AI Act and employee AI use.

Does the EU AI Act apply to a US-based 500-employee SaaS company whose employees use ChatGPT?

Yes, if the company has any EU customers, EU-based employees, or produces AI-influenced output consumed in the EU. The extraterritorial reach under Article 2 catches any company whose AI system output is used in the EU, regardless of where the company is headquartered. In practice, a 500–2,500 employee US SaaS with even one EU enterprise customer typically falls in scope. The deployer obligations under Article 26 — usage logging, human oversight, transparency — apply once you are in scope.

What is the difference between an Article 26 deployer obligation and an Article 50 transparency obligation?

Article 26 obligations are operational — they require you to maintain an AI system inventory, log usage, document human oversight, and cooperate with authorities. Article 50 obligations are disclosure-facing — they require you to inform affected persons (end-users, customers) when they are interacting with AI-generated content or an AI system. Article 26 applies to deployers of high-risk AI systems; Article 50 kicks in specifically for generative AI outputs shown to affected persons. Veladon's evidence pack covers both: Article 26 via the usage log, Article 50 via transparency-notice event logging.

What does a Big 4 audit firm actually ask for in an EU AI Act Article 26 readiness review?

A Big 4 readiness review typically opens with two requests: (1) an AI system inventory listing every AI tool your employees use, with intended purpose, provider name, and Article 6 risk classification; (2) a 90-day usage-log sample showing prompt-level events with employee, timestamp, and data-handling metadata. Common findings in a 500–2,500 employee review: no formal AI inventory exists, usage is inferrable from browser history but not structured for audit, and human oversight documentation is absent per-use-case. Veladon ships both artifacts as part of the quarterly pack — which closes those findings in week 2 instead of quarter 4.

How does Veladon evidence human oversight under Article 26(2) for employees using ChatGPT and Claude?

Veladon logs every prompt, every redaction, and every policy hit with a per-use-case tag assigned by your Compliance Officer (e.g., 'customer support drafting', 'financial analysis', 'code generation'). Each tag carries a human-oversight policy — who reviews the AI output before it is used in a consequential decision, and on what cadence. The evidence pack shows the oversight policy per use case, the per-prompt application of that policy, and exceptions (cases where policy required review and the reviewer signed off). This structure maps directly to Article 26(2) and to ISO 42001 Annex A control A.8.3.
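
A sketch of the per-use-case mapping: the oversight policy is a small table your Compliance Officer owns, and the tag attached to each prompt event is a lookup into it. The roles and cadences below are illustrative assumptions.

    # Illustrative oversight policies keyed by use-case tag.
    OVERSIGHT_POLICIES = {
        "customer-support-drafting": {"reviewer_role": "support-lead", "cadence": "per-send"},
        "financial-analysis": {"reviewer_role": "finance-manager", "cadence": "per-deliverable"},
        "code-generation": {"reviewer_role": "code-reviewer", "cadence": "per-merge"},
    }

    def oversight_tag(use_case: str) -> dict:
        """Tag attached to every prompt event; unmapped use cases escalate to GRC."""
        policy = OVERSIGHT_POLICIES.get(use_case)
        if policy is None:
            return {"use_case": use_case, "status": "unmapped-escalate"}
        return {"use_case": use_case, "status": "mapped", **policy}

    print(oversight_tag("financial-analysis"))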

What are the realistic penalty tiers under the EU AI Act for a 500–2,500 employee deployer with EU exposure?

The top-line penalty tiers are: up to €35M or 7% of global turnover for Article 5 prohibited-practice breaches; up to €15M or 3% for most deployer obligation breaches; up to €7.5M or 1% for supplying incorrect information to authorities. For a 500–2,500 employee US mid-market with EU exposure, the practical ceiling is the 3% bracket on deployer-obligation breaches. The more realistic near-term cost is commercial — enterprise customers writing EU AI Act compliance into renewal contracts in 2026, and vendor-risk-management escalations when a finding surfaces.

What is the minimum-viable evidence pack for a 500-employee deployer by August 2026?

A minimum-viable pack includes: an AI system inventory (50–200 entries typical for a first-pass mid-market discovery), 90 days of structured usage logs with employee / timestamp / AI system / prompt hash / output hash / redaction category / risk score, a written AI Acceptable Use Policy referencing Article 26 clauses, a human oversight mapping per use case, a risk categorization per AI system under Article 6, a GDPR-aligned data-handling policy, and a transparency-notice template with deployment evidence. Exported quarterly as a JSON + signed PDF bundle. Veladon generates the machine artifacts; your Head of GRC authors the policy documents once and they persist.

Tailored FAQ · EU AI Act-specific

Additional EU AI Act questions Veladon buyers ask.

Does Veladon's evidence pack satisfy a Big 4 EU AI Act Article 26 readiness review on its own?

The evidence pack covers the machine-generated artifacts a Big 4 partner asks for during an Article 26 readiness review — usage logs under 26(1) and (6), human-oversight mapping under 26(2), data-governance alignment under 26(4), worker notifications under 26(7), Article 50 transparency, and Annex IV technical documentation. It does not replace the organizational artifacts (AI Acceptable Use Policy, Statement of Applicability, risk-management system documentation) that your Head of GRC authors once and maintains. The typical readiness review closes cleanly when Veladon's pack is paired with the policy stack authored by a 2–8 person GRC team.

How does Veladon handle the Article 26(7) affected-worker notification requirement?

On first detection of a new employee using an AI system in scope, Veladon delivers a policy-notice overlay in the browser extension with the required disclosures and records the acknowledgment with timestamp and user identity. For organizations with active works councils (common in Germany, France, Netherlands), the notification flow integrates with consultation milestones via a configurable pre-notification delay. The Article 26(7) acknowledgment log is part of the quarterly evidence pack.

What's the Article 73 incident-reporting workflow with Veladon?

Veladon flags events that meet the serious-incident threshold (confirmed PII exfiltration bypass, novel-category detection triggering manual review, repeated high-severity policy violations) in near-real-time. The dashboard includes an incident-record template pre-populated with event metadata and starts the 15-day reporting clock under Article 73. Your GRC team authors the regulator-facing incident report using the template; Veladon provides the evidence appendix and the post-market monitoring dashboard excerpt.
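
A minimal sketch of the 15-day clock, assuming the deadline is anchored to the flagged incident timestamp (the Act anchors it to awareness of the incident):

    from datetime import datetime, timedelta, timezone

    REPORTING_WINDOW = timedelta(days=15)  # Article 73 serious-incident window

    def report_due(incident_ts: datetime) -> datetime:
        """Deadline for the regulator-facing report, anchored to the incident timestamp."""
        return incident_ts + REPORTING_WINDOW

    incident = datetime(2026, 9, 3, 14, 20, tzinfo=timezone.utc)
    due = report_due(incident)
    remaining = due - datetime.now(timezone.utc)
    print(f"report due {due:%Y-%m-%d %H:%M} UTC; {remaining.days} day(s) remaining")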

Can a 500-employee US company without EU employees still be caught by Article 26?

Yes, if the company has EU customers or its AI output is consumed in the EU. The extraterritorial reach under Article 2 catches US companies whose employees use AI to produce output (a contract, a customer email, a report, a marketing deliverable) that reaches the EU. A typical 500-employee US SaaS with even one EU enterprise customer usually falls in scope. The practical threshold is: do any AI-assisted deliverables from your employees reach EU persons? If yes, Article 26 applies.

What's the difference between Article 26 obligations for providers vs deployers?

Providers (model developers — OpenAI, Anthropic, Google) carry the heavier technical-documentation and conformity-assessment load under Articles 8–17. Deployers (any company whose employees use AI) carry the operational obligations under Article 26 — usage logs, human oversight, transparency, cooperation. Most 500–2,500 employee mid-market companies are deployers, not providers. Veladon's evidence pack is the deployer-side artifact; provider-side documentation comes from your LLM vendor.

How much lead time do I need to be audit-ready by August 2, 2026?

Minimum viable: 60 days end-to-end. Recommended: 90–120 days to allow for policy authorship, tabletop exercises, and Big 4 partner pre-review. Veladon's 5–10 day deployment plus 30-day first-pack generation leaves 60–90 days for GRC-side policy work. Organizations starting deployment after June 1, 2026 are at serious risk of entering August with an incomplete pack; starting by April 1, 2026 produces a comfortable readiness baseline.

Pricing context · 500–2,500 employee deployments

What Veladon typically costs for EU AI Act coverage.

At 500–2,500 employees for EU AI Act coverage, Veladon typically lands at $22–32k ACV in the mid-market tier (500–1,500 headcount) and $45–90k in the enterprise tier (1,500–2,500 headcount). The quarterly evidence pack is bundled into the base plan — no services SKU, no per-article upcharge for Article 26 / 50 / Annex IV mapping. Compare against Harmonic Security at $60–100k+ ACV with services add-ons for EU AI Act evidence, or Netskope GenAI / Zscaler GenAI at $150–380k total 3-year TCO because of the underlying SASE platform commitment. For a regulated SaaS, fintech, healthtech, or legaltech mid-market with an August 2, 2026 deadline and a GRC team of 2–8, the bundled-pack economics consistently dominate.

Need EU AI Act evidence on a compressed timeline?

Veladon's MDM push configures in about 30 minutes, full rollout completes in 5–10 business days, and the first evidence pack generates at day 30. Get the Veladon early-access brief — detailed architecture, detection taxonomy, and EU AI Act crosswalk.

Get the EU AI Act evidence-pack sample