For CISOs at 500–2,500-employee regulated companies · EU AI Act Aug 2026

See what your employees paste into ChatGPT — and redact what shouldn't leave.

Veladon's browser extension intercepts every outbound prompt to ChatGPT, Claude, and Gemini, redacts PII inline in under 50 ms, and ships the EU AI Act Article 26 / ISO 42001 / NIST AI RMF audit trail your Compliance Officer needs. Deploy across 500 employees in one day — not one quarter.

Trusted by InfoSec teams preparing for the EU AI Act — effective Aug 2, 2026.

12,847 prompts redacted across 412 employees · last 30 days

veladon.ext · chat.openai.com
policy: fin-svc-v4

Raw prompt (employee typed)

Summarize this support case for John Smith (acct # ###-##-1234) — dispute from last Tuesday about the Project Meridian rollout.

↓ 41 ms · 3 spans redacted

Reached ChatGPT as

Summarize this support case for [REDACTED_NAME] (acct # [REDACTED_SSN]) — dispute from last Tuesday about the [REDACTED_PROJECT] rollout.

logged · event_id 7f3b·a0c·04e1 · EU AI Act Art. 26 · ISO 42001 A.6.2.3
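Behind that banner, the logged event might be shaped like the following. This is a hypothetical schema sketched for illustration (the field names are ours, not Veladon's published format), populated with the values from the demo above; elided values stay elided:

```json
{
  "event_id": "7f3b·a0c·04e1",
  "timestamp": "…",
  "destination": "chat.openai.com",
  "policy_version": "fin-svc-v4",
  "latency_ms": 41,
  "spans_redacted": 3,
  "categories": ["NAME", "SSN", "PROJECT"],
  "prompt_hash_sha256": "…",
  "controls": ["EU AI Act Art. 26", "ISO 42001 A.6.2.3"]
}
```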

The three things Veladon does

What does an AI-era DLP do that a classic DLP retrofit doesn't?


median 41 ms · p99 78 ms

Inline redaction at paste

The browser extension intercepts outbound prompts to ChatGPT, Claude, Gemini, Copilot, and 50+ other LLM surfaces before a single token leaves the employee's machine.

  • PII, PHI, payment card data, source code, API keys, and your custom dictionary — all redacted in under 50 ms at the median
  • Client-side detection: the public LLM never sees the raw string, and Veladon never stores plaintext either
  • No confirmation-dialog friction — employees keep working; redactions apply transparently in the prompt buffer
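As a rough illustration of the inline pattern-redaction idea, here is a minimal Python sketch. The regexes and labels are illustrative assumptions: real client-side detection combines models and policy dictionaries, not two patterns.

```python
import re

# Illustrative patterns only — a hypothetical sketch, not Veladon's detectors.
PATTERNS = {
    "REDACTED_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "REDACTED_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, int]:
    """Replace sensitive spans in the prompt buffer before it leaves the browser."""
    spans = 0
    for label, pattern in PATTERNS.items():
        # subn returns the rewritten string and the number of spans it replaced
        prompt, n = pattern.subn(f"[{label}]", prompt)
        spans += n
    return prompt, spans
```

Applied to the demo prompt, `redact("… acct # 123-45-6789 …")` would return the string with `[REDACTED_SSN]` in place and a span count of 1.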

first pack @ day 30

Audit trail regulators actually ask for

Every prompt, every redaction, every policy hit — logged with employee, timestamp, classification, and cryptographic hash. Quarterly evidence packs ship pre-mapped to the frameworks auditors name by default.

  • EU AI Act Article 26 (deployer obligations) + Article 50 (transparency) + Annex IV (technical documentation)
  • ISO 42001 Annex A controls (A.6.2.3 usage logging, A.8.3 human oversight) + NIST AI RMF (GOVERN / MAP / MEASURE / MANAGE)
  • One-click quarterly export: JSON + signed PDF summary, ready to submit to a notified body or present to your Big 4 auditor

MDM-native · Intune, Jamf, Kandji, Workspace ONE

Deploy in 30 minutes, not 30 days

Chrome / Edge / Firefox managed-policy rollout via your existing MDM. SaaS connectors for Claude Team, ChatGPT Enterprise, Gemini Workspace, Slack AI, Notion AI, Linear, Zendesk. Zero proxy infrastructure. Zero network changes.

  • Day 1: extension covers 80% of shadow-AI risk — the browser-direct surface
  • Day 2–7: OAuth connectors cover the SaaS-embedded AI surface
  • Day 30: first evidence pack generated; compare to 60–120 days for a full endpoint-DLP rollout
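For Chrome, a forced install is a single entry in the real `ExtensionInstallForcelist` enterprise policy, pushed through your MDM's Chrome configuration profile. The extension ID below is a placeholder, not Veladon's actual ID:

```json
{
  "ExtensionInstallForcelist": [
    "abcdefghijklmnopabcdefghijklmnop;https://clients2.google.com/service/update2/crx"
  ]
}
```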

Early-access design partners — regulated mid-market InfoSec & GRC teams

Fintech · 1,200 employees
HealthTech · 600 employees
SaaS · 1,800 employees
Legal · 800 employees
Insurance services · 2,200 employees

From a design-partner CISO

“Quote placeholder — will be populated from first design-partner pilot interview. Target attribution: CISO at 1,000–2,000 emp regulated SaaS, specifically on evidence-pack-per-quarter outcome and browser-ext deploy time.”

— Name redacted pending pilot publish approval · Mid-market regulated SaaS

Early access · Q3 2026 design-partner cohort

Get the Veladon early-access brief.

Detailed technical brief for CISOs and Compliance Officers — deployment architecture, detection taxonomy, EU AI Act evidence-pack schema, and 30-minute live redaction demo. No calendar grabs. No sales pitch. Read it on your own time.

We respond to every email personally. No drip sequences, no webinars, no “nurture tracks.”

Questions CISOs ask us

What InfoSec & GRC leaders ask before shortlisting AI Governance DLP.

How is Veladon different from Harmonic Security?
Veladon targets the 500–2,500 employee regulated mid-market with a bottom-up GRC + InfoSec deployment; Harmonic Security targets the Fortune 500 CISO with a top-down named-account motion. Harmonic lands as a 6-figure enterprise contract that requires a steering committee. Veladon ships a browser extension deployable in one day by a single IT admin, bundles EU AI Act / ISO 42001 / NIST AI RMF evidence packs into the base plan (Harmonic sells those as services add-ons), and prices at a department-head approval tier — not enterprise committee. If you have 5,000+ employees and a dedicated AI-governance program, Harmonic is right. If you are 500–2,500 employees with a GRC team of 2–8, Veladon fits better.
What does Veladon redact by default in outbound prompts to ChatGPT, Claude, and Gemini?
Veladon redacts seven data categories by default: personal identifiers (names, emails, phone numbers), government IDs (SSN, passport, driver's license, EIN/ITIN), payment card numbers and bank routing/account numbers, protected health information (diagnosis codes, MRNs, treatment notes) flagged by HIPAA Safe Harbor identifiers, customer account numbers and case IDs keyed from your CRM, source code and API keys / tokens / secrets, and internal project codenames you define in a policy dictionary. Redaction happens inline in the browser before the prompt leaves the employee's machine — the public LLM never sees the original. Everything is logged with a cryptographic hash for audit replay.
Does Veladon work with ChatGPT Team, ChatGPT Enterprise, Claude for Work, and Gemini Business?
Yes. Veladon's browser extension covers ChatGPT (Free, Plus, Team, Enterprise), Claude (Free, Pro, Team, Enterprise via claude.ai), Gemini (Free, Business, Enterprise via gemini.google.com), Microsoft Copilot (web + Edge sidebar), Perplexity, and 50+ other public LLM surfaces that employees access via a browser tab. The SaaS-connector layer additionally covers AI features embedded inside Slack, Notion, Linear, Zendesk, and HubSpot. Coverage is tab-level and URL-pattern based, so even when a new consumer LLM launches, our policy team ships a coverage update within 72 hours. A ChatGPT Team or Enterprise agreement does not replace DLP — OpenAI's contract governs how OpenAI handles your data, not what your employee pastes into the prompt.
How does Veladon help with EU AI Act compliance before the August 2026 effective date?
Veladon ships evidence packs pre-mapped to EU AI Act Articles 26 (deployer obligations), 50 (transparency obligations for generative AI), and Annex IV (technical documentation) — generated on a quarterly cadence and ready to submit to a notified body or present to a regulator. The pack includes a full audit log of every employee prompt to a public LLM, the redaction actions taken, the policy version in effect at prompt time, and a risk categorization per prompt. Veladon also provides the AI system inventory, usage logs, and human oversight documentation required under Article 26 for deployers of high-risk AI systems. For your DPO, this replaces 60–120 hours of quarterly audit-evidence assembly with a one-click export.
Will Veladon slow down my employees or add friction to their workflow?
No. The browser extension adds ~40 ms of inline processing latency per prompt — imperceptible to users. Veladon is not a confirmation-dialog DLP that interrupts employees; it runs transparently and redacts inline. Your employees continue using ChatGPT, Claude, or Gemini the way they always have, and the prompt leaves the browser with redactions already applied. The only visible UI is a small status indicator showing what was redacted, plus an optional policy-notice banner the first time a new data category is detected. For high-risk data categories (configurable per policy), you can choose confirmation-prompt mode instead, but the default is silent inline redaction with full audit logging.
What happens when an employee tries to paste a customer SSN into Claude or ChatGPT?
Veladon detects the SSN pattern in the prompt buffer, redacts it to `[REDACTED_SSN]` before the prompt leaves the browser, logs the event with a cryptographic hash of the original (for audit replay, never stored in plaintext), and tags the event with the policy rule that fired, the employee ID, the timestamp, the LLM destination, and a risk score. The employee sees a small inline status: 'SSN redacted — your prompt reached Claude as: [REDACTED_SSN]'. The LLM never sees the SSN. Your SOC gets a low-severity event if this is the first occurrence per employee per week, or a high-severity event if it's a repeat pattern — configurable per policy. The evidence pack logs every such event for regulator-grade audit.
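The first-hit-versus-repeat severity rule can be sketched as follows. This is a hypothetical simplification of the configurable policy logic described above, not Veladon's implementation:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sketch of the per-employee, per-week escalation rule.
_recent_hits: dict[str, list[datetime]] = defaultdict(list)

def event_severity(employee_id: str, now: datetime, window_days: int = 7) -> str:
    """Low severity on the first SSN hit per employee within the window, high on repeats."""
    cutoff = now - timedelta(days=window_days)
    # Keep only hits still inside the rolling window, then record this one.
    hits = [t for t in _recent_hits[employee_id] if t > cutoff]
    _recent_hits[employee_id] = hits + [now]
    return "low" if not hits else "high"
```

A first hit returns "low"; a second hit by the same employee inside seven days returns "high"; after the window rolls over, the counter effectively resets.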
How long does Veladon take to deploy across 500 employees?
One day for the browser extension, one week for the full SaaS connector rollout. The browser extension deploys via your existing MDM (Intune, Jamf, Kandji, Workspace ONE) or Chrome Enterprise / Edge for Business managed-policy channel — same mechanism you already use for other managed extensions. Day 1 covers 80% of shadow-AI risk (browser-based LLM use). Day 2–7 adds OAuth connectors for Slack AI, Notion AI, Zendesk AI, Linear, and HubSpot for SaaS-embedded AI coverage. First EU AI Act / ISO 42001 evidence pack generates at day 30. Compare to 60–120 days for a full endpoint-DLP or enterprise AI-governance platform rollout.
Does Veladon store my employees' actual prompts, and where is the data processed?
Veladon stores redacted prompts (post-redaction, PII removed) plus cryptographic hashes of the pre-redaction content for audit replay — never the raw plaintext containing sensitive data. Data is processed and stored in your choice of AWS us-east-1, us-west-2, or eu-west-1 (Frankfurt) — selectable per tenant for GDPR / Schrems II compliance. Detection runs client-side in the browser extension (no plaintext leaves the employee's machine for the redaction decision itself); only the redacted prompt and metadata are transmitted to Veladon's backend for logging and evidence-pack assembly. SOC 2 Type II and ISO 27001 certifications in scope for 2026; HIPAA BAA available for regulated-health customers.
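Hash-based audit replay is standard content commitment: persist only a digest of the pre-redaction text, then recompute over a claimed original to check for a match. A generic SHA-256 sketch, not Veladon's published scheme:

```python
import hashlib

def commit(plaintext: str) -> str:
    """Store only this digest; the plaintext itself is never persisted."""
    return hashlib.sha256(plaintext.encode("utf-8")).hexdigest()

def verify(claimed_plaintext: str, stored_digest: str) -> bool:
    """Audit replay: does the claimed original match what was hashed at prompt time?"""
    return commit(claimed_plaintext) == stored_digest
```

During an audit, an auditor who is handed the original string out of band can confirm it matches the logged digest without Veladon ever having stored the plaintext.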