TRUST CENTER THESIS · #5

AI in Enterprise Security

What AI actually changes about security compliance, what it doesn't, and why the honest answer is more interesting than the hype.

Every security vendor now claims to be "AI-powered." It's the 2026 equivalent of "cloud-native" in 2016 — a label applied so broadly that it communicates nothing specific.

This publication is an attempt at honesty. AI is genuinely transforming parts of enterprise security and compliance. It's also being oversold in ways that will create real problems. The companies that distinguish between the two will build more trust than the ones that claim AI solves everything.

The AI Spectrum in Security

Not all "AI-powered" claims are equal. Here's the spectrum of how AI is actually being applied:

AI Application Spectrum in Security Compliance

  Proven & shipping:        document extraction & classification
  Emerging:                 questionnaire response drafting; continuous monitoring
  Aspirational / oversold:  autonomous remediation

Left side: AI can do this reliably today. Right side: marketing claims outpace reality.

What AI Does Well

Document extraction. Upload a SOC 2 report, pen test, or policy document. AI reads it, classifies the content, and maps findings to security controls. This used to take hours. Now it takes minutes. The accuracy is 90%+ on structured documents.
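
To make that pipeline concrete, here is a minimal sketch of the extract-and-classify step in Python. The control taxonomy, the keyword lists, and the classify() stand-in are illustrative assumptions; a production system would put a language model where classify() sits and report the model's own confidence.

```python
from dataclasses import dataclass

# Hypothetical slice of a control taxonomy; real taxonomies (SOC 2, ISO 27001)
# have hundreds of controls.
CONTROL_KEYWORDS = {
    "CC6.1 Logical access": ["access control", "least privilege", "mfa"],
    "CC7.2 Monitoring": ["siem", "alerting", "log review"],
    "CC9.2 Vendor management": ["third party", "subprocessor", "vendor"],
}

@dataclass
class Finding:
    excerpt: str
    control: str
    confidence: float  # a real system would use model-reported confidence

def classify(paragraph: str) -> list[Finding]:
    """Stand-in for the model call: keyword overlap instead of an LLM."""
    text = paragraph.lower()
    findings = []
    for control, keywords in CONTROL_KEYWORDS.items():
        hits = [k for k in keywords if k in text]
        if hits:
            findings.append(Finding(paragraph[:80], control, len(hits) / len(keywords)))
    return findings

def extract(document_text: str) -> list[Finding]:
    # Split on blank lines and classify each paragraph independently.
    paragraphs = [p.strip() for p in document_text.split("\n\n") if p.strip()]
    return [f for p in paragraphs for f in classify(p)]

report = ("Access control is enforced with MFA and least privilege.\n\n"
          "All logs feed a SIEM with continuous alerting.")
for f in extract(report):
    print(f"{f.control} ({f.confidence:.0%}): {f.excerpt}")
```

The shape is what matters: document in, per-paragraph classification out, each finding tied to a named control that a human can verify.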

What AI Does Adequately

Questionnaire drafting. Given a knowledge base of previous responses, AI can draft answers to new questions. Quality depends on the knowledge base. First-pass accuracy: 70-85%. Still requires human review. The value is time savings, not replacement.
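
A minimal sketch of that drafting loop, assuming the knowledge base is a list of previously approved question-answer pairs. Retrieval here is naive string similarity from the standard library; a real system would use embeddings and an LLM to adapt the retrieved answer rather than return it verbatim.

```python
from difflib import SequenceMatcher

# Hypothetical knowledge base of previously approved answers.
KNOWLEDGE_BASE = [
    ("Do you encrypt data at rest?",
     "Yes. All customer data is encrypted at rest with AES-256."),
    ("Do you perform annual penetration tests?",
     "Yes. An external firm performs an annual penetration test."),
]

def draft_answer(question: str, review_threshold: float = 0.75) -> dict:
    # Find the most similar previously answered question.
    best_score, best_answer = 0.0, None
    for prior_q, prior_a in KNOWLEDGE_BASE:
        score = SequenceMatcher(None, question.lower(), prior_q.lower()).ratio()
        if score > best_score:
            best_score, best_answer = score, prior_a
    return {
        "draft": best_answer,
        "confidence": round(best_score, 2),
        # Every draft gets human review; low confidence means the reviewer
        # should expect to rewrite rather than lightly edit.
        "likely_rewrite": best_score < review_threshold,
    }

print(draft_answer("Is customer data encrypted at rest?"))
```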

What AI Oversells

Autonomous security operations. AI "detects and remediates threats automatically." In practice, false positive rates make full autonomy dangerous. The best current implementations are "AI detects, human decides, AI executes."
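
The pattern is simple enough to sketch. Everything below is a stub with hypothetical names, but the control flow is the point: detection can propose, only a human can approve, and execution never runs unapproved.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    action: str        # proposed remediation
    confidence: float  # detector confidence

def detect() -> list[Alert]:
    # Stand-in for an AI detector. Real detectors emit false positives,
    # which is exactly why the approval stage below exists.
    return [Alert("auth-service", "disable credential 'svc-legacy'", 0.92)]

def human_decides(alert: Alert) -> bool:
    answer = input(f"Approve '{alert.action}' ({alert.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(alert: Alert) -> None:
    # The only stage allowed to change system state.
    print(f"Executing: {alert.action}")

for alert in detect():
    if human_decides(alert):  # no path from detect() to execute() skips this gate
        execute(alert)
```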

The Trust Architecture Question

When a vendor says "AI-powered," the evaluator should ask four questions. The answers reveal whether the AI implementation is trustworthy or theatrical:

🔒 Where does the data go?

Does the AI process documents on the vendor's servers, or does data get sent to a third-party AI provider? Is the processing ephemeral (data deleted after analysis) or persistent (data stored for training)?

Best: zero-retention processing
👥 Who approves the output?

Does AI publish content autonomously, or does a human review and approve every change? The difference between "AI-assisted" and "AI-autonomous" is the difference between a useful tool and a liability.

Best: human-in-the-loop always
📊 Is there an audit trail?

Can you see what AI proposed, what was changed, and who approved it? An AI system without an audit trail is a compliance risk, not a compliance tool.

Best: full version history with attribution
🛠 What happens when AI is wrong?

AI will make mistakes. The question is whether the system is designed to catch them. Confidence scores, verification flags, mandatory human review of low-confidence outputs — these are the markers of a mature implementation.

Best: confidence-gated publishing
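
The last three questions are mechanical enough to sketch as one gate. The threshold, field names, and log shape below are illustrative assumptions, not any particular product's API; the invariants are that no change publishes without a named approver and that every publish lands in the audit trail.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

REVIEW_THRESHOLD = 0.9  # illustrative; below this, flag for extra scrutiny

@dataclass
class Proposal:
    change: str
    confidence: float

@dataclass
class AuditEntry:
    change: str
    confidence: float
    approved_by: str
    timestamp: str

audit_log: list[AuditEntry] = []

def publish(proposal: Proposal, reviewer: str | None) -> bool:
    # Hard gate: nothing ships without a named human approver.
    if reviewer is None:
        print(f"Blocked (no approver): {proposal.change}")
        return False
    if proposal.confidence < REVIEW_THRESHOLD:
        print(f"Low confidence ({proposal.confidence:.0%}): mandatory close review")
    audit_log.append(AuditEntry(proposal.change, proposal.confidence, reviewer,
                                datetime.now(timezone.utc).isoformat()))
    print(f"Published: {proposal.change} (approved by {reviewer})")
    return True

publish(Proposal("Update pen test date to 2026-01-15", 0.97), reviewer="alice")
publish(Proposal("Mark SOC 2 Type II report as current", 0.62), reviewer=None)
```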

The Regulatory Landscape

AI in security doesn't exist in a regulatory vacuum. Several overlapping regulations are shaping how AI can and should be used:

Active — 2023

SEC Cybersecurity Disclosure Rules

Public companies must disclose material cybersecurity incidents within 4 business days. Implication: AI-generated security documentation must be accurate, because errors could trigger disclosure obligations.

Active — Jan 2025

EU DORA (Digital Operational Resilience Act)

Financial services firms must document third-party ICT risk management. AI vendors selling to financial services must demonstrate how their AI processes are governed and audited.

Phasing in — Feb 2025+

EU AI Act

Risk-based regulation of AI systems. High-risk AI (including some compliance and security applications) requires transparency, human oversight, and technical documentation. AI vendors need to prove their AI is trustworthy.

Upcoming — 2025-2026

US State Privacy Law Expansion

Texas, Virginia, Connecticut, Colorado and more are adding data processing requirements. AI systems that process personal data face new documentation and transparency obligations in each state.

The regulatory trajectory is clear: AI in security must be transparent, auditable, and human-governed. The vendors that build these properties in from the start will have an advantage over those retrofitting compliance later.

An Honest Assessment

Where AI Genuinely Helps

Document extraction and classification, framework mapping, questionnaire first drafts, and staleness monitoring: the structured, repetitive documentation work described above, always with a human check at the end.

Where AI Falls Short

Autonomous remediation, final publishing decisions, and anything that requires security judgment rather than documentation labor.

"The vendors that will win the AI security market aren't the ones with the most sophisticated models. They're the ones that are most honest about what their AI can and can't do. Enterprise buyers are sophisticated enough to detect hype — and they penalize it."

— Analysis

The Cost Equation Has Changed

The honest economic case for AI in security compliance is about cost curves, not magic:

Security Compliance Cost Comparison

Task                                Manual (2023)    AI-Assisted (2026)
Document extraction per report      $50-100          $0.50-2
Trust center initial setup          40-80 hrs        1-4 hrs
Questionnaire first draft           4 hrs            15 min + review
Monthly trust center maintenance    4-8 hrs          10 min review
Framework mapping (per control)     20-30 min        Instant
Annual security compliance team     $150-250K        $10-50K + AI

The cost reduction is real: 10-100x on most compliance documentation tasks. But the human review step remains essential. AI reduces the manual labor. It doesn't eliminate the need for judgment.

What This Means for Trust Centers

The intersection of AI and trust centers is where the most concrete, near-term value exists. Here's why:

  1. Trust centers are structured data problems. Controls, frameworks, documents, dates — this is exactly the kind of organized information AI processes well.
  2. The maintenance problem is solved by AI. The #1 reason trust centers fail is staleness. AI that monitors expirations, detects outdated content, and proposes updates converts trust centers from projects into infrastructure.
  3. The setup barrier disappears. "Enter your URL, get a trust center" is only possible with AI. Scanning publicly available security signals (SSL config, headers, tech stack) and building a populated draft is a task that would take a human consultant days; a minimal sketch of that scan follows this list.
  4. The pricing floor drops. AI replaces the human labor that forced trust center tools to charge $15K+/year. At $150/month, the unit economics work because AI does the work that used to require a dedicated customer success team.
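
Here is that sketch, using only the Python standard library. The signal list is an illustrative assumption; a real scan would cover far more than response headers, including TLS configuration, DNS records, and the detected tech stack.

```python
from urllib.request import urlopen

# Illustrative subset of public security signals to check for.
SIGNALS = {
    "strict-transport-security": "Enforces HTTPS (HSTS)",
    "content-security-policy": "Restricts script sources (CSP)",
    "x-content-type-options": "Blocks MIME-type sniffing",
}

def scan(url: str) -> dict[str, bool]:
    # Fetch the page once and record which security headers are present.
    with urlopen(url, timeout=10) as response:
        headers = {name.lower() for name, _ in response.getheaders()}
    return {label: header in headers for header, label in SIGNALS.items()}

for signal, present in scan("https://example.com").items():
    print(f"[{'x' if present else ' '}] {signal}")
```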

The Nuanced Position

We're building an AI-native trust center platform. We use AI for document extraction, trust center generation, maintenance proposals, and framework mapping. We don't use AI for final publishing decisions — every change is human-approved. We don't claim AI replaces security expertise — it replaces documentation labor. We process documents ephemerally — nothing is stored by AI providers. This is the position we'll defend under evaluation from any enterprise security team, because it's true.

The 2026-2028 Trajectory

Three predictions for AI in enterprise security compliance, grounded in current trajectories:

  1. AI-assisted compliance becomes table stakes. By 2028, any security tool that requires fully manual data entry will feel as anachronistic as a fax machine. AI document extraction and framework mapping will be baseline expectations, not premium features.
  2. The EU AI Act will create a new compliance category. Companies using AI in security applications will need to document their AI governance, training data, and accuracy metrics. This creates demand for AI-specific trust center content — and for platforms that can generate it.
  3. Human-in-the-loop becomes a competitive advantage. As AI-generated content proliferates, the vendors that maintain human review will be trusted more than fully autonomous systems. "AI proposes, human approves" will be a selling point, not a limitation.

"The most trustworthy AI companies in security will be the ones that are most transparent about their AI's limitations. That's the paradox: admitting what AI can't do builds more trust than claiming it can do everything."

— Analysis

The Thesis

Trust is proven, not claimed. AI in enterprise security is a powerful tool — but only when its capabilities and limitations are transparent. The companies that use AI honestly, with human oversight and clear audit trails, will build lasting trust. The ones that use AI as a marketing claim without substance will lose it. The thesis applies to AI itself: prove what your AI does, don't just claim it.

This completes the Trust Center Thesis series.

By Anton Lissone · Trust Center Thesis #5 · INeedTrust 2026