Corsa embeds AI across the platform - from alert triage and investigation narratives to Copilot-powered data exploration. Every AI feature is designed with the same security-first approach that governs the rest of the platform, and evaluated against our responsible AI principles.
Responsible AI Principles
Corsa’s AI capabilities are governed by a set of principles that ensure AI is used safely, transparently, and under human control.
- Human-in-the-loop - AI augments human decision-making but never replaces it. Final compliance decisions always rest with your team. AI recommendations include confidence indicators and the evidence they were based on.
- Explainability - Every AI action is transparent. Copilot shows its reasoning steps and tool calls. Alert agents explain the signals that drove their recommendation. Investigation narratives cite the specific data points used.
- No black boxes - Corsa does not use opaque scoring models. When AI produces a risk score, disposition suggestion, or narrative, the inputs and logic are visible and auditable.
- Continuous evaluation - Models and agents are regularly tested against curated benchmarks, adversarial inputs, and edge cases before deployment to production.
Privately Hosted AI Models
Corsa’s default AI configuration uses privately hosted models that do not share infrastructure with public AI services. Customer data is never sent to shared or multi-tenant AI endpoints.
- No customer data is exposed to public AI APIs during inference
- AI models are accessed through secure, private channels with no external data egress
- Model versions are pinned and updated through the same change management process as any other production deployment
Opt-In AI Features
AI capabilities in Corsa are disabled by default and enabled on an opt-in basis. Platform administrators have full control over which AI features are enabled for their organization.
| Feature | What It Does | Independently Toggleable |
|---|---|---|
| Copilot | Natural-language data exploration and insights | Yes |
| Alert Agents | AI-powered alert triage recommendations | Yes |
| Investigation Agent | Automated investigation narrative generation | Yes |
| DD Agent | Due diligence data enrichment and profiling | Yes |
- AI features can be toggled on or off at the platform level at any time
- Changes to AI feature settings take effect immediately
- Platforms that choose not to enable AI features continue to operate with full functionality - AI augments workflows but is never a dependency
This gives compliance officers and CISOs confidence that AI is only active when explicitly authorized.
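The per-feature toggles above can be pictured as a simple flags object where every switch defaults to off. This is an illustrative sketch only - the class and field names are hypothetical, not Corsa's actual configuration API:

```python
# Hypothetical per-platform AI feature switches (names illustrative,
# not Corsa's actual API). Every feature defaults to disabled; an
# administrator must opt in to each one independently.
from dataclasses import dataclass


@dataclass
class AIFeatureFlags:
    """AI feature toggles for one platform; everything starts off."""
    copilot: bool = False
    alert_agents: bool = False
    investigation_agent: bool = False
    dd_agent: bool = False

    def is_enabled(self, feature: str) -> bool:
        # Unknown feature names are treated as disabled.
        return getattr(self, feature, False)


# Admin opts in to Copilot only; all other agents stay off.
flags = AIFeatureFlags(copilot=True)
print(flags.is_enabled("copilot"))             # True
print(flags.is_enabled("investigation_agent"))  # False
```

Because each flag is independent, enabling one agent never implicitly enables another, which matches the "independently toggleable" column in the table above.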
PII & Sensitive Data Guardrails
All AI pipelines include guardrails that prevent personally identifiable information (PII) and other sensitive data from being mishandled during inference.
- Prompts sent to AI models are scanned and sanitized before inference
- Highly sensitive PII fields (government IDs, account numbers, financial credentials) are redacted or tokenized before reaching the model layer
- Context windows are scoped to the minimum data required for the task - no unnecessary data is included in prompts
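The scan-and-redact step can be sketched as a pass over the prompt that replaces sensitive values with typed placeholder tokens before anything reaches the model layer. The patterns and function below are illustrative assumptions, not Corsa's actual sanitizer:

```python
# Illustrative prompt-sanitization sketch (not Corsa's implementation):
# replace highly sensitive values with typed placeholders and record
# each redaction so guardrail activity can be logged for review.
import re

# Hypothetical patterns for sensitive identifiers.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT_NUMBER": re.compile(r"\b\d{10,16}\b"),
}


def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the sanitized prompt plus a list of redaction events."""
    events = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt, count = pattern.subn(f"[{label}_REDACTED]", prompt)
        if count:
            events.append(f"redacted {count} {label} value(s)")
    return prompt, events


clean, log = sanitize_prompt(
    "Client 123-45-6789 holds account 4111111111111111."
)
# clean == "Client [SSN_REDACTED] holds account [ACCOUNT_NUMBER_REDACTED]."
```

A real pipeline would combine pattern matching with field-level knowledge of the schema (so tokenization can be reversed customer-side where appropriate), but the shape - sanitize first, infer second, log every redaction - is the same.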
Output Guardrails
- Model responses are validated before being surfaced to users
- Outputs are checked for inadvertent PII leakage, hallucinated identifiers, and sensitive content
- Structured outputs (risk scores, alert dispositions) are validated against expected schemas to prevent malformed or dangerous responses
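Schema validation of structured outputs can be as simple as rejecting any response whose fields, types, or value ranges fall outside what the workflow expects. A minimal sketch, with assumed field names and disposition values:

```python
# Minimal output-validation sketch (field names and allowed values are
# illustrative assumptions). A malformed or out-of-range model response
# is rejected before it can reach a user or downstream workflow.
ALLOWED_DISPOSITIONS = {"escalate", "close", "monitor"}


def validate_disposition(output: dict) -> dict:
    """Raise ValueError on any malformed or dangerous agent response."""
    if set(output) != {"disposition", "risk_score", "rationale"}:
        raise ValueError("unexpected fields in model output")
    if output["disposition"] not in ALLOWED_DISPOSITIONS:
        raise ValueError(f"unknown disposition: {output['disposition']!r}")
    score = output["risk_score"]
    if not (isinstance(score, (int, float)) and 0 <= score <= 100):
        raise ValueError("risk_score out of range")
    return output


ok = validate_disposition(
    {"disposition": "escalate", "risk_score": 87, "rationale": "velocity spike"}
)
```

Validating against a closed set of dispositions means a hallucinated or adversarially induced value can never silently become an action.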
Audit Trail
- Meaningful agent operations (alert triage, investigation narratives, due diligence enrichment) are logged with context including timestamp, output, and the user who initiated the request
- Guardrail activations (redactions, blocks, sanitizations) are logged separately for security review
- AI audit logs are retained according to the platform’s data retention policy and can be exported for compliance reporting
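A structured audit record of this kind might look like the sketch below, with agent operations and guardrail activations distinguished by event type so they can be queried separately. Field names here are assumptions for illustration:

```python
# Hedged sketch of a structured AI audit record (fields illustrative):
# agent operations and guardrail activations share one log format but
# carry distinct event types so security review can filter on them.
import json
from datetime import datetime, timezone


def audit_event(event_type: str, user: str, detail: dict) -> str:
    """Serialize one audit record as a JSON log line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "alert_triage" or "guardrail.redaction"
        "user": user,              # who initiated the request
        "detail": detail,
    }
    return json.dumps(record)


line = audit_event(
    "guardrail.redaction",
    "analyst@example.com",
    {"field": "government_id", "action": "tokenized"},
)
```

JSON log lines export cleanly, which is what makes the compliance-reporting export mentioned above straightforward.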
BYOK as an AI Data Protection Layer
Corsa’s Bring Your Own Key (BYOK) encryption provides an inherent layer of protection for sensitive data in AI workflows.
- Fields encrypted with BYOK remain encrypted within Corsa’s environment - they can only be decrypted on the customer’s side using their own KMS
- Because Corsa cannot access the plaintext of BYOK-encrypted fields, these fields are never included in AI model prompts or inference pipelines
- This gives customers cryptographic assurance that their most sensitive data (government IDs, financial credentials, account numbers) cannot be exposed to any AI model - regardless of provider or configuration
- BYOK protection applies across all AI features: Copilot, Alert Agents, Investigation Agent, and DD Agent
For organizations that require the strongest guarantees around sensitive data and AI, BYOK provides a zero-trust boundary that no software-level guardrail alone can match.
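The BYOK boundary described above can be sketched as prompt assembly that simply cannot include the protected fields: inside Corsa they exist only as ciphertext, so here they are dropped entirely. Field names and the ciphertext format are illustrative, not Corsa's actual schema:

```python
# Illustrative BYOK-boundary sketch (not Corsa's actual code). Fields
# encrypted under the customer's KMS are ciphertext inside Corsa's
# environment, so prompt assembly excludes them outright - no guardrail
# logic needs to "decide" to protect them.
BYOK_FIELDS = {"government_id", "account_number"}  # hypothetical field names


def build_prompt_context(record: dict) -> dict:
    """Return only fields that are plaintext inside Corsa's environment."""
    return {k: v for k, v in record.items() if k not in BYOK_FIELDS}


record = {
    "name": "Jane Doe",
    "government_id": "enc:AQICAHj...",  # BYOK ciphertext, undecryptable by Corsa
    "alert_reason": "structuring pattern",
}
context = build_prompt_context(record)
# context == {"name": "Jane Doe", "alert_reason": "structuring pattern"}
```

The key point is that the exclusion is enforced by cryptography, not policy: even if this filter were bypassed, the model would receive only ciphertext.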
Data Retention & Training Policies
Corsa enforces strict policies to ensure customer data is never used to improve AI models.
No Training on Customer Data
Customer data is never used to train, fine-tune, or improve any AI model - whether hosted by Corsa or by a third-party provider. This is contractually guaranteed.
- This policy applies to all data types: transactions, client records, alerts, cases, investigation narratives, documents, and copilot conversations
- Corsa does not aggregate, anonymize, or pseudonymize customer data for model training or benchmarking purposes
- When third-party AI providers are involved, Corsa ensures that provider agreements prohibit training on customer data passed through the API
Data Retention
- AI interaction logs (prompts, responses, metadata) follow the same retention policies as other platform data
- Customers can configure retention periods based on their regulatory requirements
- When data is deleted, it is purged from all systems including AI-related logs, caches, and intermediate processing stores
- No AI-generated data persists beyond the configured retention window
Sub-Processor Commitments
- When Corsa’s hosted models process data, sub-processor agreements explicitly prohibit the use of customer data for model training, evaluation, or improvement
- Corsa maintains a sub-processor list (available on request) that includes any AI infrastructure providers
- Changes to AI sub-processors are communicated to customers in advance
Contractual Guarantees
- Data handling commitments are codified in Corsa’s contractual agreements
- Customers can request amendments for additional AI-specific provisions
Summary
| Capability | How It Works |
|---|---|
| Responsible AI | Human-in-the-loop, explainable outputs, no black boxes, continuous evaluation. |
| Privately hosted models | AI inference uses private models with no shared or multi-tenant endpoints. |
| Opt-in AI | Every AI feature is off by default. Admins enable them independently per platform. |
| PII guardrails | Input sanitization, output validation, and scoped context windows. |
| BYOK data protection | BYOK-encrypted fields are cryptographically excluded from AI pipelines. |
| No training on data | Customer data is never used to train or improve AI models. Contractually guaranteed. |
| Strict retention | AI logs follow platform retention policies. Deleted data is purged from all systems. |