Trust Center

Security, Privacy & Governance.

Built for regulated financial operations: strong access controls, audit-ready evidence capture, and flexible deployment options for procurement and compliance needs.

Security Architecture

Data Residency

Deploy in your private cloud, on-premises, or in our dedicated VPCs. You choose where your data lives; all processing stays within your VPC boundaries.

Immutable Audit Trail

Every action, from rule creation to break resolution, is logged to WORM (Write Once, Read Many) storage. Full reasoning traces are captured for every AI decision.
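WORM retention is a property of the underlying storage, but tamper-evidence can also be modelled in the log itself. The sketch below is a hypothetical illustration (the `AuditTrail` class and its fields are not FopsAI's actual schema): each entry hashes the previous one, so any retroactive edit breaks the chain and is detectable on verification.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained log: editing any past entry
    invalidates every later hash, making tampering detectable."""

    def __init__(self):
        self._entries = []

    def append(self, actor, action, detail):
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        record = {
            "actor": actor,
            "action": action,
            "detail": detail,
            "ts": time.time(),
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append({**record, "hash": digest})
        return digest

    def verify(self):
        """Walk the chain and recompute every hash."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

In practice the chain head would be anchored in WORM storage, so the chain and the storage layer vouch for each other.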

Zero Trust RBAC

Granular role-based access control with multi-tenant isolation. Row-level security ensures users only see what they are explicitly authorized to access.
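As a sketch of the row-level idea (names like `User` and `desks` are illustrative, not FopsAI's API): every query result is filtered by both the caller's tenant and an explicit allow-list, so a row outside either scope is simply never returned.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    user_id: str
    tenant_id: str
    desks: frozenset  # scopes this user is explicitly authorized to see

def visible_rows(user, rows):
    """Row-level security: a row is returned only if it belongs to the
    caller's tenant AND to a scope on the caller's allow-list."""
    return [
        r for r in rows
        if r["tenant_id"] == user.tenant_id and r["desk"] in user.desks
    ]

rows = [
    {"id": 1, "tenant_id": "t1", "desk": "fx"},
    {"id": 2, "tenant_id": "t1", "desk": "rates"},
    {"id": 3, "tenant_id": "t2", "desk": "fx"},
]
alice = User("alice", "t1", frozenset({"fx"}))
# alice sees only row 1: same tenant, authorized desk
```

In a real deployment this predicate lives in the database (e.g. row-level security policies), so it cannot be bypassed by application code.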

AI Privacy & Data Processing

Your data never trains models. All LLM inference runs within your environment via AWS Bedrock. Stateless calls with no data persisted by AI providers.

Data Processing Engine

✓ Polars (Rust) — in-memory processing, no persistent intermediate data

✓ Apache Arrow memory model — zero-copy data sharing

✓ Tenant-isolated processing pipelines

✓ No customer data stored in AI model context after session ends
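"Zero-copy" means consumers share one buffer instead of duplicating data. The Arrow memory model itself needs the Arrow libraries, but Python's stdlib `memoryview` demonstrates the same principle, as a loose analogy only:

```python
# A memoryview shares the underlying buffer rather than copying it,
# analogous in spirit to Arrow's zero-copy data sharing.
buf = bytearray(b"tenant-a-settlement-data")
view = memoryview(buf)      # no copy: view aliases buf's memory
window = view[0:8]          # slicing a memoryview is also zero-copy

buf[0:6] = b"TENANT"        # mutate the underlying buffer...
assert bytes(window) == b"TENANT-a"  # ...the view sees it: memory is shared
```

The operational upside is the one the list states: intermediate results never need to be serialized to disk, so there is nothing persistent to secure or purge.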

AI Inference Security

✓ AWS Bedrock — VPC-contained inference, no data leaves your environment

✓ Provider-agnostic — switch among Claude, Llama, or open-source models without code changes

✓ 3-layer guardrail system: Input, Execution, Output

✓ Sensitive data detection and anonymisation before AI calls
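A minimal sketch of the pre-call anonymisation step (the patterns and placeholder format are illustrative; a production detector would be far more thorough than two regexes):

```python
import re

# Hypothetical detectors; real deployments use tuned, audited rules.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def anonymise(text):
    """Replace detected sensitive values with typed placeholders
    before the text is sent to any model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```

Typed placeholders (`<EMAIL>`, `<IBAN>`) preserve enough structure for the model to reason about the instruction while the sensitive values never leave the trust boundary.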

Compliance Ready

FopsAI is designed to meet the rigorous standards of global financial institutions. We are currently undergoing a SOC 2 Type II audit.

SOC 2 Type II (in progress)
GDPR compliant
ISO 27001 aligned
AES-256 encryption
99.9% uptime SLA
WORM audit storage

Explainable Decisions (Glass Box)

In regulated environments, "the computer said so" is not enough. FopsAI is designed to capture evidence and reasoning so teams can operate with confidence and auditability.

Audit Evidence

✓ Who did what, and when — with trace IDs across all services

✓ Inputs, outputs, and key parameters for every AI decision

✓ Approvals for high-impact actions (Maker-Checker)
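The Maker-Checker (four-eyes) pattern can be sketched as follows; the `MakerChecker` class and its method names are illustrative, not FopsAI's actual interface:

```python
class ApprovalRequired(Exception):
    pass

class MakerChecker:
    """High-impact actions are staged by a maker and executed only
    after a *different* user approves them (four-eyes principle)."""

    def __init__(self):
        self._pending = {}
        self._next_id = 0

    def propose(self, maker, action):
        """Stage an action; nothing runs yet."""
        self._next_id += 1
        self._pending[self._next_id] = (maker, action)
        return self._next_id

    def approve(self, checker, request_id):
        """Execute only if the checker differs from the maker."""
        maker, action = self._pending[request_id]
        if checker == maker:
            raise ApprovalRequired("checker must differ from maker")
        del self._pending[request_id]
        return action()
```

Both the proposal and the approval would be written to the audit trail, giving the "who did what, and when" evidence above.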

Human-in-the-Loop

✓ Review thresholds and maker-checker patterns

✓ Policy guardrails and tool allow-lists

✓ Cost ceilings, step limits, and circuit breakers
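Cost ceilings, step limits, and circuit breakers compose naturally into a single run guard. This is a hypothetical sketch (the `RunGuard` name and thresholds are assumptions for illustration):

```python
class CircuitBreakerTripped(Exception):
    pass

class RunGuard:
    """Halts an agent run when it exceeds a step limit or a cost budget,
    so no single run can loop or spend without bound."""

    def __init__(self, max_steps, max_cost):
        self.max_steps = max_steps
        self.max_cost = max_cost
        self.steps = 0
        self.cost = 0.0

    def charge(self, step_cost):
        """Call once per agent step, before the step's side effects."""
        self.steps += 1
        self.cost += step_cost
        if self.steps > self.max_steps:
            raise CircuitBreakerTripped(f"step limit {self.max_steps} exceeded")
        if self.cost > self.max_cost:
            raise CircuitBreakerTripped(f"budget {self.max_cost} exceeded")
```

Tripping the breaker would route the run to human review rather than silently retrying.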

AI Transparency

How we approach AI in regulated operations

We focus on governance: clear controls, evidence capture, and privacy-first design. Exact configurations vary by deployment and customer requirements.

Models

✓ Provider-agnostic model access (per-tenant)

✓ Policy guardrails (allow-lists, budgets, safety controls)

✓ Tool schemas and prompts designed for auditability

✓ Automatic fallback chains across providers

Data Flow

✓ Residency controls (where data is processed)

✓ Sensitive data detection and anonymisation before AI calls

✓ No customer data used for training without explicit agreement

✓ Stateless LLM calls — no data persisted by providers

Controls

✓ Human-in-the-loop for low-confidence/high-impact actions

✓ Approvals, audit logs, and access governance

✓ Circuit breakers (step limits, tool call limits, budget caps)

✓ Self-correction loops with guardrail enforcement
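A self-correction loop with guardrail enforcement can be sketched as a bounded retry: generate, check against the guardrail, and feed the failure reason back, with an attempt limit so the loop itself cannot run away. The function names here are illustrative assumptions:

```python
def run_with_guardrail(generate, check, max_attempts=3):
    """Re-prompt with guardrail feedback until the output passes,
    bounded by max_attempts so the correction loop is itself capped."""
    feedback = None
    for _ in range(max_attempts):
        output = generate(feedback)   # model call, given prior feedback
        ok, feedback = check(output)  # guardrail verdict + reason
        if ok:
            return output
    raise RuntimeError("guardrail not satisfied within attempt limit")
```

Exhausting the attempt limit escalates to human-in-the-loop review rather than emitting an unchecked output.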

Privacy

✓ Tenant isolation and least-privilege access

✓ AWS CloudTrail + Bedrock invocation logging (7-year retention)

✓ VPC-contained inference — data never leaves your environment

✓ Deployment options for stricter requirements

Built for Global Regulatory Expectations

We don't claim "one-click compliance". We focus on the capabilities regulators and internal risk teams consistently expect: controls, evidence, access governance, retention, and operational resilience.

UK

• Client asset & reconciliation controls (e.g., FCA CASS)

• Operational resilience expectations

• Strong audit and evidence capture

EU

• Derivatives reconciliation expectations (e.g., EMIR)

• Data protection (GDPR)

• Emerging AI governance requirements

US

• Controls and auditability for regulated operations

• Vendor risk & third-party governance

• Model risk management expectations for AI usage