Pay for Performance.
No hidden implementation fees. Start small with a pilot and scale as your transaction volume grows.
Pilot
For proof-of-concept
- Up to 100k rows/mo
- AI Onboarding Agent
- AI Transform Agent
- Standard Matching
- Email Support
Growth
For scaling operations
- Up to 1M rows/mo
- AI Onboarding Agent
- AI Match Agent
- Break management & SLA tracking
- Intraday re-reconciliation
- Priority Support
Enterprise
For global organizations
- Unlimited Volume
- All AI Agents
- Private Cloud / On-Prem
- SSO & RBAC
- Self-Service Agent Studio (2026)
- Dedicated Success Manager
Common Questions
Can I deploy on my own VPC?
Yes. Enterprise plans support deployment in your private AWS/Azure VPC or on your own on-premises infrastructure for complete data sovereignty.
What is the implementation time?
Most customers are up and running in days, not weeks. The AI Onboarding Agent guides you through setup in a single conversation — from file upload to live reconciliation.
Which LLM providers do you support?
FopsAI is provider-agnostic. We support AWS Bedrock (Claude), Azure OpenAI, Groq, and open-source models. You can configure your preferred provider per tenant with automatic fallbacks.
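As an illustration only, a per-tenant provider configuration with an ordered fallback chain might be sketched like this (all type and field names here are hypothetical, not FopsAI's actual API):

```typescript
// Illustrative sketch: per-tenant LLM provider config with a fallback chain.
// All names are hypothetical and not part of any real FopsAI interface.
type Provider = "aws-bedrock" | "azure-openai" | "groq" | "open-source";

interface TenantLLMConfig {
  tenantId: string;
  preferred: Provider;   // provider tried first
  fallbacks: Provider[]; // tried in order if the preferred provider fails
}

const acmeConfig: TenantLLMConfig = {
  tenantId: "acme-corp",
  preferred: "aws-bedrock",
  fallbacks: ["azure-openai", "groq"],
};

// Returns the first provider in the chain that passes a health check,
// or undefined if none is available.
function resolveProvider(
  cfg: TenantLLMConfig,
  isHealthy: (p: Provider) => boolean
): Provider | undefined {
  return [cfg.preferred, ...cfg.fallbacks].find(isHealthy);
}

// Example: if Bedrock is unavailable, the chain falls back to Azure OpenAI.
console.log(resolveProvider(acmeConfig, (p) => p !== "aws-bedrock"));
```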
Is my data used for model training?
No. We are privacy-first: your data is never used for model training. All LLM inference runs within your VPC via AWS Bedrock, and every call is stateless, so no data is persisted by providers.