This page lists the subprocessors Kinter uses to process Customer Data and Customer Content, as defined in our agreements available at kinter.ai/legal.
## AI Model Providers

| Subprocessor Name | Purpose | Location |
| --- | --- | --- |
| Anthropic, PBC | Large language model inference for autonomous accounting agents (no training, zero retention beyond inference) | United States\* |
| OpenAI, LLC | Large language model inference for autonomous accounting agents (no training, zero retention beyond inference) | United States |
| Google LLC (including Gemini API) | Large language model inference for autonomous accounting agents (no training, zero retention beyond inference) | United States; Global (may vary by region) |

\*Anthropic models are currently available only for US data processing. Customers with data in other regions may contact privacy@kinter.ai for more information.
## Cloud Infrastructure

| Subprocessor Name | Purpose | Location |
| --- | --- | --- |
| Amazon Web Services, Inc. | Cloud infrastructure hosting (compute, storage, networking, key management) | United States; European Union (where elected) |
| Sentry (Functional Software, Inc.) | Error tracking and observability | United States |
## Product & Feature Enablement

| Subprocessor Name | Purpose | Location |
| --- | --- | --- |
| Descope, Inc. | Customer identity and access management (CIAM): authentication flows, MFA, SSO, and session management for end users and AI agents | United States; European Union (where elected) |
| Mem0, Inc. | Persistent memory layer for AI agents; storage and retrieval of contextual user data across sessions | United States |
| Finch (Finch, Inc.) | HRIS and payroll connectivity for accruals and commissions agents | United States |
| Plaid Inc. | Banking connectivity for the reconciliation agent | United States |
| Stripe, Inc. | Subscription billing and payment processing | United States |
Customers may contact privacy@kinter.ai with questions or concerns.