# AI Providers

Finora connects directly to the major AI platforms to track your API spend automatically. Pick a provider below to set up the connection.

## Available providers

| Provider                                                                        | What you'll track                                                |
| ------------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| [**OpenAI**](/billing-integrations/ai-providers/connect-to-openai.md)           | API usage across GPT-4o, o1, o3, and every OpenAI model          |
| [**Anthropic**](/billing-integrations/ai-providers/connect-to-anthropic.md)     | API usage across Claude Opus, Sonnet, and Haiku                  |
| [**AWS Bedrock**](/billing-integrations/ai-providers/connect-to-aws-bedrock.md) | Claude, Llama, Mistral and other models hosted on Amazon Bedrock |
| [**Azure AI Foundry**](/billing-integrations/ai-providers/connect-to-azure.md)  | Azure OpenAI Service and AI Foundry deployments                  |
| [**Google Cloud**](/billing-integrations/ai-providers/connect-to-gcp.md)        | Vertex AI, Gemini APIs, and other Google AI services             |
| [**Cursor**](/billing-integrations/ai-providers/connect-to-cursor.md)           | Per-event spend across your Cursor team                          |
| [**xAI / Grok**](/billing-integrations/ai-providers/connect-to-xai.md)          | Grok API spend by model and hour for your xAI team               |

## How every connection works

Every provider follows the same two-step flow:

1. **Create a credential in the provider's console** — a read-only key scoped only to billing data
2. **Paste it into Finora** under **Settings → API Keys**

Finora validates the credential, and your data starts flowing in at the next refresh.

## Read-only by design

Every credential type Finora asks for is **read-only** and **billing-scoped**. None of them can be used to invoke models, change settings, or modify resources in your provider account.

## Refresh frequency

| Your plan      | First data appears | Then refreshes   |
| -------------- | ------------------ | ---------------- |
| Trial / Growth | within 24 hours    | every 24 hours   |
| Scale          | within 1 hour      | every hour       |
| Enterprise     | within 15 minutes  | every 15 minutes |

## What's not supported (yet)

* **Self-hosted models** — anything you run on your own GPUs (Ollama, vLLM, on-prem)
* **Consumer plans** — Claude.ai or ChatGPT subscriptions; for flat-rate seats, use the [AI Subscriptions tracker](/core-features/ai-subscriptions.md)

More providers (OpenRouter, GitHub Copilot, Vercel AI, Snowflake, Databricks) are on our public roadmap.

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.finora.services/billing-integrations/ai-providers.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
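As a minimal sketch of building such a request URL (the endpoint is the one documented above; the question text itself is just an illustrative example), the question must be percent-encoded before it goes into the `ask` parameter:

```python
from urllib.parse import quote

# An example question -- any specific, self-contained,
# natural-language query works here.
question = "Which IAM permissions does the AWS Bedrock connection require?"

# Percent-encode the question so spaces and punctuation are URL-safe,
# then append it as the `ask` query parameter.
url = (
    "https://docs.finora.services/billing-integrations/ai-providers.md"
    f"?ask={quote(question)}"
)
print(url)
# A plain HTTP GET on this URL (via curl, urllib.request, or any HTTP
# client) returns the direct answer with excerpts and sources.
```

The actual GET request is omitted here so the sketch stays self-contained; any standard HTTP client can issue it against the constructed URL.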
