
Model Catalog

The LLM API supports models from four providers, managed dynamically via the model catalog.

Providers

| Provider | Models |
| --- | --- |
| OpenAI | GPT-4.1, GPT-4.1-mini, GPT-4.1-nano, o3, o3-mini, o4-mini |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5 |
| Google | Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash Lite |
| Grok | Grok 3, Grok 3 Mini, Grok 4 Fast, Grok Code Fast |

Model ID Format

Models use the provider.model format:

openai.gpt-4.1-nano
anthropic.claude-sonnet-4-6
google.gemini-2.5-flash
grok.grok-3-mini
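
An ID in this format can be split back into its provider and model parts with a one-line helper. This is a minimal client-side sketch, not part of the API; the one subtlety is that model names themselves may contain dots (e.g. gpt-4.1-nano), so only the first dot is a separator:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a provider.model ID into (provider, model)."""
    # Split on the FIRST dot only: model names such as "gpt-4.1-nano"
    # contain dots themselves, so a plain str.split(".") would be wrong.
    provider, _, model = model_id.partition(".")
    if not model:
        raise ValueError(f"not a provider.model ID: {model_id!r}")
    return provider, model

print(parse_model_id("openai.gpt-4.1-nano"))
# → ('openai', 'gpt-4.1-nano')
```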

Model Tiers

Models are grouped into tiers based on capability. Your plan determines which tiers you can access via the Zihin pool:

| Tier | Examples | Plans (pool) |
| --- | --- | --- |
| economical | GPT-4.1-nano, Haiku 4.5, Gemini Flash Lite, Grok 3 Mini | All plans |
| premium | GPT-4.1, Sonnet 4.6, Gemini Flash, Grok 4 Fast | Core, Pro, Business |
| flagship | Claude Opus 4.6, o3, Gemini 2.5 Pro | Pro, Business |
BYOK unlocks all tiers

If you configure your own provider key (BYOK), all models from that provider become available regardless of your plan. See Secrets & Provider Keys for setup.

Requesting a model outside your plan's allowed tiers returns HTTP 403:

{
  "error": "MODEL_TIER_RESTRICTED",
  "details": {
    "model_tier": "flagship",
    "plan_code": "basic",
    "allowed_tiers": ["economical", "premium"]
  }
}
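
A client can turn this error body into an actionable message for the caller. The sketch below is a hypothetical helper (the function name and message wording are not part of the API), assuming the body has the shape shown above:

```python
import json

def explain_tier_error(body: str) -> str:
    """Build an actionable message from a MODEL_TIER_RESTRICTED body.

    Hypothetical client-side helper; raises if the body is some
    other kind of error.
    """
    payload = json.loads(body)
    if payload.get("error") != "MODEL_TIER_RESTRICTED":
        raise ValueError("not a tier-restriction error body")
    d = payload["details"]
    return (
        f"Plan '{d['plan_code']}' cannot use {d['model_tier']}-tier models; "
        f"allowed tiers: {', '.join(d['allowed_tiers'])}. "
        "Choose a model from an allowed tier, or configure BYOK."
    )
```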

GET /api/llm/models

List all available models.

Authentication: Not required — Cache: 5 minutes

{
  "success": true,
  "count": 29,
  "models": [
    {
      "id": "openai.gpt-4.1-nano",
      "name": "GPT-4.1 Nano",
      "provider": "openai",
      "tier": "economical",
      "context": "1048576",
      "capabilities": ["chat", "code", "function_calling"],
      "performance": { "latency": 9, "quality": 6, "cost": 9 }
    }
  ]
}
| Field | Description |
| --- | --- |
| id | Full model ID (provider.model) |
| provider | openai, anthropic, google, grok |
| tier | economical, premium, flagship |
| context | Context window (tokens) |
| capabilities | Model capabilities |
| performance | Scores 1-10 (latency, quality, cost) |
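
Since this endpoint returns the whole catalog, tier and capability filtering can happen client-side. A minimal sketch, using a trimmed sample payload (the second model entry and its capabilities are illustrative, not taken from a live response):

```python
# Trimmed sample of a GET /api/llm/models response body.
catalog = {
    "models": [
        {"id": "openai.gpt-4.1-nano", "tier": "economical",
         "capabilities": ["chat", "code", "function_calling"]},
        {"id": "anthropic.claude-opus-4-6", "tier": "flagship",
         "capabilities": ["chat", "code"]},  # illustrative entry
    ]
}

def pick_models(catalog: dict, tiers: set[str], capability: str) -> list[str]:
    """IDs of models in an allowed tier that support `capability`."""
    return [
        m["id"]
        for m in catalog["models"]
        if m["tier"] in tiers and capability in m["capabilities"]
    ]

print(pick_models(catalog, {"economical", "premium"}, "function_calling"))
# → ['openai.gpt-4.1-nano']
```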

GET /api/llm/provider/:provider

Get models for a specific provider.

Authentication: Not required — Cache: 15 minutes

GET /api/llm/provider/openai
{
  "success": true,
  "provider": "openai",
  "supportedModels": ["gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano", "o3", "o3-mini", "o4-mini"],
  "modelCount": 6
}
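
Note that supportedModels contains bare model names, not full catalog IDs. A client can rejoin each name with the provider to recover the provider.model IDs used elsewhere in the API. A minimal sketch against a sample payload:

```python
def full_model_ids(provider_resp: dict) -> list[str]:
    """Prefix each supported model name with its provider to
    recover the provider.model IDs used by the rest of the API."""
    provider = provider_resp["provider"]
    return [f"{provider}.{name}" for name in provider_resp["supportedModels"]]

# Sample payload mirroring the response shown above (trimmed).
resp = {
    "success": True,
    "provider": "openai",
    "supportedModels": ["gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano"],
    "modelCount": 3,
}
print(full_model_ids(resp))
# → ['openai.gpt-4.1', 'openai.gpt-4.1-mini', 'openai.gpt-4.1-nano']
```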