LLM API

Unified interface for calling large language models across multiple providers through a single endpoint.

What It Does

The LLM API abstracts provider differences so you can switch between OpenAI, Anthropic, Google, Grok, and OpenRouter without changing your code. One endpoint, one format, any model.

Key Capabilities

| Capability | Description |
| --- | --- |
| Multi-Provider | OpenAI, Anthropic, Google, Grok, OpenRouter — all through one API |
| Auto-Routing | Set `model: "auto"` and let the system pick the best model for your task |
| Structured Output | Force JSON responses with schema validation |
| Multimodal | Send images alongside text — auto-converted to each provider's native format |
| Streaming | Real-time token-by-token responses via SSE |
| Global Cache | Automatic response caching for up to 40% cost reduction |
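A minimal sketch of what request bodies for two of these capabilities might look like. Only `query` and `model` appear in this page's examples; the `schema` field name used for structured output is an assumption for illustration, not a documented parameter.

```python
import json

# Auto-routing: pass "auto" and let the service pick the model.
auto_request = {"query": "Summarize this contract", "model": "auto"}

# Structured output: a JSON Schema the response must validate against.
# The field name "schema" is hypothetical; check the API reference for
# the real parameter.
structured_request = {
    "query": "Extract the parties and the effective date",
    "model": "auto",
    "schema": {
        "type": "object",
        "properties": {
            "parties": {"type": "array", "items": {"type": "string"}},
            "effective_date": {"type": "string"},
        },
        "required": ["parties", "effective_date"],
    },
}

print(json.dumps(structured_request, indent=2))
```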

Quick Example

```shell
curl -X POST https://llm.zihin.ai/api/v3/llm/public/call \
  -H "X-Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "Summarize this contract", "model": "auto"}'
```

Endpoints

| Endpoint | Auth | Description |
| --- | --- | --- |
| `POST /api/v3/llm/public/call` | API Key | LLM call (external integrations) |
| `POST /api/v3/llm/call` | JWT | LLM call (multi-tenant frontend) |
| `GET /api/v3/llm/models` | Public | List available models |
| `POST /api/v3/llm/test-connection` | JWT | Test provider connectivity |
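For the streaming capability, this page only states that responses arrive token by token over SSE. The sketch below parses a typical `data:`-prefixed SSE body; the `token` field and the `[DONE]` sentinel are assumptions borrowed from common SSE conventions, not documented behavior.

```python
import json

def parse_sse_tokens(raw_stream: str):
    """Yield decoded event payloads from an SSE response body.

    Assumes each event is a `data: <json>` line and that the stream
    ends with a `data: [DONE]` sentinel (both are assumptions here).
    """
    for line in raw_stream.splitlines():
        if line.startswith("data: "):
            payload = line[len("data: "):]
            if payload == "[DONE]":
                break
            yield json.loads(payload)

# Hypothetical stream fragment for illustration:
sample = 'data: {"token": "Hel"}\ndata: {"token": "lo"}\ndata: [DONE]\n'
tokens = [event["token"] for event in parse_sse_tokens(sample)]
# tokens == ["Hel", "lo"]
```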

Next Steps