# Build Your First Agent
Create an AI agent that answers questions using your data, accessible via API or webhook.
## What You'll Build
By the end of this guide, you'll have an agent that:
- Has a defined persona and behavior rules
- Can query a database to answer questions with real data
- Is accessible via a webhook URL for external integrations
## Prerequisites
- A Zihin account at console.zihin.ai
- An API key (see Your First Call)
## Step 1: Create the Agent
You can create an agent via the console UI, the API, or the MCP Server in your IDE.

In the console, go to Agents > New Agent:

- Name: `support-bot`
- Type: Assistant
- Model: Auto (recommended — picks the best model per query)
- System Prompt: see below
Via the API:

```bash
curl -X POST https://llm.zihin.ai/api/agents \
  -H "Authorization: Bearer YOUR_JWT" \
  -H "x-tenant-id: YOUR_TENANT_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "support-bot",
    "commercial_name": "Support Assistant",
    "bio": "Customer support agent with database access",
    "type": "assistant",
    "llm_config": { "model": "auto", "temperature": 0.7 }
  }'
```
If you have the MCP Server configured in your IDE, use the `setup-agent` guided prompt — it walks you through agent creation interactively.
### Configure the Persona

The persona defines how your agent behaves. Create a `persona_config` schema:
```json
{
  "editor_schema": {
    "persona": {
      "role": "Customer Support Agent",
      "objective": "Help customers find information about their orders and answer product questions.",
      "tone": "friendly",
      "language": "en",
      "constraints": [
        "Never invent data — always query the database first",
        "If unsure, ask the customer for clarification"
      ],
      "response_style": {
        "format": "markdown",
        "max_response_length": 1000
      }
    }
  }
}
```
Key decisions:

- `model: "auto"` lets the system pick the cheapest model that handles each query well. For a support bot, most queries are simple (routed to a small model) but complex ones get a larger model automatically.
- `constraints` are enforced rules — the LLM follows them as hard boundaries, not suggestions.
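To build intuition for what auto-routing does, here is a toy sketch in Python. It is purely illustrative — the platform's real router and its signals are not public, and the model names below are made up:

```python
def route_model(query: str) -> str:
    """Toy auto-router: short, simple queries go to a small (cheap) model;
    long or analytical queries go to a large model. Illustrative only."""
    complex_markers = ("compare", "analyze", "explain why", "summarize")
    is_complex = len(query.split()) > 30 or any(
        m in query.lower() for m in complex_markers
    )
    return "large-model" if is_complex else "small-model"

print(route_model("What are your business hours?"))  # small-model
print(route_model("Compare my last two invoices"))   # large-model
```

The practical takeaway: with `"model": "auto"` you pay large-model prices only for the queries that need it.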
## Step 2: Test the Agent
With curl:

```bash
curl -N "https://llm.zihin.ai/api/v2/agents/AGENT_ID/stream" \
  -H "X-Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "What are your business hours?"}'
```
With Node.js (18+, where `response.body` is async-iterable):

```javascript
const response = await fetch(
  'https://llm.zihin.ai/api/v2/agents/AGENT_ID/stream',
  {
    method: 'POST',
    headers: {
      'X-Api-Key': 'YOUR_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: 'What are your business hours?',
    }),
  }
);

for await (const chunk of response.body) {
  process.stdout.write(new TextDecoder().decode(chunk));
}
```
With Python:

```python
import requests

response = requests.post(
    "https://llm.zihin.ai/api/v2/agents/AGENT_ID/stream",
    headers={
        "X-Api-Key": "YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={"message": "What are your business hours?"},
    stream=True,
)
for chunk in response.iter_content(chunk_size=None):
    print(chunk.decode(), end="")
```
The response streams via SSE. Without tools, the agent answers from its training data and persona instructions.
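If you want to collect the streamed text rather than print it live, you can extract the `data:` payloads yourself. A minimal sketch, assuming plain `data:` lines — the exact event shape of the stream may differ, so inspect a real response first:

```python
def parse_sse(raw: bytes) -> list[str]:
    """Extract the payload of each `data:` line from an SSE byte stream."""
    events = []
    for line in raw.decode().splitlines():
        if line.startswith("data:"):
            events.append(line[len("data:"):].strip())
    return events

# Canned stream for illustration; a real one comes from the
# requests.post(..., stream=True) call shown above.
sample = b"data: Our business\n\ndata: hours are 9-5.\n\n"
print(parse_sse(sample))  # ['Our business', 'hours are 9-5.']
```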
## Step 3: Add a Database Tool
This is where agents become powerful — connect real data.
1. Create a database connection at Connections > Add Connection
2. Enter credentials — passwords are stored encrypted in Vault (AES-256-GCM), never in plain text
3. Create a `db_config` schema on your agent:
```json
{
  "connection_id": "CONNECTION_UUID",
  "tool_definition": {
    "name": "search_orders",
    "description": "Search customer orders by status, date range, or customer ID",
    "input_schema": {
      "type": "object",
      "properties": {
        "customer_id": { "type": "string", "description": "Customer ID" },
        "status": { "type": "string", "enum": ["pending", "shipped", "delivered"] }
      }
    }
  },
  "query_template": "SELECT * FROM orders WHERE customer_id = $1 AND status = $2 LIMIT 20",
  "parameter_mapping": ["customer_id", "status"]
}
```
Now the agent can query your database when a customer asks "Where is my order?". It decides when to use the tool based on the conversation — you don't need to code any logic.
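Conceptually, `parameter_mapping` tells the platform which tool arguments fill the `$1`, `$2` placeholders, in order. A rough sketch of that binding step (illustrative only, not the platform's actual code):

```python
def bind_parameters(mapping: list[str], tool_args: dict) -> list:
    """Map named tool arguments onto positional query parameters ($1, $2, ...)."""
    return [tool_args.get(name) for name in mapping]

# The agent extracted these arguments from the conversation:
mapping = ["customer_id", "status"]
tool_args = {"customer_id": "cust-42", "status": "shipped"}
print(bind_parameters(mapping, tool_args))  # ['cust-42', 'shipped']
```

Because the values are passed as query parameters rather than spliced into the SQL string, the template stays safe from injection.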
## Step 4: Add a Webhook Trigger
Make the agent accessible to external systems (Slack, WhatsApp, your app):
```bash
curl -X POST https://llm.zihin.ai/api/triggers \
  -H "Authorization: Bearer YOUR_JWT" \
  -H "x-tenant-id: YOUR_TENANT_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_id": "AGENT_UUID",
    "name": "Support Webhook",
    "trigger_type": "webhook",
    "trigger_config": {
      "auth": { "type": "api_key" },
      "query_extraction": { "mode": "field", "field": "message" }
    }
  }'
```
The response includes a `webhook_url` — any system can POST to it to start a conversation with your agent.
- Sync (default): The webhook waits for the agent to finish and returns the response. Good for APIs and chat interfaces.
- Async: The webhook responds immediately with 202 and delivers the result via callback. Use for channels with timeouts (WhatsApp, SMS). See Webhook Triggers.
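A sync webhook call from Python might be assembled like this sketch. The `X-Api-Key` header and the `message` field mirror the trigger config above, but check the trigger-creation response for the exact URL and auth details:

```python
def build_webhook_request(webhook_url: str, api_key: str, message: str) -> dict:
    """Assemble the pieces of a sync webhook call. The "message" key
    matches the query_extraction field configured on the trigger."""
    return {
        "url": webhook_url,
        "headers": {"X-Api-Key": api_key, "Content-Type": "application/json"},
        "json": {"message": message},
    }

req = build_webhook_request("WEBHOOK_URL", "YOUR_API_KEY", "Where is my order?")
print(req["json"])  # {'message': 'Where is my order?'}
# To send it: requests.post(req["url"], headers=req["headers"], json=req["json"])
```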
## Step 5: Publish
Publishing locks the agent configuration, creates a version snapshot, and validates all schemas:
```bash
curl -X POST https://llm.zihin.ai/api/agents/AGENT_ID/publish \
  -H "Authorization: Bearer YOUR_JWT" \
  -H "x-tenant-id: YOUR_TENANT_ID"
```
If any schema has validation errors, publishing is blocked with details on what to fix.
## Next Steps
- Agent Tools — API tools, MCP tools, and database connections
- Triggers — Webhooks, email, database events, and schedules
- Auto-Routing — How model selection works and when to override it
- Cost Optimization — Reduce costs without sacrificing quality