API Documentation
Quick Start
Get your first AI response in 3 steps.
1. Create an account
Sign up at /register. You get 500 free credits instantly.
2. Create an API key
Go to your API Keys dashboard and create a new key. Copy it — you won't see it again.
3. Make your first request
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
console.log('Credits used:', response.tchavi.credits_used);
Authentication
All API requests require a valid API key sent in the Authorization header:
Authorization: Bearer YOUR_API_KEY
API keys can be created and managed from your dashboard. Each key is tied to your account's credit balance. You can create multiple keys for different projects.
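If you are not using an SDK, you build the header yourself. A minimal sketch (the `buildHeaders` helper name is ours, not part of the API):

```javascript
// Build the headers every Tchavi request needs.
function buildHeaders(apiKey) {
  return {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`,
  };
}

// Usage (requires a real key and network access):
// const res = await fetch('https://tchavi.com/api/api/v1/chat/completions', {
//   method: 'POST',
//   headers: buildHeaders(process.env.TCHAVI_API_KEY),
//   body: JSON.stringify({
//     model: 'gpt-4o-mini',
//     messages: [{ role: 'user', content: 'Hello!' }],
//   }),
// });
```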
Keep your API keys secret. Do not share them in client-side code, public repositories, or URLs.
# .env
TCHAVI_API_KEY="sk-tch_..."
# Node.js
const client = new Tchavi({ apiKey: process.env.TCHAVI_API_KEY });
# Python
import os
client = OpenAI(api_key=os.environ["TCHAVI_API_KEY"], base_url="...")
Base URL
https://tchavi.com/api/api/v1
Tchavi is 100% compatible with the OpenAI API format. If you already use the OpenAI SDK or any OpenAI-compatible library, just change the base URL and API key — the rest of your code stays the same.
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'YOUR_API_KEY',
baseURL: 'https://tchavi.com/api/api/v1',
});
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
Chat Completions
/v1/chat/completions
Generate a chat completion from a list of messages. This is the primary endpoint for text generation with all supported models.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (e.g. "gpt-4o-mini", "claude-sonnet-4-6") |
| messages | array | Yes | Array of message objects. Each has a role (system, user, or assistant) and content. system sets the AI's behavior; user is your message; assistant is a prior AI reply. |
| temperature | number | No | Controls randomness. 0 = deterministic/focused, 1 = balanced (default), 2 = highly creative/random. |
| max_tokens | integer | No | Maximum tokens to generate |
| stream | boolean | No | Stream response as SSE. Default: false |
| top_p | number | No | Nucleus sampling parameter (0–1) |
| stop | string or string[] | No | Up to 4 stop sequences. The model stops generating when it hits one. |
| frequency_penalty | number | No | -2.0 to 2.0. Positive values penalize repeated tokens. Default: 0 |
| presence_penalty | number | No | -2.0 to 2.0. Positive values push the model toward new topics. Default: 0 |
| seed | integer | No | Reproducibility seed. Same seed + params returns similar output (best-effort). |
| response_format | object | No | { type: "json_object" } or { type: "json_schema", json_schema: ... } for structured output. Model support varies — see the model's API tab. |
| tools | array | No | Function definitions the model can call. Paired with tool_choice. Available on tool-capable models only. |
Example request
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'What is the capital of Benin?' },
],
temperature: 0.7,
});
console.log(response.choices[0].message.content);
console.log('Credits used:', response.tchavi.credits_used);
Example response
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1711234567,
"model": "gpt-4o-mini",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of Benin is Porto-Novo."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 24,
"completion_tokens": 12,
"total_tokens": 36
},
"tchavi": {
"credits_used": 2,
"credits_remaining": 498,
"model_tier": "budget"
}
}
Streaming
Set stream: true to receive the response token-by-token as Server-Sent Events (SSE). This lets you display text as it arrives rather than waiting for the full response.
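If you consume the stream without the SDK, each event arrives as a `data: {json}` line and the stream ends with `data: [DONE]` (the standard OpenAI-compatible format this API advertises). A minimal parser sketch, assuming that format:

```javascript
// Extract content deltas from a raw SSE text buffer.
// Assumes OpenAI-style `data: {...}` lines ending with `data: [DONE]`.
function extractDeltas(sseText) {
  const out = [];
  for (const line of sseText.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break;
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) out.push(delta);
  }
  return out;
}
```

In practice you would feed this decoder incrementally as chunks arrive; the SDK example below does the same thing for you.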
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: process.env.TCHAVI_API_KEY });
const stream = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Tell me a short story.' }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
Image Generation
/v1/images/generations
Generate images from text prompts across all supported image models — Nano Banana, Imagen, GPT Image, DALL·E, and more. The same endpoint also handles image editing when you pass reference images (aliased as POST /v1/images/edits).
Common request body
The fields below are shared by every image model. Model-specific options — size, aspect_ratio, resolution, quality, output_format, negative_prompt, seed, background, etc. — depend on the family. Open the model on /models and switch to the API tab for the full parameter reference.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Any image model ID — e.g. nano-banana-pro, imagen-4, gpt-image-1, dall-e-3. |
| prompt | string | Yes | Text description of the image to generate. |
| n | integer | No | Number of images (1–4). Default: 1 |
| response_format | string | No | b64_json (default — base64 in response) or url (hosted URL). Model support varies. |
| images | string[] | No | Base64-encoded reference images for editing. The max number accepted depends on the model (e.g. 14 for Nano Banana, 16 for GPT Image). |
| user | string | No | Optional end-user identifier for abuse monitoring. |
Example
Swap model for any image model ID — parameters beyond those shown below must match that model's API tab.
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
const result = await client.images.generations.create({
model: 'YOUR_MODEL_ID',
prompt: 'A colorful parrot on a branch, digital art',
});
console.log(result.data[0].b64_json);
console.log('Credits used:', result.tchavi.credits_used);
The response contains base64-encoded image data in data[0].b64_json. Here's how to use it:
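In Node, the base64 payload decodes to a Buffer that you can write straight to disk (a sketch; the helper name is ours):

```javascript
// Decode the base64 image payload into raw bytes.
import { writeFileSync } from 'fs';

function decodeBase64Image(b64) {
  return Buffer.from(b64, 'base64');
}

// Usage with a generation result:
// writeFileSync('parrot.png', decodeBase64Image(result.data[0].b64_json));
```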
// Display the image in a browser
const img = document.createElement('img');
img.src = `data:image/png;base64,${data.data[0].b64_json}`;
document.body.appendChild(img);
Audio
Tchavi supports two audio endpoints: text-to-speech (TTS) for generating audio from text, and transcription (Whisper) for converting audio files to text.
Text-to-Speech
/v1/audio/speech
Converts text to spoken audio. Returns raw audio bytes.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | "tts-1" (faster) or "tts-1-hd" (higher quality) |
| input | string | Yes | The text to convert to speech (max 4096 characters) |
| voice | string | Yes | alloy, ash, ballad, cedar, coral, echo, fable, marin, nova, onyx, sage, shimmer |
| response_format | string | No | mp3, opus, aac, flac, wav, pcm. Default: mp3 |
| speed | number | No | Playback speed 0.25–4.0. Default: 1.0 |
import Tchavi from '@tchavi/sdk';
import { writeFileSync } from 'fs';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
const response = await client.audio.speech.create({
model: 'tts-1',
input: 'Tchavi is the best AI API gateway in Africa.',
voice: 'nova',
response_format: 'mp3',
});
const buffer = Buffer.from(await response.arrayBuffer());
writeFileSync('speech.mp3', buffer);
Transcription (Whisper)
/v1/audio/transcriptions
Transcribes audio files to text. Send as multipart/form-data.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | "whisper-1" |
| file | file | Yes | Audio file (mp3, wav, m4a, webm, ogg…). Max 25MB |
| language | string | No | ISO-639-1 code (e.g. "fr", "en"). Auto-detected if omitted |
| response_format | string | No | json, text, srt, vtt, verbose_json. Default: json |
| prompt | string | No | Optional text to guide the model's style or continue a previous segment. Must match the audio language. |
| temperature | number | No | Sampling temperature 0–1. Higher values yield more varied transcriptions. Default: 0 |
import Tchavi from '@tchavi/sdk';
import { createReadStream } from 'fs';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
const result = await client.audio.transcriptions.create({
model: 'whisper-1',
file: createReadStream('audio.mp3'),
language: 'fr',
});
console.log(result.text);
console.log('Duration:', result.tchavi.duration_minutes, 'min');
console.log('Credits used:', result.tchavi.credits_used);
Embeddings
/v1/embeddings
Embeddings convert text into a numeric vector that captures its semantic meaning. Use them for semantic search (find content by meaning, not keywords), clustering similar documents, recommendations, and RAG (retrieval-augmented generation) pipelines.
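Once you have two vectors, the usual way to compare them is cosine similarity (1 = same direction, 0 = unrelated, -1 = opposite). A self-contained sketch:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// For semantic search: embed the query, then rank documents by
// cosineSimilarity(queryVector, docVector), highest first.
```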
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
const response = await client.embeddings.create({
model: 'text-embedding-3-small',
input: 'Tchavi is the best AI API gateway in Africa.',
});
console.log(response.data[0].embedding);
Video Generation
/v1/videos/generations
Video generation runs asynchronously: the endpoint returns a 202 with a job.id immediately, and the video renders in the background. Poll GET /v1/jobs/:id (see Async Jobs) until status becomes completed or failed. The SDK's createAndWait(...) helper does both in one call.
Billing is per second of output, resolution-dependent: 480p = 260 cr/sec · 720p = 580 cr/sec. A typical 10-second 480p clip costs 2,600 credits. If a job fails after billing (upstream error, timeout), the charged credits are automatically refunded.
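The billing rule above is easy to estimate up front. A quick cost sketch using the published per-second rates (the helper name is ours):

```javascript
// Estimate video cost from the published rates:
// 480p = 260 cr/sec, 720p = 580 cr/sec.
const VIDEO_RATES = { '480p': 260, '720p': 580 };

function videoCredits(durationSeconds, resolution = '480p') {
  // duration: -1 ("auto") is billed as 10 seconds per the docs.
  const billed = durationSeconds === -1 ? 10 : durationSeconds;
  return billed * VIDEO_RATES[resolution];
}
```

For example, a 10-second 480p clip comes out to 2,600 credits, matching the worked figure above.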
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Currently seedance-2 (ByteDance Seedance 2.0 via Replicate). |
| prompt | string | Yes | Text description of the scene and motion. |
| duration | integer | No | Seconds, 1–15. Use -1 for auto (billed at 10s). Default 5. |
| resolution | string | No | "480p" or "720p". Default "480p". |
| aspect_ratio | string | No | 16:9, 9:16, 1:1, 4:3, 3:4, or adaptive. Default "16:9". |
| generate_audio | boolean | No | Synthesize a soundtrack (dialogue, SFX, music). Default true. |
| seed | integer | No | Fixes output for reproducibility. |
| image_url | string | No | First-frame reference (HTTP URL). Enables image-to-video. |
| last_frame_image_url | string | No | Last-frame target. Requires image_url. |
| reference_images | string[] | No | Up to 9 URLs. Mutually exclusive with image_url. Referenced in the prompt as [Image1], [Image2], … |
| reference_videos | string[] | No | Up to 3 URLs (combined ≤ 15 s). Referenced as [Video1], … |
| reference_audios | string[] | No | Up to 3 URLs. Requires image_url or reference_images. Referenced as [Audio1], … |
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
// One-liner: submit + poll until completed/failed
const job = await client.videos.generations.createAndWait({
model: 'seedance-2',
prompt: 'A cinematic shot of the Cotonou Amazone statue at golden hour',
duration: 5,
resolution: '480p',
aspect_ratio: '9:16',
generate_audio: true,
});
if (job.status === 'completed') {
console.log('Video URL:', job.output?.video_url);
console.log('Credits used:', job.tchavi?.credits_used);
} else {
console.error('Failed:', job.error?.message);
}
Async Jobs
Some endpoints — currently video generation — run asynchronously: they respond immediately with a job id, and you track completion via the endpoints below.
Retrieve a job
/v1/jobs/:id
Returns the full job record. The payload shape depends on status:
- pending / processing — no output yet; tchavi.credits_used is set.
- completed — output.video_url, output.duration, output.resolution, output.file_size_mb, output.url_expires_at.
- failed — error.message, error.code, and tchavi.credits_refunded (the charged credits are returned automatically).
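If you poll manually rather than using createAndWait, a generic loop works. A sketch (the interval and timeout defaults are our choice, not API requirements):

```javascript
// Poll an async job until it reaches a terminal status.
// getJob is any async function returning { status, ... },
// e.g. () => client.jobs.retrieve('job_abc123').
async function pollUntilDone(getJob, { intervalMs = 5000, timeoutMs = 600000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const job = await getJob();
    if (job.status === 'completed' || job.status === 'failed') return job;
    if (Date.now() > deadline) throw new Error('Polling timed out');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```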
List jobs
/v1/jobs
Lists the caller's jobs, most recent first. Query params: status (filter), limit (1–50, default 20), offset (default 0).
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
// Retrieve a specific job
const job = await client.jobs.retrieve('job_abc123');
console.log(job.status, job.output?.video_url);
// List the 10 most recent completed jobs
const { data } = await client.jobs.list({ status: 'completed', limit: 10 });
for (const j of data) {
console.log(j.created_at, j.model, j.output?.video_url);
}
Models
Tchavi gives you access to 40+ AI models from OpenAI, Anthropic, Google, and more — all through a single API. Models are organized into budget groups:
| Model | Provider | Type | Credits |
|---|---|---|---|
| GPT-5 | OpenAI | chat | 37 cr/req |
| Claude Opus 4.7 | Anthropic | chat | 98 cr/req |
| Claude Sonnet 4.6 | Anthropic | chat | 59 cr/req |
| DeepSeek Chat | DeepSeek | chat | 2 cr/req |
| GPT Image 1.5 | OpenAI | image | 33 cr/image |
| Nano Banana Pro | Google | image | 223 cr/image |
| Seedance 2.0 | ByteDance | video | 260 cr/sec |
| Whisper | OpenAI | audio | 20 cr/min |
See all available models on the Models page. New models are added within 48 hours of release.
Credits & Billing
Tchavi uses a credit-based billing system. Each API request consumes credits based on the model used and the number of tokens processed.
How credits are calculated
- Chat models: Credits = (input_tokens / 1000 × input_rate) + (output_tokens / 1000 × output_rate), with per-model rates quoted per 1K tokens
- Image models: Flat credit cost per image based on resolution
- TTS (text-to-speech): Credits per 1K characters of input text
- Transcription (Whisper): Credits per minute of audio
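The chat formula above can be sketched directly. The 1 cr / 2 cr rates here are illustrative Budget-tier values; check the Models table for each model's real rates. Fractional costs round up to a whole credit, per the worked example in this section:

```javascript
// Chat credit cost: rates are per 1K tokens; result rounds up.
function chatCredits(inputTokens, outputTokens, inRatePer1K, outRatePer1K) {
  const raw = (inputTokens / 1000) * inRatePer1K + (outputTokens / 1000) * outRatePer1K;
  return Math.ceil(raw);
}

// 500 input + 200 output at 1 cr/1K in, 2 cr/1K out:
// 0.5 + 0.4 = 0.9, rounded up to 1 credit.
```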
Response headers
Every API response includes metadata headers:
| Header | Description |
|---|---|
| X-Credits-Used | Credits consumed by this request |
| X-Credits-Remaining | Your current credit balance |
| X-RateLimit-RPM-Limit | Your requests-per-minute limit |
| X-RateLimit-RPM-Remaining | Requests remaining in the current minute |
| X-RateLimit-TPM-Limit | Your tokens-per-minute limit |
| X-RateLimit-TPM-Remaining | Token budget remaining in the current minute |
| X-Request-Id | Unique request ID for support/debugging |
| Retry-After | Seconds to wait before retrying (on 429 responses) |
A gpt-4o-mini request with 500 input tokens + 200 output tokens at the Budget tier (e.g. 1 cr/1K input, 2 cr/1K output) costs: (500/1000 × 1) + (200/1000 × 2) = 0.9 credits → rounded up to 1 credit. The exact rate for each model is shown in the Models table.
Recharging
Buy credit packs from your billing dashboard using Wave, Orange Money, MTN MoMo, and 30+ mobile money operators. Credits are added instantly after payment.
Rate Limits
Tchavi is pay-as-you-go: every user can call every model, and credits are the natural gate. Rate limits depend on your account level, which is unlocked automatically based on your lifetime spend on the platform — there is no subscription. Two independent limits apply per user per minute:
- RPM — maximum number of requests per minute.
- TPM — maximum tokens processed per minute (input + output). For TTS, each character counts as 1 token. For Whisper, each billed minute counts as 1,000 tokens.
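The TPM accounting rules above translate into a small helper (a sketch; the function name and `kind` labels are ours):

```javascript
// Convert usage on each modality into its TPM token equivalent.
function tpmTokens(kind, amount) {
  switch (kind) {
    case 'chat':
      return amount; // amount = input + output tokens
    case 'tts':
      return amount; // amount = characters (1 char = 1 token)
    case 'whisper':
      return amount * 1000; // amount = billed minutes (1 min = 1,000 tokens)
    default:
      throw new Error(`unknown kind: ${kind}`);
  }
}
```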
| Account level | Unlocked at | RPM | TPM | Max API keys |
|---|---|---|---|---|
| Free | Default | 10 req/min | 100,000 | 1 |
| Builder | First top-up | 30 req/min | 500,000 | 3 |
| Growth | 10,000 FCFA lifetime | 120 req/min | 1,000,000 | 10 |
| Pro | 50,000 FCFA lifetime | 300 req/min | 3,000,000 | Unlimited |
A global IP-based limit of 500 req/min also applies across all users sharing the same IP address.
When a limit is exceeded you receive a 429 response with a Retry-After header indicating how many seconds to wait before retrying. Your current level, RPM budget, and TPM budget are always visible in the X-RateLimit-* response headers (see Credits & Billing).
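A client that honors Retry-After can recover from 429s automatically. A sketch (the helper name, attempt count, and fallback delay are our choices):

```javascript
// Retry a request on 429, waiting the number of seconds the
// Retry-After header asks for (1 second if the header is absent).
// doRequest is any async function returning a fetch-style Response.
async function withRetry(doRequest, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    const res = await doRequest();
    if (res.status !== 429 || attempt >= maxAttempts) return res;
    const retryAfterSec = Number(res.headers.get('Retry-After')) || 1;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSec * 1000));
  }
}
```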
Error Handling
Tchavi returns standard HTTP status codes. Errors include a JSON body:
{
"error": {
"code": "insufficient_credits",
"message": "You don't have enough credits for this request.",
"status": 402
}
}
Common error codes
| Status | Code | Description |
|---|---|---|
| 401 | invalid_api_key | Missing or invalid API key |
| 402 | insufficient_credits | Not enough credits — recharge to continue |
| 403 | model_not_allowed | Your account tier doesn't include this model |
| 429 | rate_limit_exceeded | RPM limit reached — check Retry-After header |
| 429 | user_rate_limit_exceeded | Per-user RPM limit reached — raise your account level to increase |
| 429 | tpm_rate_limit_exceeded | Token-per-minute limit reached — wait before retrying |
| 500 | internal_error | Server error — retry or contact support |
| 502 | upstream_error | AI provider is temporarily unavailable |
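A practical way to read this table: some errors are transient and worth retrying, others need user action first. A classifier sketch (the split is our reading of the table above):

```javascript
// Transient errors (rate limits, server/upstream failures) can be
// retried; 401/402/403 need a fix first (key, credits, or model choice).
function isRetryable(status) {
  return status === 429 || status >= 500;
}
```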
SDKs
@tchavi/sdk (recommended)
Our official SDK wraps the API with type-safe methods and credit tracking.
npm install @tchavi/sdk
import Tchavi from '@tchavi/sdk';
const client = new Tchavi({ apiKey: 'YOUR_API_KEY' });
// Chat
const chat = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }],
});
// Images
const images = await client.images.generations.create({
model: 'nano-banana-pro',
prompt: 'A futuristic Cotonou skyline at dusk',
});
// Audio — TTS
const tts = await client.audio.speech.create({
model: 'tts-1',
input: 'Welcome to Tchavi',
voice: 'nova',
});
// Video — async; createAndWait polls until completed/failed
const video = await client.videos.generations.createAndWait({
model: 'seedance-2',
prompt: 'A cinematic shot at golden hour',
duration: 5,
resolution: '480p',
});
// Retrieve / list async jobs
const job = await client.jobs.retrieve(video.id);
const { data } = await client.jobs.list({ status: 'completed', limit: 10 });
OpenAI SDK (drop-in)
Already using the OpenAI Python or Node.js SDK? Just change the base URL:
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://tchavi.com/api/api/v1",
)
# Use it exactly like OpenAI
response = client.chat.completions.create(
model="claude-sonnet-4-6",
messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'YOUR_API_KEY',
baseURL: 'https://tchavi.com/api/api/v1',
});
const response = await client.chat.completions.create({
model: 'claude-sonnet-4-6',
messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
cURL
No SDK needed — use standard HTTP requests:
curl -X POST https://tchavi.com/api/api/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [{"role": "user", "content": "Hello!"}]
}'