Developer Documentation


SentX API

OpenAI-compatible chat completions with vision and document understanding. Streaming. Pay-per-token, top up with Stripe.

Base URL: https://brain.devs.group/v1

Drop-in: works with any OpenAI SDK (Python, JS, curl) and every OpenAI-compatible client.


1. Get a key

  1. Sign in to sentx.ai → sidebar → API.
  2. Click + Create new key, give it a name. Copy it now — the full key is shown exactly once.
  3. Add credits: + Top Up, pick $5–$1000, complete Stripe checkout.

Keys start with sk-. Max 25 active keys per account.


2. First call

Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_KEY",
    base_url="https://brain.devs.group/v1",
)

r = client.chat.completions.create(
    model="sentx",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(r.choices[0].message.content)
print(r.usage)
```

curl

```bash
curl https://brain.devs.group/v1/chat/completions \
  -H "Authorization: Bearer $SENTX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sentx",
    "messages": [{"role": "user", "content": "What is the capital of France?"}]
  }'
```

3. Streaming

```python
stream = client.chat.completions.create(
    model="sentx",
    messages=[{"role": "user", "content": "Explain black holes in 2 paragraphs"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

The final chunk carries usage (OpenAI-standard). Send request header X-Include-Cost: 1 to also receive usage.cost_cents.
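The streaming loop above can be packaged as a small helper that accumulates the text and keeps the usage object from the final chunk. A minimal sketch; `drain` is a hypothetical name, and it assumes chunk objects shaped like the OpenAI SDK's `chat.completion.chunk` (with `choices[0].delta.content` and an optional `usage` attribute):

```python
def drain(stream):
    """Collect streamed delta text; keep the usage from the final chunk.

    Per the note above, only the last chat.completion.chunk carries usage,
    so we simply remember the most recent non-None usage we see.
    """
    parts, usage = [], None
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
        if getattr(chunk, "usage", None) is not None:
            usage = chunk.usage
    return "".join(parts), usage
```

Usage: `text, usage = drain(client.chat.completions.create(..., stream=True))`.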


4. System messages

You can influence tone/style with a role="system" message. Safety rules always take precedence.

```python
client.chat.completions.create(
    model="sentx",
    messages=[
        {"role": "system", "content": "Reply in under 200 chars. Be concise."},
        {"role": "user", "content": "Thoughts on index funds?"},
    ],
)
```

System messages are capped at 2,000 characters.


5. Attachments

Three ways to attach an image or document.

A. Inline base64 (images, ≤10 MB)

```python
import base64

with open("chart.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

client.chat.completions.create(
    model="sentx",
    messages=[{"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        {"type": "text", "text": "What does this chart show?"},
    ]}],
)
```

B. Files API (any size, reusable across turns)

Upload once, reference by file_id:

```python
f = client.files.create(file=open("report.pdf", "rb"), purpose="assistants")

client.chat.completions.create(
    model="sentx",
    messages=[{"role": "user", "content": [
        {"type": "input_file", "file_id": f.id},
        {"type": "text", "text": "Summarize this PDF in 3 bullets."},
    ]}],
)
```

Supported: png, jpeg, webp, gif, mp4, webm, mov, pdf, docx, xlsx, csv, txt, md.

C. Remote URLs — not supported

http(s):// URLs are rejected. Use inline base64 or the Files API.

Per-file cap: 25 MB. Cumulative cap: 500 MB. Retention: 30 days. Max 100 live files.
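Since remote URLs are rejected, a client can fetch the bytes itself and inline them. A minimal sketch using only the Python standard library; `to_data_url` and `inline_remote` are hypothetical helper names, the 10 MB check mirrors the inline base64 cap from section A, and MIME detection via `mimetypes` is a convenience assumption (pass the type explicitly if you know it):

```python
import base64
import mimetypes
import urllib.request

MAX_INLINE_BYTES = 10 * 1024 * 1024  # inline base64 cap (section A)

def to_data_url(data: bytes, mime: str) -> str:
    """Encode raw bytes as a data: URL suitable for image_url content parts."""
    if len(data) > MAX_INLINE_BYTES:
        raise ValueError("file exceeds the 10 MB inline cap")
    return f"data:{mime};base64," + base64.b64encode(data).decode()

def inline_remote(url: str) -> str:
    """Download a remote file and return it as an inline data: URL."""
    mime = mimetypes.guess_type(url)[0] or "application/octet-stream"
    with urllib.request.urlopen(url) as resp:
        return to_data_url(resp.read(), mime)
```

Larger or reusable files belong in the Files API instead.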


6. Pricing

          Cents per 1M tokens    USD per 1M tokens
  Input   300                    $3.00
  Output  1000                   $10.00
  • Minimum 1 cent per billable request.
  • Maximum $5.00 per single request.
  • Costs are rounded half up to the nearest cent.

The response usage block reports exact token counts. With the X-Include-Cost: 1 header, it also includes cost_cents.
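The billing arithmetic above can be sketched as a small function: 300¢ per 1M input tokens, 1000¢ per 1M output tokens, rounded half up, then clamped to the 1¢ minimum and $5.00 maximum. The exact order of rounding versus clamping on the server is an assumption; treat this as an estimate, and trust the returned cost_cents:

```python
from decimal import Decimal, ROUND_HALF_UP

def cost_cents(input_tokens: int, output_tokens: int) -> int:
    """Estimate the cost of one request in cents, per the pricing table."""
    raw = (Decimal(input_tokens) * 300 + Decimal(output_tokens) * 1000) / Decimal(1_000_000)
    cents = int(raw.quantize(Decimal("1"), rounding=ROUND_HALF_UP))
    return max(1, min(cents, 500))  # 1 cent minimum, $5.00 maximum
```

For example, 10,000 input tokens plus 2,000 output tokens comes to 5 cents.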

Credits never expire. No monthly minimum.


7. Rate limits

  • 60 requests / minute per API key.
  • 5 concurrent streams per API key.
  • 5 API-key creations / hour per account.
  • 25 active API keys per account.

Exceeding any limit returns 429 rate_limit_exceeded.
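A generic backoff wrapper handles the 429 case; this is a sketch, with `with_backoff` as a hypothetical helper name and arbitrary delay constants. With the OpenAI Python SDK, pair it with `RateLimitError` (raised on HTTP 429):

```python
import time

def with_backoff(call, should_retry, retries=5, base=1.0):
    """Run `call`; when `should_retry(exc)` accepts the raised exception,
    sleep exponentially longer (base, 2*base, 4*base, ...) and try again.
    The last attempt, or a non-retryable error, re-raises."""
    for attempt in range(retries):
        try:
            return call()
        except Exception as exc:
            if attempt == retries - 1 or not should_retry(exc):
                raise
            time.sleep(base * 2 ** attempt)
```

Usage with the SDK might look like `with_backoff(lambda: client.chat.completions.create(...), lambda e: isinstance(e, RateLimitError))`.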


8. Errors

All errors:

{"error": {"message": "...", "type": "...", "code": "..."}}
json
  HTTP  code                    Meaning
  401   invalid_api_key         Key not recognized, revoked, or wrong format.
  402   insufficient_balance    Balance too low. Top up at sentx.ai.
  400   unsupported_url_scheme  http(s):// URL in content. Use data: or Files API.
  400   invalid_file_format     Upload failed format check.
  409   max_keys_reached        Revoke an old key.
  422   av_hit                  File failed antivirus scan.
  429   rate_limit_exceeded     Slow down.
  500   internal_error          Retry with exponential backoff.

Stock OpenAI SDKs surface 402 as APIStatusError/PermissionDeniedError; branch on err.status_code.
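The table above maps naturally to a client-side dispatch. A sketch; the action strings and the `explain` helper are hypothetical, and pairing it with `APIStatusError.status_code` assumes a current OpenAI Python SDK:

```python
# Client-side actions for each HTTP status in the error table above.
ACTIONS = {
    401: "check or rotate the API key",
    402: "top up credits at sentx.ai",
    400: "fix the request (bad URL scheme or file format)",
    409: "revoke an old key before creating another",
    422: "file failed the antivirus scan; upload a clean file",
    429: "back off and retry",
    500: "retry with exponential backoff",
}

def explain(status_code: int) -> str:
    return ACTIONS.get(status_code, "unexpected status; inspect the error body")
```

Typical use: `except APIStatusError as err: log.warning(explain(err.status_code))`.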


9. Files API

```text
POST   /v1/files        multipart (file, purpose) → {id, bytes, expires_at, ...}
GET    /v1/files        list your files
GET    /v1/files/{id}   metadata
DELETE /v1/files/{id}   delete
```

Uploads with identical bytes (matching SHA-256) are deduplicated automatically. The id is a stable reference, valid until deletion or retention expiry.
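To stay under the 100-live-file cap, old files can be pruned oldest-first. A sketch of the selection logic; `ids_to_prune` is a hypothetical helper, and it assumes file objects expose `id` and a sortable `created_at`, as in the standard OpenAI SDK files surface:

```python
def ids_to_prune(files, keep=90):
    """Return the ids of the oldest files beyond the newest `keep`,
    oldest first, ready to pass to DELETE /v1/files/{id}."""
    ordered = sorted(files, key=lambda f: f.created_at)
    return [f.id for f in ordered[:-keep]] if len(ordered) > keep else []
```

With the SDK: `for fid in ids_to_prune(client.files.list()): client.files.delete(fid)`.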


10. Support


Integrator notes

  • Model name in requests: use "sentx". Any model string is accepted; the server ignores it.
  • usage.cost_cents is an optional extension inside the standard usage block. Set request header X-Include-Cost: 1 to receive it. Standard OpenAI clients ignore the extra field.
  • SSE framing is standard: data: {chunk}\n\n events terminated by data: [DONE]\n\n. The final chat.completion.chunk carries usage.
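For clients not using an SDK, the SSE framing described above can be parsed with a few lines. A minimal sketch; `parse_sse` is a hypothetical name, and it ignores SSE fields other than `data:` (sufficient for this stream, an assumption for others):

```python
import json

def parse_sse(lines):
    """Yield decoded chunk dicts from an iterable of SSE lines,
    stopping at the `data: [DONE]` terminator."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank separator / keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```

Feed it the response body line by line (e.g. `resp.iter_lines()` with an HTTP client of your choice).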