# 🤖 Supported LLM Models

✅ Detoxio currently supports text-based LLMs only (no image or audio models).

🛡️ All requests are routed securely through https://api.detoxio.ai.
## OpenAI Models

| Model | Description |
|---|---|
| gpt-4o | Multimodal flagship model |
| gpt-4 | Advanced reasoning + long context |
| gpt-3.5-turbo | Fast, cost-efficient text model |
All `chat/completions` models are supported.
## GROQ Models

| Model Name | Description |
|---|---|
| meta-llama/llama-3-70b-instruct | LLaMA 3 70B fine-tuned |
| meta-llama/llama-3-8b-instruct | LLaMA 3 8B instruct variant |
| meta-llama/llama-2-70b-chat | LLaMA 2 70B |
| gemma-7b-it | Google Gemma Instruct |
| mixtral-8x7b-instruct | Mixture-of-experts model (Mistral) |
| codellama/CodeLlama-70b-Instruct-hf | Code-focused LLaMA |
Detoxio supports GROQ models through the OpenAI-compatible SDK.
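Because the gateway speaks the OpenAI API, switching to a GROQ model is mostly a matter of pointing the SDK at Detoxio. A minimal sketch, assuming the base URL is https://api.detoxio.ai as shown above and that your key is stored in a hypothetical `DETOXIO_API_KEY` environment variable:

```python
import os
from openai import OpenAI

# Assumption: the Detoxio gateway exposes the standard OpenAI-compatible API
# at https://api.detoxio.ai; the exact base URL for your account may differ.
client = OpenAI(
    base_url="https://api.detoxio.ai",
    api_key=os.environ["DETOXIO_API_KEY"],  # hypothetical env var for your key
)

# Pick any GROQ model from the table above.
response = client.chat.completions.create(
    model="meta-llama/llama-3-70b-instruct",
    messages=[{"role": "user", "content": "Summarize LLaMA 3 in one sentence."}],
)
print(response.choices[0].message.content)
```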
## Together AI Models

| Model Name | Highlights |
|---|---|
| deepseek-ai/DeepSeek-V3 | Strong multilingual LLM |
| togethercomputer/StripedHyena-Nous | Efficient & high-performing |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | Open-weight mixture model |
| google/gemma-7b-it | Instruction-tuned Gemma |
| meta-llama/Llama-3-70b-chat-hf | LLaMA 3 70B chat from Meta |
Most models in the "Chat" or "Instruct" categories are supported.
## Using Models via Detoxio

Simply update your code with:

```python
client.chat.completions.create(
    model="your-selected-model-name",
    messages=[{"role": "user", "content": "Hello"}],
)
```
You can use any model listed above, as long as it is supported for `chat.completions`.
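Since every provider is reached through the same endpoint, the only thing that changes between providers is the `model` string. A small sketch under the same assumptions as the earlier snippet (OpenAI-compatible gateway at https://api.detoxio.ai, hypothetical `DETOXIO_API_KEY` variable), using a few model names from the tables on this page:

```python
import os
from openai import OpenAI

# Assumed Detoxio gateway configuration -- adjust for your account.
client = OpenAI(base_url="https://api.detoxio.ai", api_key=os.environ["DETOXIO_API_KEY"])

# The call shape is identical across OpenAI, GROQ, and Together AI models;
# only the model identifier changes.
for model in ["gpt-4o", "mixtral-8x7b-instruct", "deepseek-ai/DeepSeek-V3"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(f"{model}: {reply.choices[0].message.content}")
```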
## 📌 Note

- Streaming is supported for many models, but availability may depend on the provider (see the sketch after this list).
- Rate limits and quotas are enforced upstream (per API key).
- Want image, audio, or embedding support? Reach out to us at detoxio.ai.
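Where the upstream provider allows it, streaming uses the standard `stream=True` flag of the OpenAI SDK. A sketch under the same client assumptions as the snippets above:

```python
import os
from openai import OpenAI

client = OpenAI(base_url="https://api.detoxio.ai", api_key=os.environ["DETOXIO_API_KEY"])

# Streaming returns incremental chunks; print tokens as they arrive.
# Whether a given model streams through Detoxio depends on the provider.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about API gateways."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```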
Need help selecting the right model? Contact Detoxio for production support or benchmarking.