# 🤖 Supported LLM Models
✅ Detoxio supports text-based LLMs only at the moment (no image/audio tools).
🛡️ All requests are routed securely through https://api.detoxio.ai.
## OpenAI Models
| Model | Description |
|---|---|
| gpt-4o | Multimodal flagship model |
| gpt-4 | Advanced reasoning and long context |
| gpt-3.5-turbo | Fast, cost-efficient text model |
All `chat/completions` models are supported.
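Because every model goes through the same OpenAI-style `chat/completions` endpoint, a request can be assembled with nothing but the standard library. A minimal sketch — the `/v1/chat/completions` path is an assumption following OpenAI's convention, so confirm the exact path against Detoxio's docs:

```python
import json
import urllib.request

# Hypothetical endpoint: base URL from the note above plus the
# conventional OpenAI path suffix.
ENDPOINT = "https://api.detoxio.ai/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat/completions request for the gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request("gpt-3.5-turbo", "Hello", "sk-...")
print(json.loads(req.data)["model"])  # gpt-3.5-turbo
```

In practice you would send `req` with `urllib.request.urlopen(req)` (or use the OpenAI SDK, as shown later in this page).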
## GROQ Models
See the official GROQ model docs for the current list of models available on GroqCloud.
| Model Name | Description |
|---|---|
| llama-3.1-8b-instant | Meta LLaMA 3.1 8B fast-response model |
| llama-3.3-70b-versatile | Meta LLaMA 3.3 70B versatile large model |
| gemma2-9b-it | Google Gemma 2 Instruct |
| meta-llama/llama-guard-4-12b | Meta LLaMA Guard 4, safety/moderation |
| deepseek-r1-distill-llama-70b | DeepSeek Distilled LLaMA 70B |
| meta-llama/llama-4-maverick-17b-128e-instruct | Meta LLaMA 4 Maverick 17B (preview) |
| meta-llama/llama-4-scout-17b-16e-instruct | Meta LLaMA 4 Scout 17B (preview) |
| mistral-saba-24b | Mistral Saba 24B |
| qwen/qwen3-32b | Alibaba Qwen 3 32B |
| compound-beta | Groq Compound System (preview) |
| compound-beta-mini | Groq Compound System Mini (preview) |
Detoxio supports GROQ via the OpenAI-compatible SDK.
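Because model IDs on GroqCloud change frequently, it can help to validate the chosen name before making a network call. A small sketch that treats the table above as a snapshot (the catalogue will drift over time, so refresh this set from the GROQ docs):

```python
# Model IDs copied from the GROQ table above; a point-in-time snapshot.
GROQ_MODELS = {
    "llama-3.1-8b-instant",
    "llama-3.3-70b-versatile",
    "gemma2-9b-it",
    "meta-llama/llama-guard-4-12b",
    "deepseek-r1-distill-llama-70b",
    "meta-llama/llama-4-maverick-17b-128e-instruct",
    "meta-llama/llama-4-scout-17b-16e-instruct",
    "mistral-saba-24b",
    "qwen/qwen3-32b",
    "compound-beta",
    "compound-beta-mini",
}

def check_model(name: str) -> str:
    """Fail fast on a typo'd model ID instead of on a failed API call."""
    if name not in GROQ_MODELS:
        raise ValueError(f"Unknown GROQ model: {name!r}")
    return name

print(check_model("llama-3.1-8b-instant"))  # llama-3.1-8b-instant
```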
## Together AI Models
| Model Name | Highlights |
|---|---|
| deepseek-ai/DeepSeek-V3 | Strong multilingual LLM |
| togethercomputer/StripedHyena-Nous | Efficient and high-performing |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | Open-weight mixture model |
| google/gemma-7b-it | Instruction-tuned Gemma |
| meta-llama/Llama-3-70b-chat-hf | LLaMA 3 70B chat from Meta |
Most models under the "Chat" or "Instruct" category are supported.
## Using Models via Detoxio
Simply point the OpenAI SDK at Detoxio and update your model name:

```python
from openai import OpenAI

# Base URL from the note above; confirm the exact path with Detoxio.
client = OpenAI(api_key="YOUR_DETOXIO_API_KEY", base_url="https://api.detoxio.ai")

response = client.chat.completions.create(
    model="your-selected-model-name",
    messages=[{"role": "user", "content": "Hello"}],
)
```
You can use any model listed above as long as it's supported for `chat.completions`.
## 📌 Note
- Streaming is supported for many models, but availability depends on the provider.
- Rate limits and quotas are enforced upstream (per API key).
- Want image, audio, or embedding support? Reach out to us at detoxio.ai.
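When streaming is available, an OpenAI-compatible endpoint emits Server-Sent Events whose `data:` payloads carry incremental content deltas. A hedged sketch of parsing them — the chunk shape shown follows OpenAI's streaming format, which a given provider may extend:

```python
import json

def extract_deltas(sse_lines):
    """Yield the incremental text pieces from a chat.completions stream."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alive lines, etc.
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel marking end of stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Canned example of what a streamed reply looks like on the wire.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(extract_deltas(sample)))  # Hello
```

With the official SDK you would instead pass `stream=True` to `chat.completions.create` and iterate over the returned chunks, which handles this parsing for you.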
Need help selecting the right model? Contact Detoxio for production support or benchmarking.