🤖 Supported LLM Models

✅ Detoxio currently supports text-based LLMs only (no image or audio models).
🛡️ All requests are routed securely through https://api.detoxio.ai.


OpenAI Models

OpenAI model overview

| Model | Description |
| --- | --- |
| gpt-4o | Multimodal flagship model |
| gpt-4 | Advanced reasoning + long context |
| gpt-3.5-turbo | Fast, cost-efficient text model |

All chat/completions models are supported.


GROQ Models

GROQ model docs

| Model Name | Description |
| --- | --- |
| meta-llama/llama-3-70b-instruct | LLaMA 3 70B fine-tuned |
| meta-llama/llama-3-8b-instruct | LLaMA 3 8B instruct variant |
| meta-llama/llama-2-70b-chat | LLaMA 2 70B |
| gemma-7b-it | Google Gemma Instruct |
| mixtral-8x7b-instruct | Mixture of experts (Mistral) |
| codellama/CodeLlama-70b-Instruct-hf | Code-focused LLaMA |

Detoxio supports GROQ models through the OpenAI-compatible SDK, as shown in the sketch below.
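
Because the request format is OpenAI-compatible, selecting a GROQ-hosted model only changes the model string. This is a minimal sketch, assuming a client already configured for Detoxio (see the setup sketch under "Using Models via Detoxio" below); the model name is taken from the table above.

# Assumes `client` is an OpenAI-compatible client pointed at Detoxio
# (see the setup sketch under "Using Models via Detoxio" below).
response = client.chat.completions.create(
    model="meta-llama/llama-3-70b-instruct",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)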


Together AI Models

Together AI models

| Model Name | Highlights |
| --- | --- |
| deepseek-ai/DeepSeek-V3 | Strong multilingual LLM |
| togethercomputer/StripedHyena-Nous | Efficient & high-performing |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | Open-weight mixture-of-experts model |
| google/gemma-7b-it | Instruction-tuned Gemma |
| meta-llama/Llama-3-70b-chat-hf | LLaMA 3 70B chat from Meta |

Most models in the "Chat" or "Instruct" categories are supported.


Using Models via Detoxio

Simply update the model name in your existing chat.completions call:

client.chat.completions.create(
    model="your-selected-model-name",
    messages=[{"role": "user", "content": "Hello"}]
)

You can use any model listed above as long as it's supported for chat.completions.
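
For a complete, self-contained call, the standard OpenAI Python SDK can be pointed at the Detoxio endpoint. This is only a sketch: the /v1 base path and the DETOXIO_API_KEY environment variable are assumptions, not details confirmed on this page, so substitute whatever your Detoxio account actually provides.

import os
from openai import OpenAI

# Assumption: the Detoxio proxy exposes an OpenAI-compatible /v1 route and
# the API key is stored in the DETOXIO_API_KEY environment variable.
client = OpenAI(
    base_url="https://api.detoxio.ai/v1",
    api_key=os.environ["DETOXIO_API_KEY"]
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat/completions model from the tables above
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)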


📌 Note

  • Streaming is supported for many models, but availability depends on the provider (see the streaming sketch after this list).
  • Rate limits and quotas are enforced upstream (per API key).
  • Want image, audio, or embedding support? Reach out to us at detoxio.ai.
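
As a rough illustration of streaming, the sketch below reuses the client from the setup above; whether a given model actually streams depends on the upstream provider.

# Streaming sketch; assumes the `client` configured above and a model whose
# provider supports streaming responses.
stream = client.chat.completions.create(
    model="your-selected-model-name",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)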

Need help selecting the right model? Contact Detoxio for production support or benchmarking.