# Providers Overview
DTX supports a diverse set of providers, each enabling interaction with different types of language models, from local runtimes to hosted APIs and plugins. This modular system allows seamless integration and red teaming across a wide spectrum of model backends.
## Provider Categories

### Local Models
| Provider | Description |
|---|---|
| hf_model | Local or hosted Hugging Face models |
| ollama | Run models locally via Ollama |
### SaaS Models
| Provider | Description |
|---|---|
| openai | Access GPT, Whisper, and Embedding APIs via OpenAI |
| groq | Ultra-fast access to LLaMA models hosted by Groq |
| litellm | Proxy interface for OpenAI-compatible APIs |
| vllm | Optimized open-source inference backend for hosted models |
### API Endpoints
| Provider | Description |
|---|---|
| http | Custom RESTful model endpoints |
| gradio | Interface with models served via Gradio UIs |
### Plugin Systems
| Provider | Description |
|---|---|
| langhub | Use prompt templates from LangChain Hub |
## Agent Invocation
Most models can be invoked using:

```bash
dtx redteam run --agent <provider> --url <model>
```
### Example

```bash
dtx redteam run --agent openai --url gpt-4o --dataset airbench --eval ibm38
```
This command red-teams OpenAI's GPT-4o model using prompts from the Airbench dataset and scores the responses with the ibm38 evaluator.
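The same pattern applies to local runtimes. As a minimal sketch, assuming an Ollama model tagged `llama3` has already been pulled locally (the model tag here is illustrative, not prescribed by DTX), a run would look like:

```bash
# Red-team a locally served Ollama model; swap the model tag,
# dataset, and evaluator for the ones relevant to your test plan.
dtx redteam run --agent ollama --url llama3 --dataset airbench --eval ibm38
```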
## For Providers Requiring Configuration
API-based and plugin providers (`http`, `gradio`, `langhub`) require additional configuration. Use the interactive agent builder:
```bash
dtx redteam quick
```
### Sample Output
```text
Agent Builder
-------------------------------------
Environment check passed.
Choose your provider:
1. HTTP Provider
2. Gradio Provider
3. LangHub Prompts
```
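Before pointing the HTTP provider at a custom RESTful endpoint, it can help to confirm the endpoint responds at all. The sketch below is purely illustrative: the URL, auth header, and JSON payload shape are assumptions about a generic chat-style API, not a request format defined by DTX.

```bash
# Hypothetical endpoint check; adjust the URL, auth header, and body
# to match whatever your model server actually expects.
curl -X POST "https://models.example.com/v1/chat" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MODEL_API_KEY" \
  -d '{"prompt": "Hello, how are you?"}'
```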
## Supported Models by Provider

### OpenAI
| Model Name | Task Type | Modalities | Description |
|---|---|---|---|
| gpt-4.5-preview | Generation | Text, Code | Most capable GPT model |
| gpt-4o | Generation | Text, Code | Fast and intelligent multi-modal GPT |
| gpt-4o-mini | Generation | Text, Code | Lightweight, low-cost model |
| gpt-4-turbo | Generation | Text, Code | Older turbocharged GPT |
| gpt-3.5-turbo | Generation | Text | Legacy GPT for general tasks |
| text-embedding-3-large | Embedding | Text | Powerful embedding model |
| omni-moderation-latest | Classification | Text, Image | Moderation tool for safety detection |
### Groq
| Model Name | Task Type | Modalities | Description |
|---|---|---|---|
| llama-3.3-70b-versatile | Generation | Text | Meta's versatile 70B model |
| llama-3.1-8b-instant | Generation | Text | Lightweight fast-response model |
| llama-guard-3-8b | Classification | Text | Moderation and safety classification |
| llama3-70b-8192 | Generation | Text | 8k-context LLaMA 3 model |
| llama3-8b-8192 | Generation | Text | 8B LLaMA model |
| whisper-large-v3 | Classification | Text | Speech-to-text model |
| whisper-large-v3-turbo | Classification | Text | Fast Whisper variant |
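Any model from this table can be passed to the run command in the same way as the OpenAI example. In the sketch below, the dataset and evaluator are simply reused from that earlier example; substitute whichever suit your assessment.

```bash
# Red-team a Groq-hosted LLaMA model; dataset and evaluator are
# carried over from the earlier OpenAI example for illustration.
dtx redteam run --agent groq --url llama-3.1-8b-instant --dataset airbench --eval ibm38
```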
## Provider Model Links
For full flexibility and configuration, use:

```bash
dtx redteam run --agent <provider> --url <model>
```
Or launch the guided flow:

```bash
dtx redteam quick
```