# Providers Overview

DTX supports a diverse set of providers, each enabling interaction with different types of language models, from local runtimes to hosted APIs and plugins. This modular system allows seamless integration and red teaming across a wide spectrum of model backends.


## Provider Categories

### Local Models

| Provider | Description |
|----------|-------------|
| `hf_model` | Local or hosted Hugging Face models |
| `ollama` | Run models locally via Ollama |

### SaaS Models

| Provider | Description |
|----------|-------------|
| `openai` | Access GPT, Whisper, and Embedding APIs via OpenAI |
| `groq` | Ultra-fast access to LLaMA models hosted by Groq |
| `litellm` | Proxy interface for OpenAI-compatible APIs |
| `vllm` | Optimized open-source inference backend for hosted models |

### API Endpoints

| Provider | Description |
|----------|-------------|
| `http` | Custom RESTful model endpoints |
| `gradio` | Interface with models served via Gradio UIs |

### Plugin Systems

| Provider | Description |
|----------|-------------|
| `langhub` | Use prompt templates from LangChain Hub |

## Agent Invocation

Most models can be invoked using:

```bash
dtx redteam run --agent <provider> --url <model>
```

### Example

```bash
dtx redteam run --agent openai --url gpt-4o --dataset airbench --eval ibm38
```

This command runs the Airbench dataset against OpenAI's GPT-4o model.
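
The same syntax works for local providers. A minimal sketch against an Ollama-served model follows; the model name `llama3` is illustrative, not a dtx requirement, so substitute any model already pulled into your local Ollama runtime:

```bash
# Red team a locally served Ollama model with the same CLI syntax.
# "llama3" is an example model name; use whatever your Ollama runtime serves.
dtx redteam run --agent ollama --url llama3 --dataset airbench
```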

## Providers Requiring Configuration

API-based and plugin providers (`http`, `gradio`, `langhub`) require additional configuration. Use the interactive agent builder:

```bash
dtx redteam quick
```

### Sample Output

```text
✏️ Agent Builder
-------------------------------------
✔️ Environment check passed.

Choose your provider:
1. HTTP Provider
2. Gradio Provider
3. LangHub Prompts
```
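
For the `http` provider, the target is a REST endpoint you control. As a rough illustration of what such a custom endpoint might look like, the sketch below uses a made-up URL and payload shape; these are assumptions about your own service, not a dtx-defined contract, and the agent builder above is where you describe how dtx should call it:

```bash
# Illustrative only: a custom model endpoint that accepts a prompt and returns text.
# Replace the URL and JSON fields with whatever your service actually expects.
curl -X POST https://models.example.com/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello"}'
```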

## Supported Models by Provider

### OpenAI

| Model Name | Task Type | Modalities | Description |
|------------|-----------|------------|-------------|
| `gpt-4.5-preview` | Generation | Text, Code | Most capable GPT model |
| `gpt-4o` | Generation | Text, Code | Fast and intelligent multimodal GPT |
| `gpt-4o-mini` | Generation | Text, Code | Lightweight, low-cost model |
| `gpt-4-turbo` | Generation | Text, Code | Older turbocharged GPT |
| `gpt-3.5-turbo` | Generation | Text | Legacy GPT for general tasks |
| `text-embedding-3-large` | Embedding | Text | Powerful embedding model |
| `omni-moderation-latest` | Classification | Text, Image | Moderation tool for safety detection |
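
Any of these model names should be usable as the `--url` value in the run command shown earlier. For example, a run against the lightweight `gpt-4o-mini` model (the dataset choice simply reuses `airbench` from the earlier example):

```bash
# Illustrative run against gpt-4o-mini using the documented CLI syntax.
dtx redteam run --agent openai --url gpt-4o-mini --dataset airbench
```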

### Groq

| Model Name | Task Type | Modalities | Description |
|------------|-----------|------------|-------------|
| `llama-3.3-70b-versatile` | Generation | Text | Meta's versatile 70B model |
| `llama-3.1-8b-instant` | Generation | Text | Lightweight fast-response model |
| `llama-guard-3-8b` | Classification | Text | Moderation and safety classification |
| `llama3-70b-8192` | Generation | Text | 8k-context LLaMA 3 model |
| `llama3-8b-8192` | Generation | Text | 8B LLaMA model |
| `whisper-large-v3` | Classification | Text | Speech-to-text model |
| `whisper-large-v3-turbo` | Classification | Text | Fast Whisper variant |
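
Groq-hosted models should follow the same pattern, with `--agent groq` and the model name as `--url`:

```bash
# Illustrative run against a Groq-hosted LLaMA model using the documented syntax.
dtx redteam run --agent groq --url llama-3.1-8b-instant
```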


For full flexibility and configuration, use:

```bash
dtx redteam run --agent <provider> --url <model>
```

Or launch the guided flow:

```bash
dtx redteam quick
```