# Test Hosted Models
## Common Setup
```python
from dtx.sdk.runner import DtxRunner, DtxRunnerConfigBuilder
from dtx_models.providers.base import ProviderType
```
## OpenAI (`OPENAI`)

Use OpenAI's GPT models such as `gpt-4` or `gpt-3.5-turbo`.
```python
cfg = (
    DtxRunnerConfigBuilder()
    .agent_from_provider(ProviderType.OPENAI, "gpt-4")
    .max_prompts(5)
    .build()
)

report = DtxRunner(cfg).run()
print(report.model_dump_json(indent=2))
```
✅ Requires: `OPENAI_API_KEY`
### Get API Key

- Visit https://platform.openai.com/account/api-keys
- Create a key and export it:

```shell
export OPENAI_API_KEY="sk-..."
```
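If the key is missing, the run fails partway through with an authentication error. A fail-fast check at startup makes the problem obvious immediately; a minimal sketch (the `require_env` helper is illustrative, not part of the SDK):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running a scan")
    return value

# Call this before building the runner config, e.g.:
# require_env("OPENAI_API_KEY")
```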
## Ollama (`OLLAMA`)

Run local models via Ollama, such as `llama3` or `qwen3`.
```python
cfg = (
    DtxRunnerConfigBuilder()
    .agent_from_provider(ProviderType.OLLAMA, "qwen3:0.6b")
    .max_prompts(5)
    .build()
)

report = DtxRunner(cfg).run()
print(report.model_dump_json(indent=2))
```
✅ Requires: Ollama running locally
### Install Ollama

- Download from: https://ollama.com/download
- After installation, run:

```shell
ollama run qwen3:0.6b
# or pull the model manually
ollama pull qwen3:0.6b
```
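The runner assumes an Ollama server is reachable. Ollama serves its HTTP API on `http://localhost:11434` by default, so a quick pre-flight check is possible with just the standard library; a minimal sketch:

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers on its default port (11434)."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: no server listening here.
        return False
```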
## Groq via LiteLLM (`LITE_LLM`)

Use Groq's ultra-fast inference through a LiteLLM-compatible endpoint.
```python
cfg = (
    DtxRunnerConfigBuilder()
    .agent_from_provider(ProviderType.LITE_LLM, "groq/llama3-8b-8192")
    .max_prompts(5)
    .build()
)

report = DtxRunner(cfg).run()
print(report.model_dump_json(indent=2))
```
✅ Requires: `GROQ_API_KEY`
### Get API Key

- Visit https://console.groq.com/keys
- Generate a key and export it:

```shell
export GROQ_API_KEY="..."
```
ℹ️ This uses `ProviderType.LITE_LLM` and a model name like `groq/llama3-8b-8192`.
## Notes

- All LiteLLM-based providers use `ProviderType.LITE_LLM`
- The `model` parameter must exactly match the supported model name
- You can add a custom prompt template with `.with_prompt_template(...)` if needed
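LiteLLM routes requests based on a `provider/model` naming convention, which is why the exact model string matters. A small helper to inspect the prefix before building a config (illustrative only, not part of the SDK or of LiteLLM):

```python
def split_litellm_model(model: str) -> tuple[str, str]:
    """Split a LiteLLM route like 'groq/llama3-8b-8192' into (provider, model).

    A name without a provider prefix is returned with an empty provider part.
    """
    prefix, sep, name = model.partition("/")
    if not sep:
        return "", model
    return prefix, name
```

For example, `split_litellm_model("groq/llama3-8b-8192")` returns `("groq", "llama3-8b-8192")`, confirming the request would be routed to Groq.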