Test Hosted Models

Common Setup

from dtx.sdk.runner import DtxRunner, DtxRunnerConfigBuilder
from dtx_models.providers.base import ProviderType

OpenAI (OPENAI)

Use OpenAI's GPT models like gpt-4 or gpt-3.5-turbo.

cfg = (
    DtxRunnerConfigBuilder()
    .agent_from_provider(ProviderType.OPENAI, "gpt-4")
    .max_prompts(5)
    .build()
)

report = DtxRunner(cfg).run()
print(report.model_dump_json(indent=2))

βœ… Requires: OPENAI_API_KEY

Get API Key

  1. Visit https://platform.openai.com/account/api-keys
  2. Create a key and export it:
export OPENAI_API_KEY="sk-..."

Ollama (OLLAMA)

Run local models via Ollama, such as llama3 or qwen3.

cfg = (
    DtxRunnerConfigBuilder()
    .agent_from_provider(ProviderType.OLLAMA, "qwen3:0.6b")
    .max_prompts(5)
    .build()
)

report = DtxRunner(cfg).run()
print(report.model_dump_json(indent=2))

βœ… Requires: Ollama running locally

Install Ollama

  1. Download from: https://ollama.com/download
  2. After installation, run:
ollama run qwen3:0.6b
# or pull the model manually
ollama pull qwen3:0.6b

Groq via LiteLLM (LITE_LLM)

Use Groq’s ultra-fast inference through a LiteLLM-compatible endpoint.

cfg = (
    DtxRunnerConfigBuilder()
    .agent_from_provider(ProviderType.LITE_LLM, "groq/llama3-8b-8192")
    .max_prompts(5)
    .build()
)

report = DtxRunner(cfg).run()
print(report.model_dump_json(indent=2))

βœ… Requires: GROQ_API_KEY

Get API Key

  1. Visit https://console.groq.com/keys
  2. Generate a key and export it:
export GROQ_API_KEY="..."

ℹ️ This uses ProviderType.LITE_LLM and a model name like groq/llama3-8b-8192.
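To make the naming convention concrete: LiteLLM-style identifiers put the routing prefix before the first slash and the model name after it. A minimal sketch in plain Python (no SDK required) showing how the two parts split:

```python
# LiteLLM-style model identifiers follow the "provider/model" pattern.
# Splitting on the first "/" separates the routing prefix from the model name.
model = "groq/llama3-8b-8192"
provider, _, name = model.partition("/")
print(provider)  # groq
print(name)      # llama3-8b-8192
```

Both parts must match LiteLLM's supported names exactly; a typo in either piece will cause the request to be routed to the wrong (or no) backend.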


Notes

  • All LiteLLM-based providers use ProviderType.LITE_LLM
  • The model parameter must exactly match the supported model name
  • You can add a custom prompt template with .with_prompt_template(...) if needed
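The prompt-template hook from the last bullet can be sketched as below. This is a hypothetical usage, not a confirmed signature: the page only shows `.with_prompt_template(...)`, so the string argument and its `{prompt}` placeholder are assumptions to be checked against the SDK reference.

```python
from dtx.sdk.runner import DtxRunner, DtxRunnerConfigBuilder
from dtx_models.providers.base import ProviderType

# Hypothetical: the template string and its {prompt} placeholder are
# assumptions; consult the SDK docs for the actual argument shape.
cfg = (
    DtxRunnerConfigBuilder()
    .agent_from_provider(ProviderType.OPENAI, "gpt-4")
    .with_prompt_template("You are a support bot. User says: {prompt}")
    .max_prompts(5)
    .build()
)

report = DtxRunner(cfg).run()
print(report.model_dump_json(indent=2))
```

As with the earlier examples, this requires OPENAI_API_KEY to be exported before running.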