Python Client Library
The Python client library is auto-generated from our API schema.
Installation
Install the client library and its dependencies
python3 -m pip install \
detoxio-api-protocolbuffers-python detoxio-api-grpc-python grpcio grpcio-tools \
--upgrade --extra-index-url https://buf.build/gen/python
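If the installation succeeded, the generated modules should be importable. A quick sanity check (the module path is the same one used in the examples below):
# Raises ImportError if the generated packages were not installed correctly.
import proto.dtx.services.prompts.v1.prompts_pb2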
Usage
Setup authentication
An API key is required to authenticate with the APIs. Refer to API authentication for more details. Once you have the API key, export it as an environment variable.
export DETOXIO_API_KEY=<your-api-key>
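Before wiring up the channel, a minimal sketch for failing fast when the variable is missing (it assumes only the DETOXIO_API_KEY variable exported above):
import os

# Fail fast with a clear message instead of an opaque authentication error later.
api_key = os.getenv('DETOXIO_API_KEY')
if not api_key:
    raise RuntimeError('DETOXIO_API_KEY is not set; export your API key first')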
Setup gRPC channel
import grpc
import os
# Attach the API key as a per-call bearer token over a TLS connection.
token = grpc.access_token_call_credentials(os.getenv('DETOXIO_API_KEY'))
credentials = grpc.composite_channel_credentials(grpc.ssl_channel_credentials(), token)
channel = grpc.secure_channel('api.detoxio.ai:443', credentials)
Setup API client
import proto.dtx.services.prompts.v1.prompts_pb2 as dtx_prompts_pb2
import proto.dtx.services.prompts.v1.prompts_pb2_grpc as dtx_prompts_pb2_grpc
import proto.dtx.messages.common.llm_pb2 as dtx_llm_pb2
client = dtx_prompts_pb2_grpc.PromptServiceStub(channel)
Test for connectivity and credentials
import google.protobuf.empty_pb2 as empty_pb2
client.Ping(empty_pb2.Empty())
Authentication problems or rate limiting will raise an exception here.
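A minimal sketch for catching that case, assuming only the standard grpc.RpcError raised by generated stubs (client and empty_pb2 are the objects created above):
try:
    client.Ping(empty_pb2.Empty())
    print('connected and authenticated')
except grpc.RpcError as err:
    # UNAUTHENTICATED usually indicates a bad or missing API key;
    # RESOURCE_EXHAUSTED usually indicates rate limiting.
    print(f'Ping failed: {err.code()} - {err.details()}')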
Generate a prompt
res = client.GeneratePrompts(dtx_prompts_pb2.PromptGenerationRequest(count=1))
print(res.prompts[0].data.content)
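The count value used above presumably controls how many prompts come back; a sketch of requesting a small batch and iterating over it, reusing only the fields shown above:
batch = client.GeneratePrompts(dtx_prompts_pb2.PromptGenerationRequest(count=5))
for prompt in batch.prompts:
    print(prompt.data.content)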
Evaluate a prompt response
The prompt evaluation API evaluates a response from the LLM under test. Use the generated prompt to run inference against your LLM and capture its output; that output is then submitted in this call to identify vulnerabilities.
model_output = '<output from LLM under testing>'
# Create the evaluation request to be populated below
req = dtx_prompts_pb2.PromptEvaluationRequest()
# Pass the prompt generated earlier - res.prompts[0]
req.prompt.CopyFrom(res.prompts[0])
response = dtx_prompts_pb2.PromptResponse(message=dtx_llm_pb2.LlmChatIo(content=model_output))
req.responses.extend([response])
client.EvaluateModelInteraction(req)
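Putting the two calls together, here is a sketch of a simple end-to-end test loop. run_model_under_test is a hypothetical placeholder for however you invoke the LLM being tested; everything else reuses the messages and fields shown above.
def run_model_under_test(prompt_text):
    # Hypothetical placeholder: call your own model or inference endpoint here.
    return '<output from LLM under testing>'

gen = client.GeneratePrompts(dtx_prompts_pb2.PromptGenerationRequest(count=3))
for prompt in gen.prompts:
    output = run_model_under_test(prompt.data.content)
    req = dtx_prompts_pb2.PromptEvaluationRequest()
    req.prompt.CopyFrom(prompt)
    response = dtx_prompts_pb2.PromptResponse(message=dtx_llm_pb2.LlmChatIo(content=output))
    req.responses.extend([response])
    result = client.EvaluateModelInteraction(req)
    # Protobuf responses print as human-readable text.
    print(result)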
Note on Datasets
If you are generating datasets for model security testing and evaluation, you can use our threat taxonomy to categorize your dataset. This helps in generating useful reports across different threat categories.
import proto.dtx.messages.common.threat_pb2 as dtx_threat_pb2
Define utility functions to look up human-readable names from the enum constants
def dtx_get_threat_name(tid):
    return dtx_threat_pb2.ThreatCategory.DESCRIPTOR.values_by_number[tid].name

def dtx_get_threat_class(tid):
    return dtx_threat_pb2.ThreatClass.DESCRIPTOR.values_by_number[tid].name
Example categories
dtx_get_threat_class(dtx_threat_pb2.TOXICITY)
dtx_get_threat_name(dtx_threat_pb2.ABUSIVE_LANGUAGE)
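For example, you can enumerate every class and category in the taxonomy via the generated enum descriptors; a sketch that could serve as the skeleton of a per-category report:
# List all threat classes and categories defined in the taxonomy.
for value in dtx_threat_pb2.ThreatClass.DESCRIPTOR.values:
    print('class:', value.number, value.name)
for value in dtx_threat_pb2.ThreatCategory.DESCRIPTOR.values:
    print('category:', value.number, value.name)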