Python SDK
The Python SDK is built on top of the generated Python client library with the goal of providing an excellent developer experience for building LLM security tools in Python. The SDK focuses on use cases while hiding low-level gRPC protocol details. It is the recommended way of using detoxio.ai APIs in Python-based applications.
Installation
python3 -m pip install detoxio \
detoxio-api-protocolbuffers-python detoxio-api-grpc-python \
--upgrade --extra-index-url https://buf.build/gen/python
Note: The additional packages must be installed explicitly because, as per PEP-440, a package published on the public index cannot declare direct dependencies hosted outside it.
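If you pin dependencies in a requirements file, the same install can be expressed there; a minimal sketch (the file layout is illustrative, adjust to your tooling):

--extra-index-url https://buf.build/gen/python
detoxio
detoxio-api-protocolbuffers-python
detoxio-api-grpc-python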
Install the detoxio-api-protocolbuffers-pyi package for type hints and auto-completion in your IDE.
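For example (assuming the package is hosted on the same Buf registry as the others, so the extra index URL is needed again):

python3 -m pip install detoxio-api-protocolbuffers-pyi \
    --upgrade --extra-index-url https://buf.build/gen/python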
LLM Security Scanner
The LLM security scanner interface scans an LLM for security vulnerabilities while maintaining the necessary abstraction from the LLM-specific inference runtime. This keeps the detoxio SDK and APIs decoupled from LLM frameworks such as TensorFlow and PyTorch, so they can integrate seamlessly with any framework.
To get started, write a function that takes a detoxio.ai-generated prompt, performs model-specific inference, and returns the output generated by the model:
from detoxio.scanner import LLMPrompt, LLMResponse

def model_adapter(prompt: LLMPrompt) -> LLMResponse:
    output = llm(input=prompt.content)  # llm(...) is a placeholder function
    return LLMResponse(content=output)
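As a concrete illustration of the decoupling described above, here is a sketch of an adapter that wraps a Hugging Face transformers text-generation pipeline; the model choice and generation parameters are illustrative and not part of the SDK:

from transformers import pipeline
from detoxio.scanner import LLMPrompt, LLMResponse

# Illustrative backend; any inference runtime works as long as the
# adapter maps an LLMPrompt to an LLMResponse.
generator = pipeline("text-generation", model="gpt2")

def hf_model_adapter(prompt: LLMPrompt) -> LLMResponse:
    result = generator(prompt.content, max_new_tokens=128)
    return LLMResponse(content=result[0]["generated_text"])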
Use the LLMScanner class to start scanning an LLM with your model_adapter function:
from detoxio.scanner import LLMScanner

scanner = LLMScanner(count=10)
result = scanner.start(prompt_handler=model_adapter)
Finally, generate a JSON report from the result produced by the scanner:
from detoxio.reporting import JSONLinesLLMScanReport

report = JSONLinesLLMScanReport(file_path="/tmp/report.jsonl")
report.render(result)
The JSONL report generator writes JSON lines, each of which is a serialized PromptEvaluationResponse object.
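Because the report is plain JSONL, it can be post-processed with standard tooling; a minimal sketch that reads the records back (independent of the SDK):

import json

with open("/tmp/report.jsonl") as f:
    for line in f:
        record = json.loads(line)  # one serialized PromptEvaluationResponse per line
        print(record)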