Documentation Index
Fetch the complete documentation index at: https://docs.enkryptai.com/llms.txt
Use this file to discover all available pages before exploring further.
The detector provides a detailed analysis of the text's toxicity, indicating the presence and levels of general toxicity, obscene language, insults, threats, and identity hate. The response contains the following information:
- summary: the list of toxicity categories detected in the text, such as "toxicity", "obscene", "insult", "threat", or "identity_hate".
- details: a score between 0 and 1 for each category, with higher scores indicating a higher level of toxicity, plus a compliance mapping to frameworks such as OWASP LLM, MITRE ATLAS, NIST AI RMF, the EU AI Act, and ISO/IEC standards.
Example request:
import requests
import json
import os
url = "https://api.enkryptai.com/guardrails/detect"
payload = json.dumps({
    "text": "You are a toxic person and a jerk",
    "detectors": {
        "toxicity": {
            "enabled": True,
            "block_message": "Your custom message"
        }
    }
})
headers = {
    'Content-Type': 'application/json',
    'apikey': os.getenv('ENKRYPTAI_API_KEY')
}
response = requests.post(url, headers=headers, data=payload)
print(json.dumps(response.json(), indent=4))
Example response:
{
    "summary": {
        "toxicity": ["toxicity", "obscene", "insult"]
    },
    "details": {
        "toxicity": {
            "toxicity": 0.9821179509162903,
            "severe_toxicity": 0.02981150709092617,
            "obscene": 0.7096250057220459,
            "threat": 0.0008934949873946607,
            "insult": 0.931858479976654,
            "identity_hate": 0.010287383571267128,
            "compliance_mapping": {
                "owasp_llm_2025": [
                    "LLM09:2025 Misinformation",
                    "LLM05:2025 Improper Output Handling"
                ],
                "mitre_atlas": [
                    "AML.T0056: LLM Meta Prompt Extraction (Eliciting toxic responses)"
                ],
                "nist_ai_rmf": [
                    "MANAGE 2.3, MEASURE 2.7 (Harmful bias & toxicity management)"
                ],
                "eu_ai_act": [
                    "Article 15(1) (Accuracy, robustness & safety)"
                ],
                "iso_iec_standards": [
                    "ISO/IEC 42001: 6.4.3"
                ]
            }
        }
    },
    "result_message": "Your custom message"
}
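A minimal sketch of how a caller might act on this response shape: the `is_toxic` helper below is not part of the EnkryptAI API — it is a hypothetical function that assumes the `details.toxicity` structure shown in the example response and treats any category score at or above a chosen threshold as a block signal.

def is_toxic(resp: dict, threshold: float = 0.5) -> bool:
    """Return True if any toxicity category score meets the threshold.

    Assumes the response layout shown above; non-numeric entries
    (e.g. "compliance_mapping") are skipped.
    """
    scores = resp.get("details", {}).get("toxicity", {})
    return any(
        isinstance(v, (int, float)) and v >= threshold
        for v in scores.values()
    )

# Sample values taken from the example response above.
sample = {
    "summary": {"toxicity": ["toxicity", "obscene", "insult"]},
    "details": {
        "toxicity": {
            "toxicity": 0.9821179509162903,
            "obscene": 0.7096250057220459,
            "threat": 0.0008934949873946607,
            "compliance_mapping": {},
        }
    },
    "result_message": "Your custom message",
}

if is_toxic(sample):
    print(sample["result_message"])  # the configured block_message

The threshold is an application choice, not an API parameter; tune it to your own tolerance for false positives.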