The detector provides a detailed analysis of the text’s toxicity, indicating the presence and levels of general toxicity, severe toxicity, obscene language, insults, threats, and identity hate. As the example response below shows, the result contains two parts:

  • summary: The list of toxicity types detected in the text, such as “toxicity”, “obscene”, “insult”, “threat”, or “identity_hate”.
  • details: The score of the text for each toxicity type on a scale of 0 to 1, with higher scores indicating a higher level of that type of toxicity.

The following Python example sends a request with the toxicity detector enabled:
import requests
import json
import os

url = "https://api.enkryptai.com/guardrails/detect"

# Request body: the text to analyze and the detectors to enable
payload = json.dumps({
    "text": "You are a toxic person and a jerk",
    "detectors": {
        "toxicity": {
            "enabled": True
        }
    }
})

# The API key is read from the ENKRYPTAI_API_KEY environment variable
headers = {
    'Content-Type': 'application/json',
    'api_key': os.getenv('ENKRYPTAI_API_KEY')
}

# Send the request and pretty-print the JSON response
response = requests.post(url, headers=headers, data=payload)

print(json.dumps(response.json(), indent=4))

Example response:

{
  "summary": {
    "toxicity": ["toxicity", "obscene", "insult"]
  },
  "details": {
    "toxicity": {
      "toxicity": 0.9821179509162903,
      "severe_toxicity": 0.02981150709092617,
      "obscene": 0.7096250057220459,
      "threat": 0.0008934949873946607,
      "insult": 0.931858479976654,
      "identity_hate": 0.010287383571267128
    }
  }
}
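
The summary lists every toxicity type the detector flagged, while details gives the raw score for each type. Below is a minimal sketch of consuming this response, continuing from the request above; the 0.5 cutoff is an illustrative threshold, not an API-defined value:

result = response.json()

# Types the detector flagged for this text
print("Flagged:", result["summary"]["toxicity"])

# Scores are floats between 0 and 1; the 0.5 cutoff below is an
# illustrative threshold, not part of the API contract.
THRESHOLD = 0.5
for toxicity_type, score in result["details"]["toxicity"].items():
    if score >= THRESHOLD:
        print(f"{toxicity_type}: {score:.2f}")

With the example response above, this prints the three flagged types (“toxicity”, “obscene”, and “insult”) along with their scores.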