This detector takes a request text and the corresponding response text and checks the LLM response for potential hallucination. It returns a boolean indicating whether the response text contains a hallucination. The request body also accepts a context field, which is left empty in the example below.

Example request:

import requests
import json
import os

url = "https://api.enkryptai.com/guardrails/hallucination"

payload = json.dumps({
    "request_text": "What is the capital of France?",
    "response_text": "Tokyo is the capital of France.",
    "context": ""
})

headers = {
    'Content-Type': 'application/json',
    'api_key': os.getenv('ENKRYPTAI_API_KEY')
}

response = requests.post(url, headers=headers, data=payload)

formatted_response = json.dumps(response.json(), indent=4)
print(formatted_response)

Example response:

{
  "summary": {
    "is_hallucination": true
  },
  "details": {
    "prompt_based": 1.0
  }
}
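
To act on the verdict, a caller can branch on summary.is_hallucination in the parsed response. A minimal sketch, assuming the request code above has already run and response holds the API reply (the handling and messages below are illustrative, not part of the API):

result = response.json()

# The summary block carries the boolean verdict; the details block
# carries per-check scores (here, the prompt-based check).
if result["summary"]["is_hallucination"]:
    # Hypothetical handling: block or flag the LLM output downstream.
    print("Hallucination detected; withholding the response.")
else:
    print("Response passed the hallucination check.")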