This detector takes a request text and the corresponding response text and checks for potential hallucination in the output of an LLM. If the response text contains a hallucination, is_hallucination is set to 1; otherwise, it is set to 0. The detector also provides a prompt_based score indicating the extent to which the hallucination is based on the prompt.
NOTE: This is unavailable at the moment. (Coming soon.)
import requests
import json
import os

url = "https://api.enkryptai.com/guardrails/hallucination"

# The request prompt, the LLM response to check, and optional grounding context
payload = json.dumps({
    "request_text": "What is the capital of France?",
    "response_text": "Tokyo is the capital of france.",
    "context": ""
})

headers = {
    'Content-Type': 'application/json',
    'apikey': os.getenv('ENKRYPTAI_API_KEY')  # API key read from the environment
}

response = requests.request("POST", url, headers=headers, data=payload)

# Pretty-print the JSON response from the detector
formatted_response = json.dumps(json.loads(response.text), indent=4)
print(formatted_response)
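For the request above, the detector should flag the response, since Tokyo is not the capital of France. As a rough guide, the printed output would include the two fields described earlier. The sketch below is illustrative only: the exact response schema, any additional fields, and the range of the prompt_based score are assumptions, not confirmed by this page.

# Hypothetical shape of the parsed response body (values are illustrative):
expected_shape = {
    "is_hallucination": 1,  # assumed: 1 flags a hallucination, 0 means none detected
    "prompt_based": 0.2     # assumed: score for how much the hallucination is based on the prompt
}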