Guardrails API Documentation
Hallucination
This detector takes a request text and the corresponding response text and checks the output of an LLM response for potential hallucination. If the response text contains a hallucination, is_hallucination is set to 1; otherwise it is set to 0. The detector also provides a prompt_based score indicating the extent to which the hallucination is based on the prompt.
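Below is a minimal sketch of how a client might call this detector over HTTP. The endpoint URL, authentication header, and request field names (request_text, response_text) are illustrative assumptions, not part of the documented API; only the is_hallucination and prompt_based fields come from the description above.

```python
# Hypothetical client sketch for the hallucination detector.
# ASSUMPTIONS: the endpoint URL, bearer-token header, and the payload
# field names "request_text" / "response_text" are made up for
# illustration; only is_hallucination and prompt_based are documented.
import requests

API_URL = "https://api.example.com/guardrails/hallucination"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "request_text": "What is the capital of Australia?",    # prompt sent to the LLM
    "response_text": "The capital of Australia is Sydney.",  # LLM output to check
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

# is_hallucination: 1 if the response contains a hallucination, else 0.
# prompt_based: extent to which the hallucination is based on the prompt.
if result["is_hallucination"] == 1:
    print(f"Hallucination detected (prompt_based score: {result['prompt_based']})")
else:
    print("No hallucination detected.")
```

In this example the response is factually wrong (the capital of Australia is Canberra, not Sydney), so a detector of this kind would be expected to return is_hallucination = 1.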