Analyze an image containing text using a saved guardrail, selecting whether input (prompt) or output (response) detectors are applied
Parameters:
- Guardrail name: the saved guardrail to apply. Example: "My Guardrail"
- Detector mode: whether to apply input (prompt) or output (response) detectors. Allowed values: prompt, response. Default: "prompt"
Request body for guardrail/policy-based image detection. Detectors are defined by the guardrail or policy; no inline detectors config is needed.
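Since detectors come from the saved guardrail, the request only needs the guardrail name, the detector mode, and the image. A minimal sketch of assembling such a request follows; the endpoint path, auth header, and body field names are illustrative assumptions, not confirmed API identifiers, so check the API reference for the exact shapes.

```python
# Hypothetical endpoint path -- confirm against the EnkryptAI API reference.
API_URL = "https://api.enkryptai.com/guardrails/image/detect"

def build_detect_request(api_key: str, guardrail_name: str,
                         image_b64: str, mode: str = "prompt") -> dict:
    """Assemble the pieces of a guardrail image-detection call.

    `mode` selects which detectors run: "prompt" (input) or
    "response" (output), defaulting to "prompt" as in the docs.
    """
    if mode not in ("prompt", "response"):
        raise ValueError("mode must be 'prompt' or 'response'")
    return {
        "url": API_URL,
        "headers": {"apikey": api_key},        # assumed auth header name
        "json": {
            "guardrail_name": guardrail_name,  # the saved guardrail to apply
            "mode": mode,
            "image": image_b64,                # base64-encoded image with text
        },
    }
```

The returned dict can be passed to any HTTP client, e.g. `requests.post(req["url"], headers=req["headers"], json=req["json"])`; no inline detectors config appears in the body because the guardrail defines the detectors server-side.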
Multimodal image detection results.
Per-detector integer flags (1 = detected, 0 = not detected).
{
  "toxicity": 0,
  "nsfw": 0,
  "pii": 1,
  "injection_attack": 0,
  "policy_violation": 0
}

Per-detector detail objects containing human-readable results and any extra fields (e.g. entities for PII, explanation for policy_violation).
{
  "toxicity": { "toxicity": "Toxicity Not Detected" },
  "nsfw": { "nsfw": "NSFW Not Detected" },
  "pii": {
    "entities": { "person": { "John Doe": "<person_0>" } }
  },
  "injection_attack": {
    "injection_attack": "Injection Attack Not Detected"
  },
  "policy_violation": {
    "policy_violation": "Policy Violation Not Detected"
  }
}
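On the client side, the integer flags and the detail objects can be joined to list exactly which detectors fired and why. A minimal sketch, assuming the two objects above arrive as plain dicts (the top-level names `summary` and `details` here are local variable names, not confirmed response keys):

```python
def fired_detectors(summary: dict, details: dict) -> list:
    """Return (detector, detail) pairs for every detector whose flag is 1.

    `summary` maps detector name -> 0/1 flag; `details` maps detector
    name -> its human-readable detail object (e.g. PII entities).
    """
    return [(name, details.get(name, {}))
            for name, flag in summary.items() if flag == 1]

# Using the example payloads from above: only "pii" has flag 1,
# so only its detail object (the detected entities) is returned.
summary = {"toxicity": 0, "nsfw": 0, "pii": 1,
           "injection_attack": 0, "policy_violation": 0}
details = {"pii": {"entities": {"person": {"John Doe": "<person_0>"}}}}
hits = fired_detectors(summary, details)
# hits == [("pii", {"entities": {"person": {"John Doe": "<person_0>"}}})]
```

Checking `if hits:` is then enough to decide whether the image should be blocked or redacted before it reaches the model or the user.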