Guardrails API Documentation
System Prompt Leak Detector
This detector flags potential leaks of system prompts or AI model information in generated text. It helps maintain the integrity of the AI system by preventing unintended disclosure of internal prompts or model details.
Example request:
import requests
import json
import os

url = "https://api.enkryptai.com/guardrails/detect"

# Enable the system prompt leak detector; "index" selects the prompt index
# to compare the text against.
payload = json.dumps({
    "text": "You are a helpful AI assistant",
    "detectors": {
        "system_prompt": {
            "enabled": True,
            "index": "system"
        }
    }
})

headers = {
    "Content-Type": "application/json",
    "api_key": os.getenv("ENKRYPTAI_API_KEY")
}

response = requests.post(url, headers=headers, data=payload)

# Pretty-print the JSON response
print(json.dumps(response.json(), indent=4))
Example response:
{
    "summary": {
        "system_prompt_leak": 1
    },
    "details": {
        "system_prompt": {
            "similarity_score": 0.85
        }
    }
}
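A caller would typically branch on the response fields before returning the generated text. The sketch below shows one way to do this, using the field names from the example response above (`summary.system_prompt_leak` and `details.system_prompt.similarity_score`); the hard-coded response dict stands in for a live API call.

```python
# Illustrative response body, copied from the example response above.
response_body = {
    "summary": {"system_prompt_leak": 1},
    "details": {"system_prompt": {"similarity_score": 0.85}},
}

# summary.system_prompt_leak is 1 when a leak is detected, 0 otherwise.
leak_detected = bool(response_body["summary"].get("system_prompt_leak", 0))
score = response_body["details"]["system_prompt"]["similarity_score"]

if leak_detected:
    print(f"Potential system prompt leak (similarity {score:.2f}); blocking output.")
else:
    print("No leak detected; output is safe to return.")
```

In a real integration, `response_body` would be the parsed JSON from the detect endpoint (`response.json()` in the request example above).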