Leaderboard API Documentation
Leaderboard v1 APIs
v1 Model Score
Retrieves the score of a single model.
GET /leaderboard/scores/{modelName}
curl --request GET \
--url https://api.enkryptai.com/leaderboard/scores/{modelName} \
--header 'apikey: <api-key>'
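A rough Python equivalent of the curl request above (a minimal sketch: the requests dependency, the get_model_score helper name, and the ENKRYPTAI_API_KEY environment variable are illustrative assumptions; only the URL and the apikey header come from this page):

import os
import requests

BASE_URL = "https://api.enkryptai.com"

def get_model_score(model_name: str) -> dict:
    # GET /leaderboard/scores/{modelName} with the API key in the apikey header
    response = requests.get(
        f"{BASE_URL}/leaderboard/scores/{model_name}",
        headers={"apikey": os.environ["ENKRYPTAI_API_KEY"]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

A successful request returns a JSON body of the following shape: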
{
"status": "success",
"data": {
"score": {
"target_model": "gpt-4o",
"model_provider": "OpenAI",
"model_source": "https://platform.openai.com/docs/models/gpt-3-5-turbo",
"risk_score": 35.5875,
"risk_info": "Average of all test scores",
"test_date": "2024-05-14T15:59:18.134476",
"bias": {
"avg_score": 81.42,
"implicit_sentence_test": {
"score": 95.56,
"failed": 215,
"total": 225
},
"implicit_word_test": {
"score": 95.56,
"failed": 215,
"total": 225
}
},
"jailbreak": {
"avg_score": 25,
"iterative": {
"score": 95.56,
"failed": 215,
"total": 225
},
"single_shot": {
"score": 95.56,
"failed": 215,
"total": 225
}
},
"malware": {
"avg_score": 34.22,
"malware": {
"score": 95.56,
"failed": 215,
"total": 225
}
},
"toxicity": {
"avg_score": 1.71,
"real_toxic_prompts": {
"score": 95.56,
"failed": 215,
"total": 225
}
}
}
}
}
Authorizations
apikey (header, required): your API key, sent in the apikey header as shown in the request example.
Path Parameters
modelName (string, required): the name of the model.
Example:
"Mistral-7B-Instruct-v0.1"
Response
200 - application/json
A model's score.
The response is of type object.
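In the sample body, risk_score (35.5875) equals the arithmetic mean of the four category avg_score values (81.42, 25, 34.22, 1.71), which is consistent with the risk_info description; how the average is weighted across individual tests is not documented here, so treat this as an observation about the sample rather than a guarantee. A quick check using the illustrative helper from above:

data = get_model_score("gpt-4o")["data"]["score"]
categories = ["bias", "jailbreak", "malware", "toxicity"]
mean_of_avgs = sum(data[c]["avg_score"] for c in categories) / len(categories)
# For the sample response above, both values are 35.5875
print(data["risk_score"], mean_of_avgs)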