Leaderboard API Documentation

Leaderboard v1 APIs

v1 Scores list
Retrieves a list of scores for all models.
GET /leaderboard/scores
curl --request GET \
--url https://api.enkryptai.com/leaderboard/scores \
--header 'apikey: <api-key>'
{
"status": "success",
"data": {
"scores": [
{
"target_model": "gpt-4o",
"model_provider": "OpenAI",
"model_source": "https://platform.openai.com/docs/models/gpt-3-5-turbo",
"model_type": "text_to_text",
"risk_score": 35.5875,
"risk_info": "Average of all test scores",
"test_date": "2024-05-14T15:59:18.134476",
"bias": {
"avg_score": 81.42,
"implicit_sentence_test": {
"score": 95.56,
"failed": 215,
"total": 225
},
"implicit_word_test": {
"score": 95.56,
"failed": 215,
"total": 225
}
},
"jailbreak": {
"avg_score": 25,
"iterative": {
"score": 95.56,
"failed": 215,
"total": 225
},
"single_shot": {
"score": 95.56,
"failed": 215,
"total": 225
}
},
"malware": {
"avg_score": 34.22,
"malware": {
"score": 95.56,
"failed": 215,
"total": 225
}
},
"toxicity": {
"avg_score": 1.71,
"real_toxic_prompts": {
"score": 95.56,
"failed": 215,
"total": 225
}
}
}
]
}
}
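The numbers in the sample response are internally consistent: every per-test score equals its failure rate (failed / total × 100), and risk_score equals the mean of the four category avg_score values, matching the stated risk_info of "Average of all test scores". These relationships are inferred from the sample values and can be checked with a few lines of Python:

```python
# Per-test score: failure rate as a percentage (inferred from the sample,
# where failed=215 and total=225 always pair with score=95.56).
failed, total = 215, 225
score = round(failed / total * 100, 2)
print(score)  # 95.56

# risk_score: mean of the category-level avg_score values
# (bias, jailbreak, malware, toxicity), per the risk_info field.
category_averages = [81.42, 25, 34.22, 1.71]
risk_score = sum(category_averages) / len(category_averages)
print(round(risk_score, 4))  # 35.5875
```

This is a sanity check on the sample data, not an official scoring formula; the API reference only states that risk_score is an average of test scores.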
Authorizations

apikey (string, header, required) — your API key, sent as the apikey header (as shown in the curl example).
Response

200 - application/json

A list of scores.

Field examples:
- status: "success"
- target_model: "gpt-4o"
- model_provider: "OpenAI"
- model_source: "https://platform.openai.com/docs/models/gpt-3-5-turbo"
- model_type: "text_to_text"
- risk_score: 35.5875
- risk_info: "Average of all test scores"
- test_date: "2024-05-14T15:59:18.134476"
- bias.avg_score: 81.42
- jailbreak.avg_score: 25
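A minimal Python equivalent of the curl request, using only the standard library. The base URL and the apikey header come from the curl example above; the response-envelope unwrapping (data.scores) follows the sample JSON. This is a sketch, not an official client:

```python
import json
import urllib.request

# Endpoint and auth header as shown in the curl example.
API_URL = "https://api.enkryptai.com/leaderboard/scores"


def build_request(api_key: str) -> urllib.request.Request:
    """Build the GET request with the apikey header."""
    return urllib.request.Request(API_URL, headers={"apikey": api_key}, method="GET")


def fetch_scores(api_key: str) -> list:
    """Call the endpoint and return the data.scores list from the envelope."""
    with urllib.request.urlopen(build_request(api_key)) as resp:
        payload = json.load(resp)
    return payload["data"]["scores"]
```

Usage: scores = fetch_scores("<api-key>") returns the list of per-model score objects shown in the sample response.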