LLM Safety Leaderboard API Documentation

Welcome to the LLM Safety Leaderboard API documentation. This guide explains how to integrate the API into your enterprise applications and use it to assess and compare the safety of language models, so you can deploy safer models and strengthen operational security and compliance.

Purpose

The LLM Safety Leaderboard API is a tool for evaluating the safety of language models. It lets you compare models against one another so that only the safest and most reliable ones are used in your applications, which is essential for maintaining high standards of safety, security, and user trust within your enterprise.

Offered APIs

Our API suite includes the following endpoints to support your safety evaluation needs; an example request is sketched after this list:

  • Scores: Provides safety scores for different models based on risk types such as bias, jailbreak, malware, and toxicity. A lower score indicates better safety performance.
  • Summary: Delivers a comprehensive summary of specific risk types and subtypes across different categories.
  • Details: Offers extensive details on particular risk types and subtypes, providing in-depth insights into the safety metrics.
  • Model Ranks: Allows you to rank models based on their safety performance.
  • Model Percentiles: Shows the percentile ranks of models, with lower percentiles indicating better safety.
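
As a quick illustration, the sketch below queries a scores endpoint with Python's requests library. It is a minimal example only: the base URL, the "/scores" path, the "apikey" header name, the "risk_type" query parameter, and the ENKRYPTAI_API_KEY environment variable are assumptions made for the sketch, so substitute the exact names from the endpoint reference for your account.

```python
import os

import requests

# Assumed base URL and endpoint path, for illustration only.
BASE_URL = "https://api.enkryptai.com/leaderboard"


def get_scores(risk_type: str) -> dict:
    """Fetch safety scores for one risk type (e.g. 'toxicity')."""
    response = requests.get(
        f"{BASE_URL}/scores",
        # Assumed header name; use the authorization header your API section specifies.
        headers={"apikey": os.environ["ENKRYPTAI_API_KEY"]},
        params={"risk_type": risk_type},
        timeout=30,
    )
    response.raise_for_status()
    # Lower scores indicate better safety performance.
    return response.json()


if __name__ == "__main__":
    print(get_scores("toxicity"))
```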

Obtaining an API Key

To get started with the LLM Safety Leaderboard API, you need to obtain an API key. Follow these steps:

  1. Log in: Access your account at app.enkryptai.com.
  2. Get API Key: Navigate to the API section to get your unique API key.
  3. Authentication: Use this API key in the authorization headers of your leaderboard API calls (see the sketch below).
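
The sketch below shows one way to attach the key to your requests. The "apikey" header name and the "/model-ranks" path are assumptions used for illustration; replace them with the header and endpoint names shown in your account's API section.

```python
import os

import requests

# Read the key from an environment variable rather than hard-coding it.
session = requests.Session()
session.headers.update({"apikey": os.environ["ENKRYPTAI_API_KEY"]})

# Reusing a Session attaches the key to every leaderboard call automatically.
response = session.get(
    "https://api.enkryptai.com/leaderboard/model-ranks", timeout=30
)
response.raise_for_status()
print(response.json())
```

Keeping the key in an environment variable keeps credentials out of source control, and a shared session avoids repeating the header on every call.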

By following these steps, you can seamlessly integrate the LLM Safety Leaderboard API into your enterprise applications, ensuring enhanced safety and compliance.

We are committed to helping you maintain the highest standards of safety in your language models. Should you have any questions or require further assistance, our support team is readily available to assist you.

Let’s ensure safer AI together!