Models API Documentation
Modify Model

Update the configuration of a saved model. Send a PATCH request to https://api.enkryptai.com/models/modify-model with the model's saved name in the X-Enkrypt-Model header and the updated model details in the request body.
curl --request PATCH \
  --url https://api.enkryptai.com/models/modify-model \
  --header 'Content-Type: application/json' \
  --header 'X-Enkrypt-Model: <x-enkrypt-model>' \
  --header 'apikey: <api-key>' \
  --data '{
    "model_saved_name": "Test Model",
    "testing_for": "LLM",
    "model_name": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "model_type": "text_2_text",
    "certifications": [],
    "model_config": {
      "model_provider": "together",
      "model_version": "1",
      "hosting_type": "External",
      "model_source": "https://together.ai",
      "rate_per_min": 20,
      "system_prompt": "",
      "conversation_template": "",
      "endpoint": {
        "scheme": "https",
        "host": "api.together.xyz",
        "port": 443,
        "base_path": "/v1"
      },
      "paths": {
        "completions": "/completions",
        "chat": "/chat/completions"
      },
      "auth_data": {
        "header_name": "Authorization",
        "header_prefix": "Bearer",
        "space_after_prefix": true
      },
      "apikeys": [
        "xxxxx"
      ],
      "metadata": {},
      "default_request_options": {}
    }
  }'
{
  "message": "Model details updated successfully",
  "data": {
    "model_saved_name": "Test Model",
    "testing_for": "LLM",
    "model_name": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "model_type": "text_2_text",
    "certifications": [],
    "model_config": {
      "paths": {
        "chat": "/chat/completions",
        "completions": "/completions"
      },
      "apikeys": [
        "xxxxx"
      ],
      "endpoint": {
        "host": "api.together.xyz",
        "port": 443,
        "scheme": "https",
        "base_path": "/v1"
      },
      "metadata": {},
      "auth_data": {
        "header_name": "Authorization",
        "header_prefix": "Bearer",
        "space_after_prefix": true
      },
      "hosting_type": "External",
      "model_source": "https://together.ai",
      "model_version": "1",
      "system_prompt": "",
      "model_provider": "together",
      "conversation_template": "",
      "default_request_options": {}
    },
    "created_at": "2024-10-15T17:22:47.872389+00:00",
    "project_name": "default",
    "updated_at": "2024-10-15T17:22:47.872389+00:00",
    "model_id": 1234567890
  }
}
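The same request can be made from Python. The sketch below is illustrative: it assumes the requests library, and the <api-key>, "xxxxx", and payload values are the same placeholders used in the curl example above.

import requests

ENKRYPT_API_KEY = "<api-key>"      # placeholder: your Enkrypt AI API key
MODEL_SAVED_NAME = "Test Model"    # saved name of the model to modify

payload = {
    "model_saved_name": "Test Model",
    "testing_for": "LLM",
    "model_name": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "model_type": "text_2_text",
    "certifications": [],
    "model_config": {
        "model_provider": "together",
        "model_version": "1",
        "hosting_type": "External",
        "model_source": "https://together.ai",
        "rate_per_min": 20,
        "system_prompt": "",
        "conversation_template": "",
        "endpoint": {"scheme": "https", "host": "api.together.xyz", "port": 443, "base_path": "/v1"},
        "paths": {"completions": "/completions", "chat": "/chat/completions"},
        "auth_data": {"header_name": "Authorization", "header_prefix": "Bearer", "space_after_prefix": True},
        "apikeys": ["xxxxx"],
        "metadata": {},
        "default_request_options": {},
    },
}

response = requests.patch(
    "https://api.enkryptai.com/models/modify-model",
    headers={
        "Content-Type": "application/json",
        "X-Enkrypt-Model": MODEL_SAVED_NAME,
        "apikey": ENKRYPT_API_KEY,
    },
    json=payload,
)
response.raise_for_status()
print(response.json()["message"])  # "Model details updated successfully"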
Authorizations

apikey
Your Enkrypt AI API key, passed in the apikey header.

Headers

X-Enkrypt-Model
The saved name of the model to modify. Example: "Test Model"
Body
model_saved_name
Name of the saved model. Example: "Test Model"

model_name
Name of the model. Example: "mistralai/Mixtral-8x7B-Instruct-v0.1"

model_config.model_provider
Provider of the model, which determines the request and response format. Options: openai, together, huggingface, groq, azure_openai, anthropic, cohere, bedrock, gemini, ai21, fireworks, alibaba, portkey, deepseek, mistral, llama, openai_compatible, cohere_compatible, anthropic_compatible. Example: "together"

model_config.endpoint.scheme
Scheme of the endpoint. Example: "https"

model_config.endpoint.host
Host of the endpoint. Example: "api.together.xyz"

model_config.endpoint.port
Port of the endpoint. Example: 443

model_config.endpoint.base_path
Base path of the endpoint. Example: "/v1"

model_config.model_version
Custom identifier for the model. Example: "v1"

model_config.hosting_type
Hosting type of the model. Options: External, Internal. Example: "External"

model_config.model_source
Source of the model. Example: "https://together.ai"

model_config.rate_per_min
Request rate per minute. Values below 100 do not enable async mode, values above 100 enable async mode, and values above 200 enable boosted async mode (all tests run in parallel). Default: 20. Example: 20

model_config.system_prompt
System prompt. Example: ""

model_config.conversation_template
Conversation template. Example: ""

model_config.apikeys
API keys for the model provider. Example: ["TOGETHER_AI_API_KEY"]
2048
2.5
10
If the provider is azure_openai, the instance type. Example: "enkrypt2024"

If the provider is azure_openai, the API version. Example: "2024-02-01"

If the provider is azure_openai, the deployment ID. Example: "gpt3"

If the provider is anthropic, the Anthropic version. Example: ""

If the model is Llama 2, its format. Options: openai. Example: "openai"

If the model is Mistral, its format. Options: openai, ollama. Example: "openai"

If running Gemini on Vertex AI, the regional API endpoint (hostname only). Example: ""

If running Gemini on Vertex AI, the project ID. Example: ""

If running Gemini on Vertex AI, the location ID. Example: ""

testing_for
Purpose of testing. Options: Copilot, LLM, Chatbot. Example: "LLM"

model_type
Type of the model. Options: text_2_text. Example: "text_2_text"

certifications
List of certifications. Available values: "GDPR", "CCPA", "HIPAA", "SOC 2 Type II", "SOC 3". Example: []
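The auth_data object in model_config describes how the outbound authorization header for the model provider is assembled from header_name, header_prefix, space_after_prefix, and an entry from apikeys. The following is a minimal sketch of that composition for illustration only; it is not Enkrypt's internal implementation.

def build_provider_auth_header(auth_data: dict, apikeys: list) -> dict:
    # Join the prefix and key with a space only when space_after_prefix is true,
    # e.g. {"Authorization": "Bearer xxxxx"} for the values in the example request.
    separator = " " if auth_data.get("space_after_prefix", True) else ""
    value = f"{auth_data.get('header_prefix', '')}{separator}{apikeys[0]}"
    return {auth_data["header_name"]: value}

auth_data = {"header_name": "Authorization", "header_prefix": "Bearer", "space_after_prefix": True}
print(build_provider_auth_header(auth_data, ["xxxxx"]))  # {'Authorization': 'Bearer xxxxx'}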
Response
"Model details updated successfully"
Name of the saved model
"Test Model"
Name of the model
"mistralai/Mixtral-8x7B-Instruct-v0.1"
Provider of the model which determines the request response format
openai
, together
, huggingface
, groq
, azure_openai
, anthropic
, cohere
, bedrock
, gemini
, ai21
, fireworks
, alibaba
, portkey
, deepseek
, mistral
, llama
, openai_compatible
, cohere_compatible
, anthropic_compatible
"together"
Scheme of the endpoint
"https"
Host of the endpoint
"api.together.xyz"
Port of the endpoint
443
Base path of the endpoint
"/v1"
Custom identifier for the model
"v1"
Hosting type of the model
External
, Internal
"External"
Source of the model
"https://together.ai"
< 100 won't enable async. > 100 will enable async mode. > 200 we can run boosted async (all tests in parallel). Default 20.
20
System prompt
""
Conversation template
""
["TOGETHER_AI_API_KEY"]
2048
2.5
10
If Azure, it's instance type
"enkrypt2024"
If Azure, it's API version
"2024-02-01"
If Azure, it's deployment ID
"gpt3"
If Anthropic, it's version
""
If Llama2, it's format
openai
"openai"
If Mistral, it's format
openai
, ollama
"openai"
If running Gemini on Vertex, specify the regional API endpoint (hostname only)
""
If running Gemini on Vertex, specify the project ID
""
If running Gemini on Vertex, specify the location ID
""
Purpose of testing
Copilot
, LLM
, Chatbot
"LLM"
Type of the model
text_2_text
"text_2_text"
List of certifications
[
"GDPR",
"CCPA",
"HIPAA",
"SOC 2 Type II",
"SOC 3"
]
1234567890
"2024-10-15T17:22:47.872389+00:00"
"2024-10-15T17:22:47.872389+00:00"
"default"