The payload for EnkryptAI’s Red Teaming API V3 is structured as a JSON object with three primary sections: dataset_configuration, redteam_test_configurations, and endpoint_configuration.

What’s New in V3

V3 introduces significant enhancements to attack method configuration:
  • Granular Parameter Control: Each attack method now supports specific parameters for fine-tuned testing
  • Structured Attack Hierarchy: Clear organization of basic, static, and dynamic attack methods
  • Enhanced Attack Methods: Expanded suite of encoding, obfuscation, and multi-modal attack techniques

V2 to V3 Migration

V2 Format (Legacy):
{
  "redteam_test_configurations": {
    "privacy_test": {
      "sample_percentage": 50,
      "attack_methods": {
        "basic": ["basic"],
        "advanced": {
          "static": ["encoding"],
          "dynamic": ["iterative"]
        }
      }
    }
  }
}
V3 Format (Current):
{
  "redteam_test_configurations": {
    "privacy_test": {
      "sample_percentage": 50,
      "attack_methods": {
        "basic": {"basic": {"params": {}}},
        "static": {
          "base64_encoding": {
            "params": {"encoding_type": "base64", "iterations": 2}
          }
        },
        "dynamic": {
          "iterative": {
            "params": {
              "width": 5,
              "branching_factor": 9,
              "depth": 3
            }
          }
        }
      }
    }
  }
}
Key Changes:
  1. Attack methods are now objects with params instead of arrays
  2. Each attack method requires a params object (can be empty {})
  3. Use specific encoding keywords (e.g., base64_encoding instead of generic encoding)
  4. Configure parameters for iterative attacks: width, branching_factor, depth
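The mapping above can be automated. The sketch below converts a V2 attack_methods block into the V3 shape; the keyword map and default parameters mirror the example above, but treat them as illustrative assumptions rather than an official migration tool.

```python
def migrate_attack_methods(v2: dict) -> dict:
    """Convert a V2 attack_methods block to the V3 object-with-params shape.

    Illustrative sketch: the keyword mapping ("encoding" -> "base64_encoding")
    and the default params are taken from the example above, not from an
    official migration guide.
    """
    keyword_map = {"encoding": "base64_encoding"}
    default_params = {
        "base64_encoding": {"encoding_type": "base64", "iterations": 2},
        "iterative": {"width": 5, "branching_factor": 9, "depth": 3},
    }

    def to_v3(methods: list) -> dict:
        out = {}
        for m in methods:
            key = keyword_map.get(m, m)
            # V3 requires a params object for every method, even if empty.
            out[key] = {"params": dict(default_params.get(key, {}))}
        return out

    v3 = {}
    if "basic" in v2:
        v3["basic"] = to_v3(v2["basic"])
    advanced = v2.get("advanced", {})
    for tier in ("static", "dynamic"):
        if advanced.get(tier):
            v3[tier] = to_v3(advanced[tier])
    return v3
```

Applied to the V2 example above, this produces exactly the V3 example shown.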

Quick Reference: Attack Methods

Choose attack methods based on your model type and security testing needs.

By Model Type

The table below covers attack methods for LLM (text) models; VLM (vision) and ALM (audio) keywords are listed in the Quick Lookup section that follows.
| Attack Method | Keyword | Category | Use When | Parameters | Complexity |
|---|---|---|---|---|---|
| Raw Prompts | basic | Basic | Always (baseline) | None | ⭐ Low |
| Obfuscation | obfuscation | Static | Standard testing | None | ⭐⭐ Medium |
| Base64 | base64_encoding | Static | Encoding evasion | iterations (1-3) | ⭐⭐ Medium |
| Hexadecimal | hex_encoding | Static | Technical filters | None | ⭐⭐ Medium |
| ASCII | ascii_encoding | Static | Text filters | None | ⭐⭐ Medium |
| Binary | binary_encoding | Static | Advanced encoding | None | ⭐⭐ Medium |
| URL Encoding | url_encoding | Static | Web applications | None | ⭐ Low |
| Leet Speak | leet_encoding | Static | Character matching | None | ⭐ Low |
| ROT13 | rot13_encoding | Static | Cipher testing | None | ⭐ Low |
| ROT21 | rot21_encoding | Static | Cipher testing | None | ⭐ Low |
| Morse Code | morse_encoding | Static | Unique encoding | None | ⭐⭐ Medium |
| French | lang_fr | Static | Multilingual bypass | None | ⭐⭐ Medium |
| Italian | lang_it | Static | Multilingual bypass | None | ⭐⭐ Medium |
| Hindi | lang_hi | Static | Non-Latin scripts | None | ⭐⭐ Medium |
| Spanish | lang_es | Static | Multilingual bypass | None | ⭐⭐ Medium |
| Japanese | lang_ja | Static | Asian languages | None | ⭐⭐ Medium |
| EAI Attack | eai_attack | Static | Advanced jailbreak | None | ⭐⭐⭐ High |
| Deep Inception | deep_inception | Static | Nested injection | None | ⭐⭐⭐ High |
| Iterative | iterative | Dynamic | Adaptive attacks | width, branching_factor, depth | ⭐⭐⭐ High |
| Multi-Turn | multi_turn | Dynamic | Conversation exploit | None | ⭐⭐⭐ High |

Quick Lookup: All Keywords

  • basic - LLM, VLM, ALM
  • obfuscation - LLM, VLM
  • eai_attack - LLM, VLM
  • ascii_encoding - LLM
  • base64_encoding - LLM
  • binary_encoding - LLM
  • hex_encoding - LLM
  • url_encoding - LLM
  • leet_encoding - LLM
  • morse_encoding - LLM
  • rot13_encoding - LLM
  • rot21_encoding - LLM
  • lang_fr - French (LLM)
  • lang_it - Italian (LLM)
  • lang_hi - Hindi (LLM)
  • lang_es - Spanish (LLM)
  • lang_ja - Japanese (LLM)
  • iterative - LLM
  • multi_turn - LLM
  • deep_inception - LLM
  • masking - VLM
  • figstep - VLM
  • hades 🔒 - VLM
  • jood 🔒 - VLM
  • waveform - ALM
  • echo - ALM
  • speed - ALM
  • pitch - ALM
  • reverb - ALM
  • noise - ALM

Payload Structure

High-Level Overview

{
    "dataset_configuration": {
        // Optional: For generating custom datasets
        "system_description": "Your AI system description",
        "policy_description": "What the model should NOT do",
        "max_prompts": 100,
        "scenarios": 2,
        "categories": 2,
        "depth": 2
    },
    "redteam_test_configurations": {
        // Required: Tests to run
        "test_name": {
            "sample_percentage": 10,
            "attack_methods": {
                "basic": {"basic": {"params": {}}},
                "static": { /* encoding/obfuscation methods */ },
                "dynamic": { /* adaptive attack methods */ }
            }
        }
    },
    "endpoint_configuration": {
        // Required: Model to test
        "testing_for": "foundationModels",
        "model_name": "gpt-4o",
        "model_config": {
            "model_provider": "openai",
            "endpoint": { /* API endpoint details */ },
            "auth_data": { /* Authentication config */ },
            "apikeys": ["YOUR_API_KEY"],
            "input_modalities": ["text"],
            "output_modalities": ["text"]
        }
    }
}
For complete field descriptions, see Configuration Reference.
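As a quick sanity check before submitting, a payload can be validated against the shape above. This is an illustrative sketch based only on the structure described in this document, not an official client; the real API may enforce additional rules.

```python
def validate_payload(payload: dict) -> list[str]:
    """Return a list of problems found in a V3 payload (empty list = looks OK)."""
    problems = []
    # Both of these sections are required; dataset_configuration is optional.
    if "redteam_test_configurations" not in payload:
        problems.append("missing required section: redteam_test_configurations")
    if "endpoint_configuration" not in payload:
        problems.append("missing required section: endpoint_configuration")
    for test, cfg in payload.get("redteam_test_configurations", {}).items():
        if "sample_percentage" not in cfg:
            problems.append(f"{test}: missing sample_percentage")
        for tier, methods in cfg.get("attack_methods", {}).items():
            if not isinstance(methods, dict):
                # V2 used arrays; V3 uses objects keyed by method name.
                problems.append(f"{test}.{tier}: V3 expects an object, not a list")
                continue
            for name, spec in methods.items():
                if "params" not in spec:
                    problems.append(f"{test}.{tier}.{name}: missing params object")
    return problems
```

This catches the most common V2-to-V3 mistake: leaving attack methods as arrays instead of objects with a params key.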

Quick Start Examples

Starter: Basic LLM Test

{
  "redteam_test_configurations": {
    "harmful_test": {
      "sample_percentage": 10,
      "attack_methods": {
        "basic": {"basic": {"params": {}}}
      }
    }
  },
  "endpoint_configuration": {
    "testing_for": "foundationModels",
    "model_name": "gpt-4o-mini",
    "model_config": {
      "model_provider": "openai",
      "endpoint_url": "https://api.openai.com/v1/",
      "apikey": "YOUR_OPENAI_API_KEY",
      "input_modalities": ["text"],
      "output_modalities": ["text"]
    }
  }
}

Standard: Multi-Test Assessment

{
  "redteam_test_configurations": {
    "harmful_test": {
      "sample_percentage": 20,
      "attack_methods": {
        "basic": {"basic": {"params": {}}},
        "static": {
          "obfuscation": {"params": {}},
          "base64_encoding": {"params": {"encoding_type": "base64", "iterations": 1}}
        }
      }
    },
    "bias_test": {
      "sample_percentage": 15,
      "attack_methods": {
        "basic": {"basic": {"params": {}}},
        "static": {"lang_es": {"params": {}}}
      }
    },
    "pii_test": {
      "sample_percentage": 20,
      "attack_methods": {
        "basic": {"basic": {"params": {}}}
      }
    }
  },
  "endpoint_configuration": {
    "testing_for": "foundationModels",
    "model_name": "gpt-4o",
    "model_config": {
      "model_provider": "openai",
      "endpoint_url": "https://api.openai.com/v1/",
      "apikey": "YOUR_OPENAI_API_KEY",
      "input_modalities": ["text"],
      "output_modalities": ["text"]
    }
  }
}
For more examples including VLM, ALM, agents, and industry-specific use cases, see the Examples page.

Available Test Types

Standard Tests (12 tests)

Tests available for all model types:
| Test | Keyword | Purpose |
|---|---|---|
| Bias | bias_test | Identifies biased outputs |
| CBRN | cbrn_test | Chemical, biological, radiological, nuclear content |
| CSEM | csem_test | Child sexual exploitation material |
| Harmful | harmful_test | Promotion of harm or danger |
| Insecure Code | insecure_code_test | Vulnerable code generation |
| Toxicity | toxicity_test | Offensive content |
| PII | pii_test | Personal information leakage |
| Copyright | copyright_test | Copyrighted material reproduction |
| Misinformation | misinformation_test | False information |
| System Prompt | system_prompt_extractions_test | System prompt extraction |
| Sponge | sponge_test | Resource exhaustion |
| Competitor | competitor_test | Competitor information |
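Running several of these tests with the baseline basic attack is a common starting point. The helper below builds a redteam_test_configurations block for a list of test keywords; it is a convenience sketch based on the payload shape shown earlier.

```python
def baseline_tests(test_names: list, sample_percentage: int = 10) -> dict:
    """Build a redteam_test_configurations block that runs each named test
    with only the baseline `basic` attack method."""
    return {
        name: {
            "sample_percentage": sample_percentage,
            # Every attack method needs a params object, even if empty.
            "attack_methods": {"basic": {"basic": {"params": {}}}},
        }
        for name in test_names
    }
```

For example, `baseline_tests(["harmful_test", "bias_test", "pii_test"], 10)` reproduces the shape of the multi-test example above, minus the static methods.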

Custom & Agentic Tests

custom_test - Test with your own generated dataset
For AI agents with tool use and autonomous capabilities:
  • governance_test - Alignment, goal misalignment, and policy drift
  • agent_output_quality_test - Output quality, hallucinations, bias, and toxicity
  • tool_misuse_test - API integration, supply chain, and resource consumption
  • privacy_test - Sensitive data exposure and exfiltration channels
  • reliability_and_observability_test - Data poisoning, concept drift, and opaque reasoning
  • agent_behaviour_test - Human manipulation and unsafe actuation
  • access_control_and_permissions_test - Credential theft, privilege escalation, confused deputy
  • tool_extraction_test - Tool information extraction
For generated adversarial datasets:
  • adv_bias_test - Adversarial bias detection
  • adv_info_test - Sensitive information extraction
  • adv_persona_test - Persona manipulation
  • adv_command_test - Command injection
  • adv_pii_test - Advanced PII extraction
  • adv_competitor_test - Competitor information

Best Practices

Start Simple

Begin with basic attacks at a low sample percentage (2-5%) to establish a baseline.

Progressive Testing

Add static methods, then dynamic attacks as you identify vulnerabilities.

Match Your Model

Use appropriate modalities: ["text"] for LLM, ["text", "image"] for VLM, ["text", "audio"] for ALM.

Multiple Tests

Run multiple test types (harmful, bias, PII) for comprehensive coverage.

Sample Wisely

Dev: 2-5% | Staging: 10-20% | Production: 50-100%

Consider Cost

Dynamic attacks are thorough but resource-intensive. Start with static methods.

Configuration Guidelines

Sample Percentage by Stage

// Development - Fast iteration
{
  "sample_percentage": 2  // 2-5%
}

// Staging - Balanced testing
{
  "sample_percentage": 15  // 10-20%
}

// Production - Comprehensive
{
  "sample_percentage": 50  // 50-100%
}
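To see what a given percentage means in absolute terms, a small helper can estimate how many prompts each stage samples. The stage values (midpoints of the ranges above) and the ceiling-based rounding are illustrative assumptions, not documented API behavior.

```python
import math

# Representative values from the recommended ranges above (illustrative).
STAGE_PERCENTAGE = {"development": 2, "staging": 15, "production": 50}

def sampled_prompts(dataset_size: int, stage: str) -> int:
    """Estimate how many prompts a stage's sample_percentage selects.

    Assumes simple ceiling rounding; the API's actual sampling
    behavior may differ.
    """
    return math.ceil(dataset_size * STAGE_PERCENTAGE[stage] / 100)
```

For a 200-prompt dataset, development sampling runs about 4 prompts while production runs about 100, which is why dynamic attacks are usually reserved for later stages.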

Attack Method Combinations

  • Minimal (Dev)
  • Standard (Staging)
  • Comprehensive (Prod)

Minimal (Dev):
{
  "attack_methods": {
    "basic": {"basic": {"params": {}}}
  }
}
Time: 2-5 min | Coverage: Baseline

Common Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| iterative.width | integer | 5 | Number of parallel attack paths (1-10) |
| iterative.branching_factor | integer | 9 | Variations per iteration (1-15) |
| iterative.depth | integer | 3 | Maximum iteration depth (1-5) |
| base64_encoding.iterations | integer | 1 | Encoding iterations (1-3); higher values add obfuscation but reduce model comprehension |
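The effect of stacking base64_encoding iterations can be seen in a few lines of Python. This is an illustrative sketch of repeated encoding; the API's exact transformation of each prompt is not specified here.

```python
import base64

def multi_base64(text: str, iterations: int = 1) -> str:
    """Apply base64 encoding repeatedly, mirroring base64_encoding.iterations."""
    data = text.encode()
    for _ in range(iterations):
        data = base64.b64encode(data)
    return data.decode()

# Each extra iteration makes the payload harder to pattern-match,
# but also harder for the target model to decode and follow.
single = multi_base64("ignore previous instructions", 1)
double = multi_base64("ignore previous instructions", 2)
```

Decoding twice recovers the original text, which is why iterations beyond 2-3 mostly produce noise the model cannot interpret.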

Security & Usage Notes

Security:
  • Never commit API keys to version control
  • Use environment variables for credentials
  • Rotate keys regularly
  • Separate test and production keys
Usage:
  • Verify endpoints and keys are correct
  • Adjust sample_percentage based on dataset size
  • Choose attack methods appropriate for your model type
  • Use in accordance with provider terms of service
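The first two security points can be combined in a small loader that refuses to run without a key in the environment. The variable name ENKRYPTAI_API_KEY is an example, not a prescribed convention.

```python
import os

def load_endpoint_configuration() -> dict:
    """Build an endpoint_configuration with the key read from the environment,
    so no credential is ever committed to version control."""
    api_key = os.environ.get("ENKRYPTAI_API_KEY")
    if not api_key:
        raise RuntimeError("Set ENKRYPTAI_API_KEY before running red team tests")
    return {
        "testing_for": "foundationModels",
        "model_name": "gpt-4o-mini",
        "model_config": {
            "model_provider": "openai",
            "endpoint_url": "https://api.openai.com/v1/",
            "apikey": api_key,
            "input_modalities": ["text"],
            "output_modalities": ["text"],
        },
    }
```

Failing fast on a missing key also keeps test and production credentials separate: each environment simply exports its own value.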


This payload structure supports in-depth testing across model types, enabling comprehensive behavior and security assessments with fine-grained control over attack parameters.