Explore the Complete Risk Framework
View the detailed AI risk categories to access comprehensive descriptions, examples, and mitigation strategies for each of the 300+ risk sub-categories and build robust AI safety programs.
Understanding the Landscape of AI Risks
Enkrypt AI maps safety risks into six main categories and 300 sub-categories for comprehensive red teaming that detects and mitigates real-world harm across sensitive use cases.
Six Main Risk Categories
Criminal Planning
Comprehensive detection of criminal activities and planning:
- Phishing - Email and social engineering attacks
- Ransomware - Malicious software that encrypts data
- Identity Theft - Unauthorized use of personal information
- Data Breach - Unauthorized access to sensitive data
- Social Engineering - Manipulation tactics to gain access
- Crypto-jacking - Unauthorized cryptocurrency mining
- Botnets - Networks of compromised devices
- Cyber-espionage - Intelligence gathering through cyber means
- DDoS Attacks - Distributed denial of service attacks
- Pharming - Redirecting users to malicious sites
Guns and Illegal Substances
Detection of weapons and controlled substances:
- Biological Weapons - Manufacturing and safety protocols
- Chemical Explosives - Explosive materials and devices
- Illegal Firearms - Unauthorized weapons and trafficking
- Radioactive Materials - Nuclear and radiological threats
- Narcotics - Illegal drug manufacturing and distribution
- Synthetic Drugs - Laboratory-produced substances
Hate Speech and Discrimination
Identification of discriminatory content and behavior:
- Body Shaming - Discrimination based on physical appearance
- Ableism - Discrimination against people with disabilities
- Racial Discrimination - Ethnic slurs and stereotypes
- LGBTQ+ Discrimination - Homophobia and hate crimes
- Classism - Discrimination based on socioeconomic status
- Religious Intolerance - Discrimination against religious groups
- Misogyny - Gender-based discrimination and violence
Regulated/Controlled Substances
Monitoring of legal but controlled substances:
- Alcohol - Underage drinking and alcohol-related crimes
- Cannabis - Illegal distribution and possession
- Prescription Drugs - Opioids and controlled medications
- Tobacco - Underage use and illegal sales
- Synthetic Substances - Laboratory-produced drugs
Sexual Content
Detection of inappropriate sexual content:
- Sexual Harassment - Unwanted advances and comments
- Pornographic Content - Explicit and adult material
- Sexual Violence - Content depicting sexual assault
- Revenge Porn - Non-consensual intimate content
- Voyeurism - Unauthorized recording of intimate moments
Suicide and Self-Harm
Identification of self-harm and suicidal content:
- Suicidal Ideation - Thoughts and plans of self-harm
- Self-Injury - Cutting and other self-harm methods
- Eating Disorders - Anorexia and bulimia content
- Substance Abuse - Drug and alcohol-related self-harm
- Mental Health Crisis - Content promoting self-harm
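The six categories above form a two-level hierarchy: each main category groups its own set of sub-categories. The sketch below shows one minimal way such a mapping could be represented in Python; the category and sub-category names mirror this page, but the structure itself is an illustrative assumption, not Enkrypt AI's actual schema.

```python
# Illustrative only: a two-level mapping from the six main risk categories
# to a few of the sub-categories named on this page. The structure is an
# assumption for illustration, not Enkrypt AI's actual schema.
RISK_TAXONOMY = {
    "criminal_planning": ["phishing", "ransomware", "identity_theft", "ddos_attacks"],
    "guns_and_illegal_substances": ["biological_weapons", "illegal_firearms", "narcotics"],
    "hate_speech_and_discrimination": ["ableism", "racial_discrimination", "misogyny"],
    "regulated_controlled_substances": ["alcohol", "cannabis", "prescription_drugs"],
    "sexual_content": ["sexual_harassment", "revenge_porn", "voyeurism"],
    "suicide_and_self_harm": ["suicidal_ideation", "self_injury", "eating_disorders"],
}

def sub_categories(main_category: str) -> list[str]:
    """Return the sub-categories tracked under a given main risk category."""
    return RISK_TAXONOMY.get(main_category, [])
```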
Risk Assessment Framework
Detection Capabilities
- Real-time Analysis - Instant detection of risk categories
- Context Understanding - Nuanced interpretation of content
- Multi-language Support - Detection across different languages
- Cultural Sensitivity - Context-aware risk assessment
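As a rough illustration of how these capabilities might be exercised from application code, the sketch below uses a toy keyword-based detector that returns a per-category risk score. The function name, keyword lists, and scoring scheme are assumptions for illustration only; a production detector would rely on trained models with context understanding and multi-language support rather than keyword matching.

```python
# Illustrative only: a toy keyword detector standing in for a real classifier.
RISK_KEYWORDS = {
    "criminal_planning": ["phishing kit", "ransomware payload"],
    "suicide_and_self_harm": ["ways to hurt myself"],
}

def detect_risks(text: str) -> dict[str, float]:
    """Return an illustrative 0.0-1.0 risk score per main category."""
    lowered = text.lower()
    return {
        category: 1.0 if any(keyword in lowered for keyword in keywords) else 0.0
        for category, keywords in RISK_KEYWORDS.items()
    }

print(detect_risks("Where can I buy a phishing kit?"))
# {'criminal_planning': 1.0, 'suicide_and_self_harm': 0.0}
```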
Mitigation Strategies
- Content Filtering - Automatic blocking of harmful content
- Risk Scoring - Quantitative assessment of threat levels
- Alert Systems - Immediate notification of high-risk content
- Compliance Reporting - Detailed documentation for audits
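These strategies can be combined into a simple moderation flow: score the content, block it if it crosses a filtering threshold, and raise an alert for anything that warrants review. The sketch below is a minimal, hypothetical illustration of that flow; the thresholds, function name, and logging-based alerts are assumptions, not part of the Enkrypt AI product.

```python
import logging

BLOCK_THRESHOLD = 0.8   # assumed cut-off for automatic content filtering
ALERT_THRESHOLD = 0.5   # assumed cut-off for notifying reviewers

def moderate(risk_score: float) -> str:
    """Map a pre-computed risk score to a filtering decision and raise alerts."""
    if risk_score >= BLOCK_THRESHOLD:
        logging.warning("Blocked high-risk content (score=%.2f)", risk_score)
        return "blocked"
    if risk_score >= ALERT_THRESHOLD:
        logging.info("Flagged content for human review (score=%.2f)", risk_score)
        return "flagged"
    return "allowed"
```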
Risk assessment is an ongoing process that should be integrated throughout the AI development lifecycle.