Access the Complete Glossary

View the Full AI Governance, Risk, and Compliance Glossary → Explore detailed definitions, examples, and context for hundreds of AI security, governance, and compliance terms and concepts.

AI Governance, Risk, and Compliance Glossary

Explore comprehensive definitions of must-know terms in the AI security, governance, and compliance industry.

Core AI Concepts

Artificial Intelligence (AI)

Artificial Intelligence (AI) is a field of computer science and engineering that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. AI involves developing algorithms, computer programs, and systems that can learn from data and make decisions or predictions based on that learning.

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) refers to AI systems that can perform any cognitive task a human could.

Transformative AI (TAI)

Transformative AI defines AI Systems by their consequences rather than their capabilities.

AI Safety and Security

AI Safety

AI Safety is measures taken to ensure that AI systems operate within defined boundaries and do not cause harm to individuals or society.

AI Security

AI Security refers to the practices and technologies designed to protect artificial intelligence systems from threats, vulnerabilities, and malicious attacks. It encompasses safeguarding data integrity, model confidentiality, and system availability.

AI Guardrails

AI Guardrails refer to a framework of guidelines, safety protocols, and ethical standards designed to ensure the responsible and safe deployment of artificial intelligence systems.

AI Alignment

AI Alignment refers to the field of study focused on ensuring that artificial intelligence systems operate in accordance with human values, intentions, and ethical standards.

Safety Alignment of AI

Safety Alignment of AI refers to the process of ensuring that artificial intelligence systems operate in accordance with human values, ethics, and safety standards.

AI Governance and Compliance

AI Governance

AI Governance refers to the framework and practices that guide the development, deployment, and management of artificial intelligence systems. It encompasses policies, procedures, and oversight mechanisms.

AI Policy

AI policy refers to rules, regulations, and guidelines designed by some form of governing authority (private or public) that govern the development, deployment, and use of AI technologies.

AI Policy Compliance

AI Policy Compliance refers to the adherence to regulations, guidelines, and ethical standards governing the use of artificial intelligence technologies. This encompasses ensuring systems meet legal and ethical requirements.

AI Law

AI Law refers to the body of law related to court cases, including Artificial Intelligence adjudication. It encompasses various legal issues related to the development, deployment, and use of AI systems, including intellectual property, data & privacy, liability, consumer protection, antitrust issues, human oversight, human rights, and ethics.

Compliance-Aware AI

Compliance-Aware AI refers to artificial intelligence systems designed to operate within regulatory frameworks, ensuring adherence to legal standards and industry guidelines.

AI Risk Management

AI Risk

AI Risk refers to the potential for negative consequences arising from the development and deployment of AI systems, including bias, discrimination, cybersecurity threats, privacy violations, and safety concerns.

AI Risk Management

AI Risk Management is the process of identifying, assessing, and mitigating risks in AI development and deployment. It ensures compliance with regulations and safeguards against ethical, legal, and operational risks.

Compliance Risk

Compliance risk refers to the potential for financial loss, legal penalties, and reputational damage that organizations face when they fail to adhere to laws, regulations, and internal policies.

Trust Risk

AI Trust Risk refers to the potential loss of confidence and trust from stakeholders in AI Systems due to biases in the data used to train the AI system, errors in the algorithmic decision-making process, lack of transparency in how decisions are made, or unethical or illegal uses of the technology.

AI Red Teaming and Testing

AI Red Teaming

AI Red Teaming is a proactive security practice that involves simulating cyberattacks on artificial intelligence systems to identify vulnerabilities, weaknesses, and potential threats.

Credible AI Red Teaming

Credible AI Red Teaming refers to the practice of rigorously testing artificial intelligence systems by simulating adversarial attacks and identifying vulnerabilities.

AI Vulnerability Testing

AI Vulnerability Testing is a systematic process designed to identify and evaluate weaknesses in artificial intelligence systems. This critical assessment involves simulating attacks, analyzing system responses, and identifying potential security gaps.

Bias and Fairness

Bias

Social bias refers to human-created biases, such as stereotypes, that may be reflected in AI systems. Statistical bias refers to the systematic error in an AI system’s predictions that arise from biased data or algorithms.

Bias Detection

Bias detection refers to the systematic identification of prejudiced or unfair tendencies in data, algorithms, or decision-making processes. It involves analyzing datasets and models to uncover biases that may lead to discriminatory outcomes.

Bias Mitigation

Bias mitigation refers to the strategies and techniques employed to identify and reduce biases in data, algorithms, and decision-making processes. This practice is essential for ensuring fair and equitable AI systems.

Security and Threats

Adversarial Attacks

Adversarial attacks are deliberate manipulations of input data designed to deceive machine learning models, particularly in fields like computer vision and natural language processing.

Adversarial Machine Learning

Adversarial Machine Learning is a subfield of artificial intelligence focused on developing models that can withstand attacks from malicious inputs. This discipline explores techniques for making AI systems more robust against adversarial manipulation.

Backdoor Attacks

Backdoor attacks are a form of cybersecurity threat where an attacker secretly creates a hidden entry point into a system, allowing unauthorized access to bypass normal authentication methods.

Black Box Attacks

Black Box Attacks refer to a type of adversarial machine learning exploit where an attacker manipulates input data to deceive a model without knowing its internal structures or parameters.

Data Poisoning

Data poisoning is a type of adversarial attack where malicious actors intentionally introduce false or misleading data into a machine learning model’s training set.

Privacy and Data Protection

Data Privacy in AI

Data Privacy in AI refers to the protection of personal and sensitive information during the collection, processing, and storage phases of artificial intelligence systems.

Data Integrity

Data Integrity refers to the accuracy, consistency, and reliability of data throughout its lifecycle. It ensures that data remains unaltered during storage, retrieval, and processing, maintaining its authenticity and quality.

Data Provenance

Data Provenance refers to the documentation and tracking of the origins, history, and transformations of data throughout its lifecycle. It encompasses the processes, people, and systems involved in data creation and modification.

User Privacy in AI

User Privacy in AI refers to the protection of personal information and data collected by artificial intelligence systems. It encompasses practices and technologies designed to ensure that user data is handled securely, transparently, and ethically.

Transparency and Accountability

Transparency

Transparency in AI is not only about explainability, but also about ensuring that people comprehend what your AI System does and how it does it.

Transparent AI Decision-Making

Transparent AI decision-making refers to the practice of ensuring that artificial intelligence systems operate in a clear and understandable manner, allowing stakeholders to see how decisions are made.

Accountability

Accountability refers to the attribute of being responsible for the actions and decisions of AI, as well as the impact AI systems have on individuals and society.

Transparency Report

A transparency report is a broad category of artifacts that are produced about AI systems that provides transparency about how they work and what potential risks/harms they pose; the term “transparency report” is a category rather than referring to one specific type of artifact.

Trust and Reliability

Trust

Trust refers to the confidence and belief that individuals or entities have in the reliability, integrity, and capability of a system or a person. In AI, trust is crucial because it directly impacts user adoption, satisfaction, and overall success of AI systems.

Trustworthy AI

Trustworthy AI refers to artificial intelligence systems that are designed to be reliable, ethical, and transparent. These systems prioritize user safety, privacy, and fairness while ensuring accountability in their decision-making processes.

Robustness

Robustness refers to the ability of an algorithm or model to maintain its performance and accuracy under different conditions, such as changes in the input data, noise or outliers in the data, or attacks designed to manipulate or disrupt the model’s behavior.

Advanced Security Concepts

Zero Trust AI Frameworks

Zero Trust AI Frameworks are security models that operate under the principle of “never trust, always verify.” They leverage artificial intelligence to continuously assess and authenticate users, devices, and applications within an organization’s network.

Confidential Computing

Confidential Computing is a cutting-edge technology designed to enhance data privacy and security by keeping sensitive data encrypted and secure while in use.

Secure Multi-Party Computation for AI

Secure Multi-Party Computation (SMPC) for AI is a cryptographic technique that enables multiple parties to collaboratively compute functions over their private data without revealing the data itself.

Watermarking AI-Generated Content

Watermarking AI-generated content refers to the practice of embedding identifiable marks or symbols within digital media created by artificial intelligence. This process serves to protect intellectual property, ensure authenticity, and deter unauthorized use or reproduction.
Our glossary is continuously updated to reflect the evolving landscape of AI security, governance, and compliance terminology.