AI Compliance Glossary

Shared language for legal, product, and ops teams. Definitions align with how XIRA tags regulations.

Last reviewed April 8, 2026. For deeper walkthroughs, see the obligation guides.

A

  • Algorithmic discrimination

    Differential treatment or impact caused by an automated decision-making system that results in unlawful discrimination based on protected characteristics such as race, gender, age, or disability.

  • Automated decision-making (ADM)

    Any process where a computational system makes or substantially contributes to a decision affecting a person, with limited or no human involvement at the point of decision.

B

  • Bias audit

    An independent evaluation of an automated system to assess whether it produces disparate impact on protected groups. Required annually under NYC Local Law 144 for covered automated employment decision tools.
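
    The core metric in these audits is an impact ratio: each group's selection rate divided by the selection rate of the most-selected group (ratios well below 1.0 suggest disparate impact). A minimal sketch of that calculation, using hypothetical group labels and numbers rather than audit-grade tooling:

    ```python
    # Illustrative impact-ratio calculation in the style of a NYC Local Law 144
    # bias audit. Group names and counts are hypothetical.

    def selection_rates(outcomes):
        """outcomes: {group: (selected, scored)} -> {group: selection rate}"""
        return {g: sel / tot for g, (sel, tot) in outcomes.items()}

    def impact_ratios(outcomes):
        """Each group's selection rate divided by the highest group's rate."""
        rates = selection_rates(outcomes)
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    # Hypothetical hiring-tool outcomes: (candidates selected, candidates scored)
    outcomes = {
        "group_a": (60, 100),  # 60% selection rate
        "group_b": (30, 100),  # 30% selection rate
    }

    ratios = impact_ratios(outcomes)
    # group_a: 0.6 / 0.6 = 1.0; group_b: 0.3 / 0.6 = 0.5
    ```

    A real audit must follow the published rules on category definitions, data sufficiency, and reporting; this sketch only shows the arithmetic.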

C

  • Consequential decision

    A decision that materially affects a person's access to employment, housing, credit, insurance, education, or other significant opportunities. Many AI regulations apply only to AI used in consequential decisions.

D

  • Deployer

    A company that uses an AI system built by someone else. Distinguished from a developer (who builds the AI). Most state AI laws impose different obligations on deployers versus developers.

  • Developer

    A company that builds and distributes an AI system for others to use. Developers typically have model documentation, bias testing disclosure, and customer notification obligations.

H

  • High-risk AI system

    An AI system used to make or substantially assist in consequential decisions about people. The specific definition varies by jurisdiction, but generally includes AI used in hiring, lending, insurance, housing, and criminal justice.

I

  • Impact assessment

    A structured document evaluating the risks and benefits of deploying an AI system. Typically covers data inputs, decision outputs, affected populations, potential harms, and mitigation measures.

M

  • Model card

    A standardized document describing an AI model's purpose, performance metrics, training data, known limitations, and intended use cases. Originated in the 2019 research paper "Model Cards for Model Reporting" by Google researchers and is now referenced in several state laws.

O

  • Opt-out mechanism

    A process allowing individuals to decline AI-driven decision-making and request human review instead. Required by several privacy laws with automated decision-making provisions.

P

  • Profiling

    Any form of automated processing that evaluates personal aspects of an individual, such as work performance, economic situation, health, preferences, interests, reliability, behavior, location, or movements.

  • Protected class

    A group of people legally protected from discrimination. Includes race, color, national origin, sex, religion, age, disability, and other characteristics. AI impact assessments must evaluate effects on all applicable protected classes.

R

  • Rebuttable presumption

    A presumption a court treats as true unless rebutted by contrary evidence. Under Colorado SB 24-205, a developer or deployer that follows the law's prescribed compliance steps is presumed to have used reasonable care, though that presumption can still be rebutted in an enforcement action.

  • Right to human review

    The right of an individual affected by an AI-driven decision to have that decision reviewed by a human with authority to override the AI's output.

S

  • Shadow AI

    AI tools used within a company without the knowledge or approval of IT, compliance, or management. Poses compliance risk because untracked AI systems may trigger regulatory obligations the company is unaware of.

T

  • Transparency notice

    A disclosure informing individuals about the use of AI in decisions affecting them. Content requirements vary by jurisdiction but typically include what the AI does, what data it uses, and how to opt out.