ISO/IEC 22989 - Artificial Intelligence - Concepts and Terminology

Summary

ISO/IEC 22989 serves as the foundational dictionary for the AI standards ecosystem, establishing a common vocabulary that enables consistent communication across industries, regulators, and technical communities. Unlike technical implementation guides, this standard focuses on defining what we actually mean when we discuss AI concepts like "explainability," "robustness," and "transparency" - terms that are often used loosely but need precise definitions for effective governance. Published in 2022, it provides the conceptual bedrock that other AI standards and regulations reference, making it essential reading for anyone involved in AI policy, compliance, or standardization efforts.

The Language Problem This Solves

AI governance suffers from a Tower of Babel syndrome - everyone talks about "ethical AI" and "trustworthy systems," but often means different things. ISO/IEC 22989 addresses this by establishing authoritative definitions for over 100 AI-related terms. For example, it distinguishes between "explainability" (the ability to understand AI system behavior) and "interpretability" (the degree to which humans can consistently predict model results), clarifying concepts that are frequently conflated in policy discussions.

The standard also introduces a structured taxonomy of AI system properties, organizing concepts like safety, security, and privacy into coherent relationships rather than treating them as isolated requirements. This systematic approach helps organizations avoid the common pitfall of implementing disconnected AI governance measures.
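
To make the idea of "coherent relationships" concrete, here is a minimal sketch of how an organization might record concepts and their links in an internal glossary. The standard itself does not prescribe any data model; the Concept class, field names, relationships, and working definitions below are hypothetical placeholders, not ISO/IEC 22989 text.

from dataclasses import dataclass, field

@dataclass
class Concept:
    """One AI governance concept tracked in an internal glossary."""
    name: str
    working_definition: str  # your organization's wording, not ISO text
    related_to: set[str] = field(default_factory=set)

# Illustrative entries only; the groupings are hypothetical, not the
# standard's clause structure.
glossary = {
    "transparency": Concept(
        name="transparency",
        working_definition="Stakeholders can access appropriate information about the system.",
        related_to={"explainability", "accountability"},
    ),
    "explainability": Concept(
        name="explainability",
        working_definition="System behaviour can be expressed in terms humans can understand.",
        related_to={"transparency", "interpretability"},
    ),
}

def related_concepts(term: str) -> set[str]:
    """Return the concepts a glossary entry declares itself related to."""
    entry = glossary.get(term)
    return entry.related_to if entry else set()

print(related_concepts("transparency"))  # {'explainability', 'accountability'}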

What You'll Find Inside

The standard organizes AI concepts into several key domains:

Core AI Terminology: Definitions for fundamental concepts like machine learning, neural networks, and algorithmic bias, ensuring consistent understanding across different contexts.

System Properties: Detailed explanations of critical AI characteristics including transparency, accountability, fairness, and reliability - the building blocks of trustworthy AI systems.

Lifecycle Concepts: Terms related to AI development, deployment, and maintenance phases, providing vocabulary for process-oriented discussions.

Risk and Impact Categories: Standardized language for discussing AI-related risks, enabling more precise risk assessment and mitigation planning.

Stakeholder Roles: Clear definitions of different parties involved in AI systems, from developers to deployers to affected individuals.

Who This Resource Is For

Standards Bodies and Regulators developing AI-specific requirements need this common vocabulary to ensure their rules can be interpreted consistently across jurisdictions and industries.

Legal and Compliance Teams working on AI governance will find essential definitions that help translate between technical concepts and legal requirements, particularly when drafting policies or interpreting regulations like the EU AI Act.

Technical Leaders and Architects designing AI systems can use these standardized concepts to communicate more effectively with business stakeholders and ensure their technical decisions align with governance objectives.

Procurement and Vendor Management Teams evaluating AI solutions benefit from standardized terminology when writing RFPs, evaluating proposals, and establishing contractual requirements.

Academic Researchers and Policy Analysts studying AI governance can reference these definitions to ensure their work builds on established conceptual foundations rather than reinventing terminology.

How This Connects to Real-World AI Governance

While ISO/IEC 22989 doesn't prescribe specific implementation approaches, its definitions directly support practical governance activities. When the EU AI Act imposes "transparency obligations," this standard offers a reference point for what transparency means in an AI context, even though the Act defines its own legal terms. Similarly, when organizations implement AI risk management frameworks like the NIST AI RMF, standardized definitions help ensure consistent application across different teams and business units.

The standard also enables more effective benchmarking and assessment. When vendors claim their AI systems are "explainable" or "robust," ISO/IEC 22989 provides the reference definitions that procurement teams can use to evaluate these claims objectively.

Getting Maximum Value

Start by identifying the AI-related terms your organization uses most frequently in policies, contracts, or technical discussions. Look up the ISO/IEC 22989 definitions and compare them to your current understanding - you may discover important nuances that affect how you should interpret requirements or design systems.
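
As a rough illustration of that comparison step, the sketch below tracks which internal glossary entries have already been reconciled with the ISO/IEC 22989 definition and flags the rest for review. The term lists and definitions are placeholders, not content from the standard.

# Minimal sketch: compare an internal glossary against the terms already
# reconciled with ISO/IEC 22989. All terms and definitions here are
# placeholders for illustration.
internal_glossary = {
    "explainability": "Model decisions can be justified to auditors.",
    "robustness": "The model tolerates noisy or adversarial inputs.",
    "AI safety": "The system avoids causing harmful outcomes.",
}

# Terms whose internal wording has been checked against the standard
# (a hypothetical tracking set your team would maintain).
reconciled_with_iso_22989 = {"explainability", "robustness"}

for term in internal_glossary:
    if term not in reconciled_with_iso_22989:
        print(f"Review needed: '{term}' has not been reconciled with ISO/IEC 22989")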

Use the standard as a reference when reviewing AI-related contracts, standards, or regulations. When you encounter terms like "algorithmic transparency" or "AI safety," check whether the specific context aligns with the ISO/IEC definitions or uses different interpretations that could create compliance gaps.
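
A simple reviewer's aid along these lines might scan a contract or policy for watched terms so each occurrence can be checked against the reference definition. The watch list and sample clause below are purely illustrative.

import re

# Sketch of a reviewer's aid: flag governance terms in a contract or policy
# so each occurrence can be checked against the ISO/IEC 22989 definition.
# The watch list and sample clause are illustrative, not exhaustive.
WATCH_TERMS = ["algorithmic transparency", "AI safety", "explainability", "robustness"]

def flag_terms(document_text: str) -> dict[str, int]:
    """Count occurrences of watched terms so reviewers know where to look."""
    counts = {}
    for term in WATCH_TERMS:
        hits = len(re.findall(re.escape(term), document_text, flags=re.IGNORECASE))
        if hits:
            counts[term] = hits
    return counts

sample_clause = (
    "The vendor shall ensure algorithmic transparency and demonstrate the "
    "robustness of the delivered AI system."
)
print(flag_terms(sample_clause))  # {'algorithmic transparency': 1, 'robustness': 1}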

Consider adopting ISO/IEC 22989 terminology in your organization's AI governance documentation. This creates consistency with emerging international standards and makes it easier to demonstrate compliance with regulations that reference these concepts.

Tags

AI terminology, AI concepts, transparency, explainability, robustness, AI standards

At a glance

Published: 2022
Jurisdiction: Global
Category: Standards and certifications
Access: Paid access
