
Governance Frameworks

NIST AI RMF, EU AI Act, AI Bill of Rights, ISO 42001, industry standards, and compliance checklists


AI Governance Frameworks

As AI deployment accelerates, governments and standards bodies worldwide are creating frameworks to ensure AI is safe, fair, and trustworthy. Understanding these frameworks is essential for any organization building or deploying AI systems.

Governance vs Ethics

Ethics tells us what we SHOULD do; governance tells us what we MUST do and how to verify compliance. Governance frameworks translate ethical principles into concrete processes, requirements, and accountability structures that organizations can implement and audit.

NIST AI Risk Management Framework (AI RMF)

The National Institute of Standards and Technology released the AI RMF in January 2023 as a voluntary framework for managing AI risks. It is structured around four core functions:

1. GOVERN

Establish organizational structures, policies, and cultures for AI risk management.
  • Define roles and responsibilities
  • Establish risk tolerance levels
  • Create oversight mechanisms
  • Foster a culture of responsible AI
2. MAP

Contextualize and identify AI risks in your specific deployment environment.
  • Identify intended use cases and known limitations
  • Map stakeholders and potential impacts
  • Assess the operational environment
  • Document assumptions and constraints
3. MEASURE

Quantify, track, and benchmark AI risks using metrics and tests.
  • Establish performance metrics (including fairness metrics)
  • Conduct bias testing and red-teaming
  • Evaluate robustness and security
  • Track risks over time
4. MANAGE

Prioritize and address identified risks through mitigation, monitoring, and governance.
  • Implement risk mitigations
  • Define incident response procedures
  • Establish monitoring and alerting
  • Plan for model updates and retirement
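The MEASURE function calls for concrete, trackable metrics. As a minimal sketch (the helper names and the 0.1 threshold are illustrative, not part of any NIST tooling), here is one common fairness metric an organization might track: the demographic parity gap between two groups' selection rates.

```python
# Illustrative sketch: one bias metric the MEASURE function might track.
# selection_rate and demographic_parity_difference are hypothetical helpers.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'hire') decisions in a group."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two demographic groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# 1 = positive decision, 0 = negative decision
group_a = [1, 1, 0, 1, 0, 1, 0, 1]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")

# The acceptable gap is a risk-tolerance decision made under GOVERN;
# 0.1 here is purely illustrative.
threshold = 0.1
print("Within tolerance" if gap <= threshold else "Exceeds tolerance")
```

In practice this metric would be computed per protected attribute and tracked over time, feeding back into the MANAGE function when a threshold is breached.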
EU AI Act

The European Union AI Act (effective August 2024, with phased enforcement through 2027) is the world's first comprehensive AI regulation. It takes a risk-based approach:

Risk Levels

  • Unacceptable: social scoring, real-time biometric surveillance, manipulation of vulnerable groups. Banned outright.
  • High-Risk: hiring, credit scoring, medical devices, law enforcement, education. Requires conformity assessment, risk management, human oversight, and transparency.
  • Limited Risk: chatbots, deepfakes, emotion recognition. Transparency requirements (users must know they are interacting with AI).
  • Minimal Risk: spam filters, AI in games, recommendation systems. No specific requirements (voluntary codes of conduct).
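A first-pass triage of a use case against these tiers can be sketched as a keyword lookup, checked from strictest tier down. This is purely illustrative: the keyword lists are not exhaustive, and real classification requires legal review.

```python
# Hypothetical sketch: first-pass triage against the EU AI Act risk tiers.
# Keyword lists are illustrative, not an exhaustive legal classification.

RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time biometric surveillance"},
    "high": {"hiring", "credit scoring", "medical device",
             "law enforcement", "education"},
    "limited": {"chatbot", "deepfake", "emotion recognition"},
}

def triage_risk_tier(use_case: str) -> str:
    """Return the first matching tier, defaulting to minimal risk."""
    text = use_case.lower()
    for tier in ("unacceptable", "high", "limited"):  # strictest first
        if any(keyword in text for keyword in RISK_TIERS[tier]):
            return tier
    return "minimal"

print(triage_risk_tier("Chatbot for customer support"))  # limited
print(triage_risk_tier("Hiring decision support tool"))  # high
print(triage_risk_tier("Spam filter"))                   # minimal
```

Checking the strictest tier first matters: a hiring chatbot should be triaged as high-risk, not limited-risk.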

High-Risk AI Requirements

1. Risk management system throughout the AI lifecycle
2. Data governance: training data must be relevant, representative, and free of errors
3. Technical documentation: detailed system description
4. Record-keeping: automatic logging of system operation
5. Transparency: users informed they are interacting with AI
6. Human oversight: humans can monitor and override the system
7. Accuracy, robustness, and cybersecurity standards
8. Conformity assessment: third-party or self-assessed depending on category
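Requirement 4 (record-keeping) can be sketched as an append-only decision log with timestamps that auditors can export. The event schema and class name below are illustrative assumptions, not a prescribed format.

```python
# Hypothetical sketch of the record-keeping requirement: automatically log
# each system operation with a timestamp. The event schema is illustrative.

import json
from datetime import datetime, timezone

class OperationLog:
    """Append-only in-memory log of AI system decisions."""

    def __init__(self):
        self.records = []

    def log_decision(self, input_id: str, output: str, model_version: str):
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input_id": input_id,
            "output": output,
            "model_version": model_version,
        })

    def export(self) -> str:
        """Serialize records for auditors (e.g., a conformity assessment)."""
        return json.dumps(self.records, indent=2)

log = OperationLog()
log.log_decision("resume-00421", "shortlist", "screening-model-v3.2")
print(log.export())
```

A production system would write to durable, tamper-evident storage rather than memory, but the principle is the same: every operation leaves an auditable trace.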

EU AI Act Penalties

Non-compliance with the EU AI Act can result in fines of up to 35 million euros or 7% of global annual turnover (whichever is higher) for prohibited AI practices. Even supplying incorrect information to authorities can incur fines of up to 7.5 million euros or 1% of turnover.
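The "whichever is higher" rule is simple arithmetic: the fine cap is the maximum of the fixed amount and the turnover percentage. The turnover figure below is hypothetical.

```python
# The penalty cap is the higher of a fixed amount and a share of turnover.

def max_fine(turnover_eur: float, fixed_cap_eur: float,
             pct_of_turnover: float) -> float:
    return max(fixed_cap_eur, turnover_eur * pct_of_turnover)

# Prohibited-practice tier: 35M EUR or 7% of turnover, whichever is higher.
turnover = 2_000_000_000  # hypothetical 2B EUR global annual turnover
fine = max_fine(turnover, 35_000_000, 0.07)
print(f"Maximum fine: {fine:,.0f} EUR")  # 7% of 2B = 140M, exceeds 35M
```

For large firms the percentage dominates: at 2 billion euros of turnover, 7% is 140 million euros, four times the fixed cap.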

Blueprint for an AI Bill of Rights (US)

Released by the White House Office of Science and Technology Policy in October 2022, this non-binding framework outlines five principles:

1. Safe and Effective Systems: AI should be tested and monitored for safety
2. Algorithmic Discrimination Protections: systems should not discriminate
3. Data Privacy: people should have control over their data
4. Notice and Explanation: people should know when AI is used and understand how decisions are made
5. Human Alternatives: people should be able to opt out of AI and access a human

ISO/IEC 42001: AI Management System

Published in 2023, ISO/IEC 42001 is the first international management system standard for AI. It provides a certifiable framework (similar to ISO 27001 for information security) that covers:

  • AI governance and leadership commitment
  • Risk assessment and treatment
  • AI system lifecycle management
  • Data management
  • Performance evaluation and continuous improvement
  • Supplier and third-party management
```python
# AI Governance Compliance Checker
# Maps your AI system against major governance frameworks

from dataclasses import dataclass, field
from typing import List, Dict
from enum import Enum

class ComplianceStatus(Enum):
    COMPLIANT = "Compliant"
    PARTIAL = "Partially Compliant"
    NON_COMPLIANT = "Non-Compliant"
    NOT_APPLICABLE = "N/A"

@dataclass
class ComplianceCheck:
    requirement: str
    framework: str
    status: ComplianceStatus
    evidence: str
    remediation: str = ""

@dataclass
class GovernanceAudit:
    system_name: str
    risk_level: str  # "minimal", "limited", "high", "unacceptable"
    checks: List[ComplianceCheck] = field(default_factory=list)

    def add_check(self, requirement: str, framework: str,
                  status: ComplianceStatus, evidence: str,
                  remediation: str = ""):
        self.checks.append(ComplianceCheck(
            requirement=requirement,
            framework=framework,
            status=status,
            evidence=evidence,
            remediation=remediation,
        ))

    def compliance_score(self) -> float:
        if not self.checks:
            return 0.0
        applicable = [c for c in self.checks
                      if c.status != ComplianceStatus.NOT_APPLICABLE]
        if not applicable:
            return 100.0
        compliant = sum(1 for c in applicable
                        if c.status == ComplianceStatus.COMPLIANT)
        partial = sum(1 for c in applicable
                      if c.status == ComplianceStatus.PARTIAL)
        return ((compliant + 0.5 * partial) / len(applicable)) * 100

    def print_report(self):
        print(f"\n{'=' * 60}")
        print(f"GOVERNANCE AUDIT: {self.system_name}")
        print(f"Risk Level: {self.risk_level.upper()}")
        print(f"Compliance Score: {self.compliance_score():.1f}%")
        print(f"{'=' * 60}")

        by_framework: Dict[str, List[ComplianceCheck]] = {}
        for check in self.checks:
            by_framework.setdefault(check.framework, []).append(check)

        for fw, checks in by_framework.items():
            print(f"\n--- {fw} ---")
            for c in checks:
                icon = {
                    ComplianceStatus.COMPLIANT: "[PASS]",
                    ComplianceStatus.PARTIAL: "[WARN]",
                    ComplianceStatus.NON_COMPLIANT: "[FAIL]",
                    ComplianceStatus.NOT_APPLICABLE: "[ NA ]",
                }[c.status]
                print(f"  {icon} {c.requirement}")
                print(f"        Evidence: {c.evidence}")
                if c.remediation:
                    print(f"        Fix: {c.remediation}")

        gaps = [c for c in self.checks
                if c.status == ComplianceStatus.NON_COMPLIANT]
        if gaps:
            print(f"\n--- ACTION ITEMS ({len(gaps)} gaps) ---")
            for i, c in enumerate(gaps, 1):
                print(f"  {i}. [{c.framework}] {c.requirement}")
                print(f"     Remediation: {c.remediation}")


# --- Example: Audit a hiring AI system ---
audit = GovernanceAudit(
    system_name="TalentMatch AI (Resume Screening)",
    risk_level="high"  # Hiring = high-risk under EU AI Act
)

# NIST AI RMF checks
audit.add_check(
    "Risk management policy documented",
    "NIST AI RMF (GOVERN)",
    ComplianceStatus.COMPLIANT,
    "AI Risk Policy v2.1 approved by board on 2024-06-01"
)
audit.add_check(
    "Stakeholder impact assessment completed",
    "NIST AI RMF (MAP)",
    ComplianceStatus.COMPLIANT,
    "SIA completed 2024-05-15, covering applicants and hiring managers"
)
audit.add_check(
    "Bias metrics tracked across demographic groups",
    "NIST AI RMF (MEASURE)",
    ComplianceStatus.PARTIAL,
    "Gender metrics tracked; race and age metrics not yet implemented",
    "Add disaggregated metrics for race, age, and disability status"
)

# EU AI Act checks
audit.add_check(
    "Conformity assessment completed",
    "EU AI Act (High-Risk)",
    ComplianceStatus.NON_COMPLIANT,
    "No conformity assessment performed",
    "Engage notified body for third-party conformity assessment"
)
audit.add_check(
    "Human oversight mechanism in place",
    "EU AI Act (High-Risk)",
    ComplianceStatus.COMPLIANT,
    "All AI recommendations reviewed by human recruiter before action"
)
audit.add_check(
    "Technical documentation maintained",
    "EU AI Act (High-Risk)",
    ComplianceStatus.PARTIAL,
    "Model card exists but lacks training data documentation",
    "Complete data sheet and add to technical documentation package"
)

# ISO 42001 checks
audit.add_check(
    "AI management system established",
    "ISO 42001",
    ComplianceStatus.NON_COMPLIANT,
    "No formal AIMS in place",
    "Initiate ISO 42001 implementation project with target certification date"
)

audit.print_report()
```

Start with NIST AI RMF

If you are just beginning your AI governance journey, start with the NIST AI RMF. It is voluntary, flexible, and well-structured. Its four functions (Govern, Map, Measure, Manage) provide a clear roadmap that maps well to both the EU AI Act requirements and ISO 42001.
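That mapping can be made concrete as a simple crosswalk table. The pairings below are an illustrative summary of how the functions roughly line up, not an official mapping published by NIST, the EU, or ISO.

```python
# Illustrative crosswalk (not an official mapping): how the four NIST AI RMF
# functions roughly line up with EU AI Act obligations and ISO 42001 themes.

CROSSWALK = {
    "GOVERN":  {"eu_ai_act": "Risk management system; oversight roles",
                "iso_42001": "Leadership commitment; AI governance"},
    "MAP":     {"eu_ai_act": "Intended purpose and risk classification",
                "iso_42001": "Context of the organization; risk assessment"},
    "MEASURE": {"eu_ai_act": "Accuracy, robustness, cybersecurity testing",
                "iso_42001": "Performance evaluation"},
    "MANAGE":  {"eu_ai_act": "Post-market monitoring; incident reporting",
                "iso_42001": "Risk treatment; continual improvement"},
}

for function, mapping in CROSSWALK.items():
    print(f"{function:8s} -> EU AI Act: {mapping['eu_ai_act']}")
    print(f"{'':8s}    ISO 42001: {mapping['iso_42001']}")
```

Work done under one framework rarely goes to waste: evidence gathered for a NIST-style risk assessment is usually reusable in an EU AI Act conformity file or an ISO 42001 audit.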