- Over 160 AI ethics guidelines have been published globally[1], yet most enterprises still lack the organizational capacity to translate principles into actionable governance mechanisms — the core challenge of AI governance has shifted from "whether it's needed" to "how to implement it"
- The EU AI Act[2] was formally enacted in 2024, establishing the world's first risk-tiered AI regulatory framework; high-risk systems must pass compliance assessments before market deployment, with maximum penalties reaching 7% of global revenue — directly impacting all Taiwanese enterprises operating in the EU
- The NIST AI RMF[3] introduced the Govern-Map-Measure-Manage four-function framework, providing an actionable methodology for enterprise AI risk management that has become the de facto global standard for AI governance
- ISO/IEC 42001[7], the world's first international standard for AI management systems, provides a certification framework for enterprises to build systematic AI governance systems, and is expected to become a prerequisite qualification in multinational enterprise supply chains
1. Why AI Governance Is a Mandatory Course for Listed Enterprises
As AI transitions from a laboratory technology into the core business processes of enterprises, it brings not only opportunities for efficiency gains but also a series of unprecedented governance challenges. For listed enterprises, AI governance has evolved from an optional forward-looking initiative to a mandatory agenda item at the board level. The systematic analysis of global AI ethics guidelines by Jobin et al.[1] reveals a crucial fact: despite the publication of over 160 AI ethics guidelines spanning government agencies, international organizations, and corporations, significant divergences remain among frameworks on core principles such as transparency, fairness, and safety. This means enterprises cannot simply "follow one standard" but must build their own governance capabilities to navigate this complex regulatory landscape.
From the perspective of Taiwanese listed enterprises, the urgency of AI governance stems from pressure on three dimensions. First, regulatory pressure. Taiwan's Financial Supervisory Commission (FSC) has continuously strengthened corporate governance codes since 2020, explicitly requiring listed companies to incorporate ESG and risk management into the board of directors' responsibilities. The bias risks, data privacy risks, and operational decision risks posed by AI systems have become unavoidable disclosure items in ESG reports. The extraterritorial application of the EU AI Act[2] further compels Taiwanese enterprises operating in the EU market to directly confront global AI regulatory compliance requirements. Second, market pressure. International brand clients and supply chain partners increasingly demand transparency reports on AI usage from their suppliers. Enterprises lacking an AI governance framework may be downgraded or even excluded in supply chain audits. Third, trust pressure. The impact of brand crises, litigation risks, and stock price volatility caused by AI decision-making failures is far greater for listed enterprises than for non-listed ones.
1.1 The Paradigm Shift from AI Ethics to AI Governance
Floridi et al.[5] proposed five core principles in the AI4People ethical framework — Beneficence, Non-maleficence, Autonomy, Justice, and Explicability. This framework provides a philosophical foundation for AI ethics, but enterprises need more than principles — they need to translate principles into actionable, measurable, and auditable organizational mechanisms. Mantymaki et al.[4] defined "organizational AI governance" as a system encompassing norms, processes, roles, and tools designed to ensure that AI system development and deployment remain aligned with organizational objectives and regulatory requirements. This marks a paradigm shift from abstract ethical discussion to concrete governance practice.
1.2 The Cost of AI Governance Failure
For listed enterprises, the absence of AI governance can trigger a cascade of adverse effects. Discriminatory decisions resulting from model bias may invite regulatory investigations and significant fines; privacy breaches caused by poor data governance may trigger compensation liabilities under personal data protection laws; the opacity of AI systems may render enterprises unable to explain their decision-making basis to regulators, thereby affecting business licenses. More fundamentally, enterprises lacking AI governance often fall into a predicament where "AI projects proliferate everywhere, but no one is responsible for risk" — various departments independently adopt AI tools without unified risk assessment standards, model lifecycle management processes, or incident response mechanisms. Raji et al.[6] explicitly noted in their internal algorithmic auditing framework that the biggest gap in enterprise AI governance lies not in technology, but in the "Accountability Gap" — when an AI system fails, there is no clear assignment of responsibility or remediation process.
2. The Three-Layer Architecture of AI Governance Frameworks: Strategy, Process, Technology
Building an effective enterprise AI governance system requires a systematic framework that balances top-level design with ground-level execution. Based on the guiding principles of the NIST AI RMF[3] and the research by Mantymaki et al.[4] on organizational AI governance, we divide the enterprise AI governance framework into three mutually reinforcing layers: Strategy, Process, and Technology.
2.1 Strategy Layer: The Top-Level Design of AI Governance
The strategy layer defines the direction and boundaries of enterprise AI governance. Its core elements include: an AI Governance Policy Statement, articulating the enterprise's stance, principles, and red lines regarding AI usage; an AI Risk Appetite Statement, delineating the types and degrees of AI risk the enterprise is willing to accept; and an AI Governance Organizational Structure, establishing the roles and responsibilities of the board, management, and execution levels in AI governance. The key at the strategy layer is to elevate AI governance from a technical matter for the IT department to a core corporate governance issue, securing board-level commitment and resource allocation.
2.2 Process Layer: Governance Checkpoints Across the AI Lifecycle
The process layer embeds governance principles into the complete AI system lifecycle — from requirements assessment, data collection, model development, testing and validation, deployment to ongoing monitoring and decommissioning. Each stage should have clearly defined governance checkpoints. For example, during the model development phase, an AI Impact Assessment should be conducted to identify potential bias and fairness risks; before deployment, independent Model Validation should confirm that model performance and safety meet predetermined standards. Raji et al.[6] specifically emphasized in their end-to-end internal auditing framework that governance processes must have "friction" — striking a balance between AI development speed and governance rigor to prevent governance from becoming a mere formality.
2.3 Technology Layer: The Digital Infrastructure for Governance
The technology layer provides the tools and platform support required by governance processes. This includes: a Model Registry, recording metadata, training data, performance metrics, and deployment status of all AI models; automated fairness testing tools, which automatically detect bias during model development and updates; a model monitoring dashboard, providing real-time tracking of model drift, data distribution changes, and anomalous prediction behavior for deployed models; and an Audit Trail system, which maintains a complete record of every input, inference process, and output of AI system decisions to ensure traceability. Shneiderman[8] emphasized that reliable, safe, and trustworthy AI systems require human oversight mechanisms to be embedded at the technical design level, rather than retrofitted after the fact.
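The Model Registry described above can be sketched as a minimal in-memory data structure. All class and field names here are hypothetical illustrations of the kind of metadata a registry tracks, not the API of any existing tool, and a production registry would persist records and integrate with deployment pipelines.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    owner: str                    # accountable person or team
    risk_tier: str                # "high" / "medium" / "low"
    training_data_ref: str        # pointer to a versioned dataset, not the data itself
    metrics: dict = field(default_factory=dict)
    status: str = "development"   # development -> validated -> deployed -> retired

class ModelRegistry:
    """Single source of truth for every AI model in the organization."""

    def __init__(self) -> None:
        self._records: dict = {}

    def register(self, record: ModelRecord) -> None:
        if record.model_id in self._records:
            raise ValueError(f"model {record.model_id} is already registered")
        self._records[record.model_id] = record

    def models_by_tier(self, tier: str) -> list:
        # e.g. list every high-risk model due for periodic revalidation
        return [r for r in self._records.values() if r.risk_tier == tier]
```

Even this small structure makes the audit question "what AI systems do we operate, and who owns them?" answerable by query rather than by survey.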
3. Board and Senior Management AI Oversight Responsibilities
The success or failure of AI governance ultimately depends on the commitment and participation of the organization's highest levels. For listed enterprises, the board's AI oversight responsibility has been upgraded from "worth paying attention to" to "non-delegable" — this is not merely a requirement of governance best practices but a regulatory compliance necessity. Taiwan's FSC-issued corporate governance best practices require the board to oversee significant corporate risks, and as a critical technology influencing enterprise operational decisions, AI systems should be included within the board's risk oversight scope.
3.1 The Board's AI Governance Responsibilities
The board should assume three core responsibilities in AI governance. First, approving AI strategy direction. The board should deliberate and approve the enterprise's AI development strategy, ensuring alignment with the overall business strategy and evaluating the risk-return profile of AI investments. Second, overseeing AI risks. The board should regularly receive management reports on the AI risk posture, including model risk incidents, compliance status, ethical disputes, and data governance metrics. Third, ensuring AI governance resources. The board should ensure the enterprise allocates sufficient human, technical, and financial resources to support the operation of the AI governance system. Shneiderman[8] emphasized in his human-centered AI framework that effective AI governance requires multi-level organizational oversight — from team-level quality checks to industry-level certification standards, every layer is indispensable.
3.2 Establishing an AI Governance Committee
Below the board, enterprises should establish a dedicated AI Governance Committee as a standing executive body for AI governance. The committee composition should have cross-functional representation, including at minimum the following roles: the CTO or CIO (serving as chair), the General Counsel (responsible for regulatory compliance perspective), the Chief Risk Officer (responsible for risk management perspective), the Chief Data Officer (responsible for data governance perspective), and business unit representatives (ensuring governance mechanisms do not become disconnected from business reality). The AI Governance Committee's core functions include: formulating AI usage policies, reviewing deployment requests for high-risk AI projects, handling AI ethics disputes, coordinating cross-departmental governance standards, and regularly submitting AI governance reports to the board.
3.3 AI Literacy Requirements for Management
Effective AI oversight requires board members and senior management to possess basic AI literacy — not to become machine learning experts, but to understand the capability boundaries, risk characteristics, and governance requirements of AI systems. Specifically, management should be able to answer the following questions: What AI systems does the enterprise currently have deployed? What business decisions do these systems influence? Are the training data sources reliable? Has the model undergone fairness and bias testing? What is the contingency plan if the model fails? Enterprises should arrange regular AI literacy training for board members and, when necessary, engage external AI consultants to provide independent opinions.
4. Model Risk Management (MRM): From Development to Decommissioning
Model Risk Management (MRM) is the most technically intensive component of the AI governance framework. Model risk refers to financial losses, compliance violations, or reputational damage caused by model errors, improper use, or failure. The Govern-Map-Measure-Manage four-function framework of the NIST AI RMF[3] provides a systematic methodology for enterprises to build model risk management systems.
4.1 Model Tiering and Risk Assessment
Not all AI models require the same level of governance intensity. Enterprises should establish a model tiering system, classifying models as high, medium, or low risk based on decision-making impact, data sensitivity, and substitutability. High-risk models include credit approval models, customer churn prediction models (where predictions drive differentiated pricing), and recruitment screening models — their decisions directly affect individual rights, and any bias or error could trigger legal action and regulatory penalties. Medium-risk models, such as demand forecasting and inventory optimization models, primarily cause operational efficiency losses when they err but do not directly affect individual rights. Low-risk models, such as automated internal report generation and email classification, have a limited impact scope and manageable risk. Models at different risk levels should be subject to differentiated governance requirements: high-risk models must undergo independent validation, periodic revalidation, and continuous monitoring, while low-risk models may follow simplified processes.
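One way to make the three tiering dimensions operational is a simple scoring rule that the AI Governance Committee applies consistently at intake. The dimensions follow the text above, but the 1-to-3 scale and the thresholds below are illustrative assumptions each enterprise would calibrate for itself, not a standard.

```python
def assign_risk_tier(decision_impact: int, data_sensitivity: int,
                     substitutability: int) -> str:
    """Each dimension is scored 1 (low) to 3 (high); a substitutability of 3
    means the model's decision is hard for a human to override or replace.
    Thresholds are illustrative, not prescribed by any framework."""
    for score in (decision_impact, data_sensitivity, substitutability):
        if score not in (1, 2, 3):
            raise ValueError("scores must be 1, 2, or 3")
    total = decision_impact + data_sensitivity + substitutability
    if decision_impact == 3 or total >= 7:
        return "high"     # e.g. credit approval: direct impact on individual rights
    if total >= 5:
        return "medium"   # e.g. demand forecasting
    return "low"          # e.g. internal email classification
```

Note the override: maximal decision impact alone forces the high tier regardless of the other scores, reflecting the principle that effects on individual rights dominate the assessment.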
4.2 Model Development Governance
The governance focus during the model development phase is to ensure the design and training process conforms to governance standards. Key control points include: Problem definition review — confirming that AI is the appropriate solution for the problem and that the model's expected output aligns with business needs and regulatory requirements; Data quality inspection — verifying training data representativeness, completeness, and labeling quality, and detecting potential historical biases; Model selection justification — evaluating the appropriateness of the chosen model architecture and confirming whether simpler, equally effective alternatives exist (principle of simplicity preference); Fairness testing — conducting multi-dimensional fairness evaluations of the model to ensure it does not produce systematic discrimination against protected groups.
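As one concrete instance of the fairness testing mentioned above, the demographic parity gap compares favourable-outcome rates across the groups of a protected attribute. This is a minimal sketch of a single metric — a real fairness evaluation would also consider equalized odds, calibration, and intersectional groups — and the release threshold in the usage note is an assumption, not a regulatory value.

```python
def demographic_parity_gap(outcomes, groups):
    """outcomes: 1 = favourable decision (e.g. loan approved), 0 = unfavourable.
    groups: the protected-attribute value for each record, aligned with outcomes.
    Returns the largest difference in favourable rates between any two groups."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())
```

An illustrative pre-deployment gate would flag any model whose gap exceeds, say, 0.1 for independent review before it may proceed to validation.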
4.3 Model Deployment, Monitoring, and Decommissioning
Post-deployment governance is equally critical. Enterprises should establish continuous monitoring mechanisms to track model performance in production environments and promptly detect model drift. When input data distributions change significantly (Data Drift) or model prediction accuracy falls below preset thresholds, retraining or model update processes should be triggered. Additionally, enterprises should define clear decommissioning conditions and procedures for each model — when a model no longer meets performance standards, regulatory requirements change, or a superior solution is available, the model should be decommissioned in an orderly manner, including notifying relevant stakeholders, migrating downstream systems dependent on the model, and preserving complete model documentation for future auditing.
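Data drift can be quantified in several ways; one widely used heuristic is the Population Stability Index (PSI), which compares a production score or feature distribution against the training-time baseline. The implementation below is a minimal sketch, and the thresholds in the docstring are a common industry rule of thumb rather than a regulatory requirement.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (expected) and a production (actual) sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate/retrain.
    Production values outside the baseline range fall into no bin
    (a conservative simplification for this sketch)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def bin_fraction(data, left, right, is_last):
        n = sum(1 for x in data if left <= x < right or (is_last and x == right))
        return max(n / len(data), 1e-6)   # floor avoids log(0) for empty bins

    psi = 0.0
    for i in range(bins):
        e = bin_fraction(expected, edges[i], edges[i + 1], i == bins - 1)
        a = bin_fraction(actual, edges[i], edges[i + 1], i == bins - 1)
        psi += (a - e) * math.log(a / e)
    return psi
```

Wired into a monitoring dashboard, a PSI breach on a high-risk model would automatically open an incident and trigger the retraining process described above.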
5. Data Governance: The Foundation of AI
Data governance is the foundation of AI governance — without high-quality data governance, all AI governance mechanisms will be built on an unstable foundation. AI system performance is directly determined by training data quality, and failures in data governance not only affect model performance but can also trigger serious compliance risks. In an environment of increasingly strict personal data protection laws, data governance has become a prerequisite for listed enterprise AI compliance.
5.1 An AI-Oriented Data Governance Framework
Traditional enterprise data governance focuses on data accuracy, consistency, and security, but AI-oriented data governance requires additional attention to several dimensions. Data representativeness — does the training data adequately reflect the diversity of the target population? If training data contains systematic sampling bias (for example, a significant underrepresentation of certain demographic groups), the model will inevitably replicate and even amplify that bias. Data provenance — is there a complete record of each training data point's source, collection method, and authorization status? Under the requirements of the EU AI Act[2], data provenance for high-risk AI systems is a key focus of compliance reviews. Data labeling quality — supervised learning model performance is highly dependent on labeling quality; enterprises should establish labeling guidelines, multi-annotator cross-validation, and labeling quality spot-check mechanisms.
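The multi-annotator cross-validation mentioned above can be quantified with a standard agreement statistic; Cohen's kappa for two annotators measures agreement beyond what chance would produce (1.0 is perfect agreement, 0 is chance level). A minimal sketch:

```python
def cohens_kappa(labels_a, labels_b):
    """Inter-annotator agreement between two annotators on the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    # chance agreement: probability both annotators pick the same category at random
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    if expected == 1.0:   # degenerate case: only one category ever used
        return 1.0
    return (observed - expected) / (1 - expected)
```

A labeling quality spot-check might, for example, require kappa above an agreed floor on a double-annotated sample before a labeled batch is accepted into the training set; the floor itself is a policy choice, not a fixed standard.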
5.2 Data Classification and Access Control
The training and inference processes of AI models involve extensive data access and processing. Enterprises need to balance "data openness to promote AI innovation" against "data control to ensure compliance." We recommend enterprises establish a four-tier data classification system: Public data (no access restrictions), Internal data (restricted to internal enterprise use), Confidential data (restricted to authorized personnel), Highly confidential data (such as Personally Identifiable Information (PII), medical records, and financial transaction data, requiring encrypted storage and full access logging). AI projects should complete a data classification review before using data, confirming that the data usage purpose and authorization scope comply with personal data protection laws and internal enterprise policies.
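The four-tier scheme can be enforced programmatically as part of the data classification review, before any dataset is released to an AI project. The tier names follow the text; the clearance rule and function names are an illustrative assumption of how such a gate might work, not a reference to any particular access-control product.

```python
from enum import IntEnum

class DataTier(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    HIGHLY_CONFIDENTIAL = 3   # PII, medical records, financial transactions

def may_use_for_training(data_tier: DataTier, project_clearance: DataTier,
                         has_documented_legal_basis: bool = False) -> bool:
    """A project may only consume data at or below its clearance level.
    Highly confidential data additionally requires a documented legal basis
    (e.g. consent or statutory authorization under personal data law)."""
    if data_tier > project_clearance:
        return False
    if data_tier == DataTier.HIGHLY_CONFIDENTIAL and not has_documented_legal_basis:
        return False
    return True
```

Encoding the tiers as an ordered enum keeps the comparison logic trivial and makes adding a new tier a one-line change.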
5.3 Synthetic Data and Privacy-Enhancing Technologies
When original data cannot be directly used for AI training due to privacy restrictions, synthetic data and Privacy-Enhancing Technologies (PETs) offer viable alternatives. Synthetic data uses generative models to produce datasets with similar statistical properties to the original data but without real personal information, suitable for model training and testing. Differential Privacy ensures that individual data points' privacy is not compromised through model training by injecting calibrated noise into query results. Federated Learning allows cross-institutional collaborative AI model training without centralizing raw data. These technologies provide enterprises with compromise solutions between privacy compliance and AI innovation.
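The differential privacy idea above can be illustrated with the textbook Laplace mechanism applied to a count query: adding or removing one person's record changes a count by at most 1 (sensitivity 1), so Laplace noise with scale 1/ε yields ε-differential privacy. This is a teaching sketch under those stated assumptions, not a production implementation, which would also need floating-point hardening and privacy-budget accounting.

```python
import math
import random

def laplace_noise(scale: float, rng=random) -> float:
    # inverse-CDF sampling of a zero-mean Laplace distribution
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(records, predicate, epsilon: float, rng=random) -> float:
    """Epsilon-differentially-private count of records matching `predicate`.
    A count query has sensitivity 1, so scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

Smaller ε means stronger privacy but noisier answers; choosing ε is a governance decision about the privacy-utility trade-off, not a purely technical one.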
6. EU AI Act Compliance Practice Guide
The EU AI Act[2] was formally enacted in 2024 and is the world's first comprehensive legislation with legal force governing AI development and deployment. For Taiwanese enterprises operating in the EU market, understanding and complying with the EU AI Act requirements is a compliance task that can no longer be deferred.
6.1 Risk Tiering System and Compliance Requirements
The EU AI Act employs a four-tier risk classification framework. Unacceptable Risk — prohibited AI applications, including social credit scoring systems, AI systems that exploit subliminal manipulation, and real-time biometric identification in public spaces (with limited exceptions). High Risk — AI systems subject to strict compliance requirements, covering critical infrastructure, education and vocational training, employment and human resource management, public services, law enforcement, immigration management, and the judiciary. Compliance requirements for high-risk systems include: establishing risk management systems, ensuring data quality, maintaining technical documentation, providing transparency information, ensuring human oversight mechanisms, and meeting accuracy and robustness standards. Limited Risk — only transparency obligations apply, such as chatbots being required to inform users they are interacting with AI. Minimal Risk — free to use with no additional compliance requirements.
6.2 Response Strategies for Taiwanese Enterprises
Taiwanese enterprises should structure their EU AI Act response strategy in three phases. Short-term (0-6 months): Complete an AI system inventory, identifying all AI systems that directly or indirectly serve the EU market, and classify them according to the EU AI Act risk tiers. Medium-term (6-18 months): For AI systems classified as high-risk, initiate compliance gap analysis, and establish the necessary technical documentation, risk management processes, and quality management systems. Long-term (18+ months): Integrate EU AI Act compliance requirements into the enterprise's AI governance framework, building organizational capability for continuous compliance. It is worth noting that the EU AI Act follows a phased implementation timeline: prohibitory provisions have been applicable since February 2025, general-purpose AI model (GPAI) obligations since August 2025, and the complete requirements for high-risk systems from August 2026. Enterprises should plan their compliance roadmap accordingly.
6.3 Special Obligations for General-Purpose AI Models
The EU AI Act has a dedicated chapter governing General-Purpose AI Models (GPAI). All GPAI providers must comply with basic transparency obligations, including maintaining technical documentation, providing usage policies, and complying with copyright laws. GPAI models with "systemic risk" (determined by training compute thresholds) must additionally conduct model evaluations and adversarial testing, establish incident reporting mechanisms, and implement cybersecurity protection measures. For Taiwanese enterprises using third-party GPAI models (such as GPT, Claude), while the model provider bears the primary GPAI obligations, enterprises integrating GPAI into high-risk applications must still ensure the downstream system as a whole complies with high-risk AI system requirements.
7. ISO/IEC 42001 AI Management System Implementation
ISO/IEC 42001[7] was officially published in December 2023 and is the world's first international standard for AI Management Systems (AIMS). This standard provides a certifiable framework for enterprises to build systematic AI governance systems, with significance comparable to ISO 27001 for information security management.
7.1 Architecture and Core Requirements of ISO/IEC 42001
ISO/IEC 42001 adopts the ISO High-Level Structure (HLS), sharing a consistent framework structure with management system standards such as ISO 9001 and ISO 27001, facilitating multi-system integration for enterprises. Its core requirements cover: Organizational context — understanding the needs and expectations of internal and external stakeholders of the AI system; Leadership — ensuring top management commitment and resource allocation for the AI management system; Planning — identifying AI-related risks and opportunities, and establishing AI management objectives; Support — allocating necessary human, technical, and infrastructure resources; Operations — executing AI system lifecycle management processes; Performance evaluation — continuously evaluating governance effectiveness through internal audits and management reviews; Continual improvement — driving governance system optimization based on performance evaluation results.
7.2 Implementation Roadmap
Enterprise implementation of ISO/IEC 42001 typically requires 12-18 months and can be divided into four phases. Phase 1: Gap Analysis (1-2 months) — Assess the maturity of the enterprise's existing AI governance system against ISO/IEC 42001 requirements, identifying areas that need strengthening. Phase 2: System Establishment (3-6 months) — Develop AI policies, establish management processes, design the necessary documentation architecture, and complete stakeholder analysis and risk assessment. Phase 3: Implementation and Internalization (4-6 months) — Apply newly established management processes to actual AI projects, conduct personnel training, execute internal audits, and correct identified deficiencies. Phase 4: Certification Audit (2-3 months) — Engage a third-party certification body to conduct preliminary and formal audits, and upon passing, obtain ISO/IEC 42001 certification.
7.3 Strategic Value of Certification
For Taiwanese listed enterprises, the value of ISO/IEC 42001 certification extends beyond compliance to building market trust. In multinational supply chains, enterprises with AI management system certification will gain significant advantages in supplier selection. Furthermore, ISO/IEC 42001 is highly aligned with the EU AI Act's compliance requirements — certified enterprises will already have most of the necessary management processes and documentation when facing EU AI Act compliance reviews, substantially reducing compliance costs. For enterprises planning IPOs or international mergers and acquisitions, AI management system certification serves as compelling evidence of governance maturity.
8. Practical AI Governance Recommendations for Taiwanese Listed Enterprises
Taiwanese listed enterprises face unique challenges and opportunities in AI governance implementation that differ from their European and American counterparts. The FSC's ongoing corporate governance reforms, Taiwan's distinctive industrial structure (a supply chain ecosystem dominated by SMEs), and the gradually emerging local AI regulatory environment all provide unique context for AI governance implementation.
8.1 Aligning Governance Architecture with FSC Corporate Governance Codes
The FSC's Corporate Governance Best Practice Principles require boards to oversee significant corporate risks and encourage listed enterprises to establish risk management committees. Enterprises can embed AI governance mechanisms within existing corporate governance structures rather than building from scratch. Specific approaches include: incorporating AI risks into the existing risk management committee's agenda; adding an AI governance section to sustainability reports, disclosing AI usage policies, risk management measures, and governance performance metrics; and including AI project audit procedures in internal audit plans. This "embedded" implementation approach both reduces resistance to organizational change and ensures AI governance remains consistent with the overall corporate governance framework.
8.2 Phased Implementation Blueprint
Considering the resource constraints and varying organizational maturity levels of Taiwanese listed enterprises, we recommend a three-phase implementation strategy. Phase 1: Foundation Building (Quarters 1-2) — Complete AI system inventory, establish a Model Inventory, designate an AI governance officer, and formulate AI usage policies. The goal of this phase is "knowing what we have." Phase 2: Process Establishment (Quarters 3-4) — Establish model risk assessment processes, implement data classification systems, introduce model lifecycle management processes, and initiate independent validation of high-risk models. The goal of this phase is "managing well what we are doing." Phase 3: Maturity Optimization (Quarters 5-8) — Deploy automated monitoring tools, establish governance performance metrics (KPI/KRI), pursue ISO/IEC 42001 certification, and integrate AI governance into ESG reporting. The goal of this phase is "continuously improving how well we do it."
8.3 Common Implementation Obstacles and Countermeasures
Taiwanese enterprises frequently encounter the following obstacles in AI governance implementation. Obstacle 1: Insufficient senior leadership awareness. The board and management have not yet recognized the urgency of AI governance. Countermeasure: Use international regulatory trends (EU AI Act penalties, supply chain compliance requirements) and peer case studies as entry points for senior leadership awareness training. Obstacle 2: Governance talent shortage. The Taiwanese market lacks professionals who possess both AI technical knowledge and governance practical experience. Countermeasure: Develop internal cross-functional talent (having legal staff learn AI fundamentals, having data scientists learn compliance frameworks), and moderately engage external consultants to assist with system establishment. Obstacle 3: Governance viewed as a cost center. Business units worry that governance processes will slow down AI project progress. Countermeasure: Demonstrate governance investment ROI through concrete cases — the cost of avoiding a single model bias lawsuit far exceeds the investment in building a governance system. Obstacle 4: Cross-departmental coordination difficulties. AI governance inherently involves IT, legal, risk management, business, and other departments, making coordination costly. Countermeasure: Establish an AI Governance Office reporting directly to the CEO, granting it cross-departmental coordination authority.
9. Conclusion: From Compliance to Competitive Advantage
AI governance stands at a turning point from "optional" to "mandatory." The enactment of the EU AI Act, the rollout of NIST AI RMF, and the publication of ISO/IEC 42001 mark the institutionalization of global AI governance from self-regulation to external regulation. For Taiwanese listed enterprises, this is both compliance pressure and strategic opportunity.
9.1 The Mindset Shift from Defense to Offense
Most enterprises view AI governance as a defensive compliance cost — an unavoidable investment to avoid fines, lawsuits, and brand crises. But leading enterprises have already begun transforming AI governance into a source of competitive advantage. Robust AI governance translates to higher model quality and reliability, faster regulatory approval timelines, stronger customer trust, and deeper supply chain partnerships. Research by Jobin et al.[1] indicates that enterprises that internalize ethical principles into organizational culture demonstrate greater resilience in long-term technology adoption and market expansion.
9.2 Future Trends in AI Governance
Looking ahead, AI governance will exhibit three major trends. First, the parallel progression of regulatory globalization and fragmentation. More countries will introduce localized AI regulations, and enterprises will need the capability to navigate diverse regulatory environments. Second, the intelligentization of governance tools. AI will be used to govern AI — automated compliance detection, real-time bias monitoring, and intelligent auditing systems will gradually replace primarily manual governance processes. Third, the convergence of governance and innovation. The most successful enterprises will not view governance and innovation as opposing forces, but will embed governance mechanisms into the AI development process, making them an integral part of quality assurance rather than an external constraint. The vision proposed by Floridi et al.[5] in the AI4People framework — a society where AI promotes human flourishing — requires not only technological breakthroughs but also institutional innovation. Enterprise AI governance is precisely the micro-level practice of this institutional innovation.
9.3 Call to Action: Start Now
AI governance implementation need not be achieved all at once, but it must start now. We recommend that decision-makers at Taiwanese listed enterprises take the following three immediate actions. First, initiate an AI system inventory. Understand what AI systems the enterprise currently operates, who manages them, and what business decisions they serve — this is the starting point for all governance work. Second, designate an AI governance officer. Whether through a new position or by having an existing senior executive take on the role, the enterprise needs a clearly designated AI governance officer to drive the establishment and operation of the governance system. Third, launch board-level AI literacy training. Arrange an AI governance seminar for the board and senior management to ensure the organization's highest levels understand the implications, urgency, and investment value of AI governance. In an era where AI is reshaping the competitive landscape of industries, enterprises that take the lead in establishing responsible AI governance systems will gain first-mover advantages across three dimensions: regulatory compliance, market trust, and long-term value creation.