Location: Addis Ababa, Ethiopia
Trainers:
- Miriam Stankovich
- Angel Draev
- Ghazi Mabrouk
Level: Intermediate
Description
This blended training course is open to policymakers, regulators, civil society leaders, and professionals aiming to strengthen their skills in AI governance. The online phase consists of two instructor-led sessions covering key AI concepts, ethics, cybersecurity, and global legal frameworks, followed by quizzes and reading materials.
The face-to-face part of the training will be held in Addis Ababa, Ethiopia, and will focus on practical learning. Through design thinking labs, simulations, journey mapping, and prototyping, teams will explore AI system lifecycles, identify governance gaps, test oversight tools, and co-develop national strategies. Scenario exercises will address sector-specific challenges, risk mapping, and cross-border regulatory negotiations.
By the end of the course, participants will have co-created a five-year AI governance roadmap tailored to their institutional or national context, based on global best practices and international frameworks.
With the generous support of the Global Gateway initiative of the European Union, participation in this training is provided free of charge for selected applicants. This includes accommodation, meals, and other organized activities in Addis Ababa, Ethiopia. Participants or their organizations will be responsible for covering their travel expenses to Addis Ababa and, if necessary, visa application costs.
This course is designed for:
- Policymakers and government officials
- Regulators
- Private sector professionals
- Civil society representatives
The course is limited to 30 participants.
Eligible applicants are invited to apply if they meet the following criteria:
- Completed a BSc or BA (or equivalent) in Social Sciences (e.g., Economics, Public Policy) or a related field such as Engineering or Political Science.
- Demonstrated interest or involvement in AI policy, ethics, or digital transformation.
- Fluent in English.
- Willing to complete the mandatory online pre-training, which includes two live sessions with quizzes and readings to assess understanding of AI governance, ethics, legal frameworks, and cybersecurity. Completion of this phase is required to attend the in-person course.
Government officials and policymakers from developing countries are highly encouraged to apply.
By the end of this course, participants will be able to:
- Build a foundational understanding of AI and its governance
- Identify and evaluate AI risks across the lifecycle
- Apply legal, ethical, and policy frameworks to AI oversight
- Design contextualized governance interventions
- Navigate global AI governance and promote policy coherence
- Translate learning into actionable roadmaps
This course uses a blended approach that combines online foundational learning with hands-on face-to-face sessions. It is designed to equip participants with the tools to govern AI responsibly within their institutional and national contexts. The methodology is based on design thinking, systems mapping, and experiential learning, with a strong focus on user-centered innovation, real-world problem solving, and collaborative policymaking.
- Online Pre-Training Phase (Mandatory for Face-to-Face Participation):
Participants must complete two live, instructor-led online sessions before attending the in-person training. These sessions provide the essential conceptual foundation for deeper engagement during the face-to-face component.
- Face-to-Face Sessions:
The in-person training lasts five days and centers on hands-on, collaborative learning experiences.
- Peer Learning and Collaborative Feedback:
Peer-to-peer exchange is integrated throughout both the online and in-person phases to foster a global community of AI governance practitioners.
Grading Matrix
Online Component (Pre-Training Phase)
- Participants must complete both live instructor-led sessions and achieve a minimum score of 70% on each quiz.
Face-to-Face Component
Assessment is based on active participation, collaboration, and final deliverables during the Addis Ababa training week.
- Group Work & Scenario Simulations: 60%
- AI Governance Roadmap & Final Reflection: 40%
Online phase
Online Session 1: Foundations and Ethical Challenges of AI Governance
Session date and time: 9 September 2025 from 13:00 to 15:00 CEST
This session provides a foundational understanding of artificial intelligence (AI) and introduces key concepts in ethical and responsible AI governance. It explores how AI systems function, where they are deployed, and why governance frameworks are critical to ensure fairness, transparency, and accountability.
Learning outcomes:
- Define AI and distinguish between different types: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and agentic generative AI.
- Explain the AI lifecycle and value chain and identify common AI applications across sectors.
- Understand the need for AI governance and identify the roles of governments, the private sector, and civil society.
- Identify and analyze sources of bias in AI systems and the risks posed by lack of transparency (“black box” models).
- Explain ethical principles such as fairness, accountability, inclusivity, and human oversight in AI governance.
Online Session 2: AI Risk, Regulation, and Global Governance
Session date and time: 16 September 2025 from 13:00 to 15:00 CEST
This session covers the technical, legal, and institutional aspects of AI risk and regulation. It equips participants with an understanding of AI-specific cybersecurity threats, data protection frameworks, and the rapidly evolving landscape of global AI governance.
Learning outcomes:
- Identify key AI-related cybersecurity threats, including adversarial attacks, model inversion, and data poisoning.
- Explain the principles and importance of data governance and protection in AI systems.
- Describe national and regional AI-related laws.
- Compare risk-based and rights-based approaches to AI regulation.
- Understand the challenges of global AI governance and the roles of institutions such as the OECD, UNESCO, and the Council of Europe (CoE).
Face-to-face phase
Day 1 (02 February 2026) – Foundations of AI Governance
Learning outcomes:
- Recall and apply core AI types and governance principles.
- Explain the importance of AI governance through interactive exercises.
- Identify sector-specific AI risks via empathy mapping.
- Simulate multi-stakeholder trade-offs in governance scenarios.
- Map the AI lifecycle to uncover governance gaps.
- Use journey mapping and personas to pinpoint oversight checkpoints.
- Give peer feedback to strengthen governance proposals.
- Articulate one concrete AI governance action.
Day 2 (03 February 2026) – Addressing AI Bias, Opacity, and Risks
Learning outcomes:
- Reinforce AI bias, security, and sustainability concepts.
- Visualize where bias and opacity arise across the AI lifecycle.
- Analyze real-world trust and regulatory challenges.
- Diagnose breaches and misuse scenarios to recommend safeguards.
- Propose sustainable AI policies to reduce environmental impact.
- Pitch green AI policy ideas balancing innovation and responsibility.
- Build a Risk Radar to prioritize AI governance challenges.
- Co-design transparency audits, risk registers, and reporting obligations.
- Apply design tools to develop inclusive governance strategies.
- Commit to one action addressing bias, risk, or sustainability.
Day 3 (04 February 2026) – Cybersecurity, Data Governance, and AI Safety
Learning outcomes:
- Reinforce AI cybersecurity threats and data governance fundamentals.
- Identify and classify AI-specific cyber risks using threat matrices.
- Assess sector exposure to AI-driven security threats.
- Apply data governance and safety frameworks to ethical dilemmas.
- Trace bias introduction through bias journey mapping.
- Develop mitigation strategies including audits and transparency.
- Design a sector-specific regulatory sandbox based on best practices.
- Prototype balanced governance frameworks across the AI lifecycle.
- Create an “AI Sandbox Playbook” adapted to local contexts.
- Reflect on adaptive regulation principles for real-world use.
Day 4 (05 February 2026) – From Principles to Practice: Building Trustworthy AI Governance
Learning outcomes:
- Reinforce ethical principles and their legal alignments.
- Evaluate AI systems against recognized ethical frameworks.
- Compare risk-based and rights-based governance approaches.
- Apply the EU AI Act and rights frameworks to high-risk AI.
- Prototype national AI governance components per international standards.
- Simulate international regulatory divergence and coordination.
- Collaborate to draft a joint declaration on trustworthy AI.
- Use design thinking to frame gaps and prototype policy solutions.
- Commit to applying one insight to advance trustworthy AI in their own context.
Day 5 (06 February 2026) – Wrap-Up and Action Planning: From Vision to Implementation
Learning outcomes:
- Reflect on key takeaways and tools for workplace application.
- Co-create a five-year AI governance roadmap using design thinking.
- Collaborate to develop and refine national policy strategies.
- Present and justify AI governance models persuasively.
- Critically assess and improve peer roadmaps with structured criteria.
- Engage in iterative feedback to strengthen final designs.
- Commit to championing an AI governance change in their organization.
- Celebrate completion and join the global AI governance community.
Financial support available
ITU will cover the training content, accommodation for six nights, meals, and training activities.