Geneva, Switzerland
- Angel Draev
- Ghazi Mabrouk
Level: Intermediate
Description
This blended training course is designed for policymakers, regulators, civil society leaders, and professionals worldwide seeking to strengthen their skills in AI governance. It offers a comprehensive introduction to core AI concepts, ethics, cybersecurity considerations, and global legal and regulatory frameworks.
Through a combination of preparatory self-paced learning and in-person training, participants will engage in hands-on activities such as design thinking labs, simulations, journey mapping, and prototyping. These activities will enable participants to explore the AI system lifecycle, identify governance gaps, test oversight tools, and co-develop national strategies. Scenario-based exercises will address sector-specific challenges, risk mapping, and cross-border regulatory negotiations. By the end of the course, participants will have collaboratively developed a five-year AI governance roadmap tailored to their institutional or national context and aligned with international best practices.
The course is organized by the International Telecommunication Union (ITU) with financial support from the European Union’s Global Gateway initiative. Participation is free of charge for selected participants and includes accommodation, meals, and organized activities in Geneva, Switzerland. Travel and visa-related expenses, if applicable, remain the responsibility of the participants or their sponsoring organizations.
This course is designed for:
- Policymakers and government officials
- Regulators
- Private sector professionals
- Civil society representatives
The course is limited to 30 participants.
Eligible applicants are invited to apply if they meet the following criteria:
- Completed a BSc or BA (or equivalent) in Social Sciences (e.g., Economics, Public Policy, Political Science) or a related field such as Engineering.
- Demonstrated interest or involvement in AI policy, ethics, or digital transformation.
- Fluent in English.
- Willing to complete the mandatory online pre-training materials to strengthen their understanding of AI governance, ethics, legal frameworks, and cybersecurity. Completion of this phase is required to attend the in-person course.
Government officials and policymakers from developing countries are highly encouraged to apply.
By the end of this course, participants will be able to:
- Build a foundational understanding of AI and its governance
- Identify and evaluate AI risks across the lifecycle
- Apply legal, ethical, and policy frameworks to AI oversight
- Design contextualized governance interventions
- Navigate global AI governance and promote policy coherence
- Translate learning into actionable roadmaps
This course uses a blended approach that combines online foundational learning with hands-on face-to-face sessions. It is designed to equip participants with the tools to govern AI responsibly within their institutional and national contexts. The methodology is based on design thinking, systems mapping, and experiential learning, with a strong focus on user-centered innovation, real-world problem solving, and collaborative policymaking.
1. Self-paced sessions (TBC): Participants are required to complete five self-paced sessions prior to attending the in-person component in Geneva. These sessions provide the conceptual grounding needed for deeper engagement during the face-to-face training.
- Structured Modules: The sessions cover foundational AI concepts, ethical frameworks, global governance models, and emerging cybersecurity and data governance risks, based on a synthesis of five original modules.
- Interactive Learning: Each session includes quizzes, applied exercises, and required readings to assess comprehension and prepare participants for scenario-based learning.
- Completion of quizzes and assignments is mandatory.
2. Face-to-face sessions: Held over five days in Geneva, the in-person training is designed around hands-on, collaborative experiences.
- Scenario Simulations and Role-Play: Participants engage in realistic policy dilemmas involving healthcare AI, algorithmic bias, sustainability trade-offs, data-driven discrimination, and cross-border AI deployment.
- Collaborative Tools and Labs: Teams use empathy maps, journey maps, sandbox boards, regulatory balance sheets, and risk radar templates to develop actionable governance interventions.
- Design Thinking and Prototyping: Sessions guide participants through iterative solution development—framing problems, mapping risks, co-designing safeguards, and testing governance strategies using international frameworks.
3. Peer learning and collaborative feedback: Peer-to-peer exchange is embedded throughout both the online and face-to-face phases to build a global community of AI governance practitioners.
- Gallery Walks, Roundtables, and Feedback Carousels: Teams receive structured feedback on their sandbox designs, risk mitigation strategies, and draft governance frameworks using scorecards, sticky notes, and structured evaluation tools.
- Capstone Project – Roadmap Development: On the final day, participants apply course content to co-develop a five-year AI governance roadmap tailored to their country or institution. These roadmaps are presented for peer and instructor review, enabling refinement and collective learning.
Grading Matrix:
Self-paced modules (pre-training phase)
- Participants must complete the mandatory pre-training phase and achieve a minimum score of 80%.
Face-to-face component
Assessment is based on active participation, collaboration, and final deliverables during the Geneva training week.
- Group work & scenario simulations: 60%
- AI governance roadmap & final reflection: 40%
A total grade of 70% or more is required to receive the ITU Academy certificate.
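For illustration only, assuming the two components are combined as a linear weighted average: a participant scoring 75% on group work and scenario simulations and 65% on the AI governance roadmap and final reflection would receive (75 × 0.6) + (65 × 0.4) = 71%, which meets the 70% threshold for the ITU Academy certificate.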
Self-paced modules
Self-paced module 1: Introduction to AI and why governance matters
Duration: 25-30 minutes
This session introduces AI through everyday interactions across public services. Participants follow a public servant navigating an AI-enabled day, illustrating how AI already shapes real decisions and why governance becomes essential once systems affect citizens’ rights, opportunities, and trust.
Self-paced module 2: Bias, transparency, and accountability in AI systems
Duration: 25-30 minutes
This session follows four individuals whose experiences reveal how bias, opacity, and risk manifest across sectors such as hiring, healthcare, finance, public services, cybersecurity, and environmental sustainability. It demonstrates how flawed data and opaque models translate into real-world harm.
Self-paced module 3: AI and data governance
Duration: 25-30 minutes
This session examines data governance from the perspectives of patients, epidemiologists, policymakers, and private-sector partners. It shows how consent, data quality, protection, and sharing determine whether national AI systems strengthen trust or undermine legitimacy.
Self-paced module 4: AI regulation and global governance approaches
Duration: 25-30 minutes
This session illustrates how a single AI system is regulated differently across the European Union, Singapore, the United States, and the African Union. Participants explore how regulatory models vary across jurisdictions and why global AI governance remains fragmented.
Self-paced module 5: Ethical principles in AI governance
Duration: 25-30 minutes
This session brings an ethical dimension to AI governance by examining a public hospital deploying an AI screening tool. Through the perspectives of patients, doctors, civil society advocates, and hospital administrators, participants explore how ethical principles succeed or fail in real-world AI deployment.
Face-to-face training in Geneva
Day 1 (14 September 2026): Foundations of AI governance
Learning outcomes:
- Reinforce understanding of why AI governance matters
- Reflect on sector-specific governance risks
- Simulate multi-stakeholder perspectives in real-world AI governance scenarios
- Map the AI lifecycle and value chain
- Use design thinking tools
- Engage in peer review and collaborative feedback
- Articulate one concrete governance takeaway or action
Day 2 (15 September 2026): Addressing AI bias, opacity, and risks
Learning outcomes:
- Identify and visualize how different types of bias and opacity emerge
- Reflect on real-world governance challenges
- Analyze security breaches and misuse scenarios (e.g., deepfake scams)
- Develop policy responses for sustainable AI innovation
- Explore trade-offs between innovation, security, and environmental responsibility
- Construct an AI Risk Radar
- Co-design rapid governance interventions
- Deepen systems thinking through design tools
- Commit to one specific action to strengthen AI governance in your professional context
Day 3 (16 September 2026): Cybersecurity, data governance, and innovative AI governance mechanisms
Learning outcomes:
- Identify and classify AI-specific cyber risks
- Assess the exposure of your sector to various AI-driven cybersecurity threats
- Apply data governance and AI safety frameworks to resolve ethical dilemmas
- Trace how bias is introduced at different lifecycle stages
- Develop stakeholder-sensitive mitigation strategies
- Design a sector-specific regulatory sandbox
- Prototype governance frameworks that balance innovation, accountability, and rights-based safeguards
- Co-create and present an “AI Sandbox Playbook”
- Reflect on regulatory innovation through peer feedback
Day 4 (17 September 2026): From principles to practice – building trustworthy AI governance
Learning outcomes:
- Evaluate the ethical performance of AI systems in your sector
- Distinguish between risk-based and rights-based governance approaches
- Apply both the EU AI Act and international rights-based frameworks
- Prototype core components of a national AI governance framework
- Examine and simulate international regulatory divergence
- Collaborate across simulated national and regional contexts (e.g., EU, USA/NIST, Singapore, Brazil, African Union)
- Utilize design thinking tools
- Articulate one actionable insight or commitment to advance trustworthy, rights-respecting AI governance within your agency or national context
Day 5 (18 September 2026): Wrap-up and action planning – from vision to implementation
Learning outcomes:
- Reflect on personal and institutional takeaways
- Apply design thinking to policy development
- Collaborate effectively in multidisciplinary teams
- Communicate and justify AI governance models
- Evaluate and strengthen governance proposals
- Engage in iterative feedback and peer learning
- Commit to action and continued learning
- Celebrate completion and foster community
Financial support available
ITU will cover training content, accommodation for 6 nights, meals, and training activities.