Registration
Coming soon

Event dates
15–19 December 2025

Location
Global or multi-regional, Bangkok, Thailand
Training topics
Artificial intelligence
Training type
Face to Face
Languages
English
Tutors
  • Miriam Stankovich
Coordinators
  • Ghazi Mabrouk
  • Angel Draev
Course level
Intermediate
Duration
40 hours
Event email contact
ituacademy@itu.int
Funded
* See financial support section for details

Description

This face-to-face training course is designed for policymakers, regulators, civil society leaders, and professionals seeking to enhance their skills in AI governance. It provides a comprehensive introduction to AI concepts, ethics, cybersecurity, and global legal frameworks.

Participants will engage in hands-on activities such as design thinking labs, simulations, journey mapping, and prototyping to explore the AI system lifecycle, identify governance gaps, test oversight tools, and co-develop national strategies. Scenario-based exercises will address sector-specific challenges, risk mapping, and cross-border regulatory negotiations. By the end of the course, participants will have collaboratively developed a five-year AI governance roadmap tailored to their institutional or national context, aligned with international best practices.

The course is co-organized by the International Telecommunication Union (ITU) and the National Broadcasting and Telecommunications Commission (NBTC) of Thailand, with financial support from the European Union’s Global Gateway initiative. Participation is free of charge for selected applicants and includes accommodation, meals, and organized activities in Bangkok. Travel and visa-related expenses, if applicable, are the responsibility of participants or their sponsoring organizations.

This course is designed for:

  • Policymakers and government officials
  • Regulators
  • Private sector professionals
  • Civil society representatives

The course is limited to 30 participants.

Eligible applicants are invited to apply if they meet the following criteria:

  • Completed a BSc or BA (or equivalent) in Social Sciences (e.g., Economics, Public Policy) or a related field such as Engineering or Political Science.
  • Demonstrated interest or involvement in AI policy, ethics, or digital transformation.
  • Fluent in English.
  • Willing to complete the mandatory online pre-training materials to strengthen their understanding of AI governance, ethics, legal frameworks, and cybersecurity. Completion of this phase is required to attend the in-person course.

Government officials and policymakers from developing countries are highly encouraged to apply.

By the end of this course, participants will be able to:

  • Build a foundational understanding of AI and its governance
  • Identify and evaluate AI risks across the lifecycle
  • Apply legal, ethical, and policy frameworks to AI oversight
  • Design contextualized governance interventions
  • Navigate global AI governance and promote policy coherence
  • Translate learning into actionable roadmaps

This course uses a blended approach that combines online foundational learning with hands-on face-to-face sessions. It is designed to equip participants with the tools to govern AI responsibly within their institutional and national contexts. The methodology is based on design thinking, systems mapping, and experiential learning, with a strong focus on user-centred innovation, real-world problem solving, and collaborative policymaking.

Grading Matrix

Online Component (Pre-Training Phase)

  • Participants must complete the mandatory pre-training phase and achieve a minimum score of 70% on each quiz.

Face-to-Face Component

Assessment is based on active participation, collaboration, and final deliverables during the Bangkok training week.

  • Group Work & Scenario Simulations: 60%
  • AI Governance Roadmap & Final Reflection: 40%

A total grade of 70% or more is required to receive the ITU Academy certificate. 
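The weighting above amounts to a simple calculation. The sketch below is illustrative only (function names are not part of the course materials) and assumes both components are scored out of 100:

```python
# Minimal sketch of the weighted grade described above
# (assumption: both components are scored out of 100).

def final_grade(group_work: float, roadmap: float) -> float:
    """Combine the two assessed components using the stated 60/40 weights."""
    return 0.6 * group_work + 0.4 * roadmap

def earns_certificate(group_work: float, roadmap: float) -> bool:
    """The ITU Academy certificate requires a total grade of 70% or more."""
    return final_grade(group_work, roadmap) >= 70.0

# Example: 75% on group work and 65% on the roadmap gives 71% overall.
print(final_grade(75, 65))        # 71.0
print(earns_certificate(75, 65))  # True
```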

Day 1 (15 December 2025) – Foundations of AI Governance
Learning outcomes:

  • Recall and apply core AI types and governance principles.
  • Explain the importance of AI governance through interactive exercises.
  • Identify sector-specific AI risks via empathy mapping.
  • Simulate multi-stakeholder trade-offs in governance scenarios.
  • Map the AI lifecycle to uncover governance gaps.
  • Use journey mapping and personas to pinpoint oversight checkpoints.
  • Give peer feedback to strengthen governance proposals.
  • Articulate one concrete AI governance action.

Day 2 (16 December 2025) – Addressing AI Bias, Opacity, and Risks
Learning outcomes:

  • Reinforce AI bias, security, and sustainability concepts.
  • Visualize where bias and opacity arise across the AI lifecycle.
  • Analyze real-world trust and regulatory challenges.
  • Diagnose breaches and misuse scenarios to recommend safeguards.
  • Propose sustainable AI policies to reduce environmental impact.
  • Pitch green AI policy ideas balancing innovation and responsibility.
  • Build a Risk Radar to prioritize AI governance challenges.
  • Co-design transparency audits, risk registers, and reporting obligations.
  • Apply design tools to develop inclusive governance strategies.
  • Commit to one action addressing bias, risk, or sustainability.

Day 3 (17 December 2025) – Cybersecurity, Data Governance, and AI Safety
Learning outcomes:

  • Reinforce AI cybersecurity threats and data governance fundamentals.
  • Identify and classify AI-specific cyber risks using threat matrices.
  • Assess sector exposure to AI-driven security threats.
  • Apply data governance and safety frameworks to ethical dilemmas.
  • Trace bias introduction through bias journey mapping.
  • Develop mitigation strategies including audits and transparency.
  • Design a sector-specific regulatory sandbox based on best practices.
  • Prototype balanced governance frameworks across the AI lifecycle.
  • Create an “AI Sandbox Playbook” adapted to local contexts.
  • Reflect on adaptive regulation principles for real-world use.

Day 4 (18 December 2025) – From Principles to Practice: Building Trustworthy AI Governance
Learning outcomes:

  • Reinforce ethical principles and their legal alignments.
  • Evaluate AI systems against recognized ethical frameworks.
  • Compare risk-based and rights-based governance approaches.
  • Apply the EU AI Act and rights frameworks to high-risk AI.
  • Prototype national AI governance components per international standards.
  • Simulate international regulatory divergence and coordination.
  • Collaborate to draft a joint declaration on trustworthy AI.
  • Use design thinking to frame gaps and prototype policy solutions.
  • Commit to one insight for advancing trustworthy AI in their context.

Day 5 (19 December 2025) – Wrap-Up and Action Planning: From Vision to Implementation
Learning outcomes:

  • Reflect on key takeaways and tools for workplace application.
  • Co-create a 5-year AI governance roadmap using design thinking.
  • Collaborate to develop and refine national policy strategies.
  • Present and justify AI governance models persuasively.
  • Critically assess and improve peer roadmaps with structured criteria.
  • Engage in iterative feedback to strengthen final designs.
  • Commit to championing an AI governance change in their organization.
  • Celebrate completion and join the global AI governance community.


Financial support available

ITU and the NBTC will cover training content, accommodation for six nights, meals, and training activities.

Registration information

Unless specified otherwise, all ITU Academy training courses are open to all interested professionals, irrespective of race, ethnicity, age, gender, religion, economic status, or other background. We strongly encourage registrations from female participants and from participants in developing countries, including least developed countries, small island developing states, and landlocked developing countries.

Related documentation and links