Registration
Coming soon
Event dates
-
Location
Global or multi-regional, Delhi, India
Training topics
Artificial intelligence
Training type
Face to Face
Languages
English
Coordinators
  • Angel Draev
  • Akanksha Sharma
  • Ghazi Mabrouk
Course level
Intermediate
Duration
42.5 hours
Event email contact
Ituacademy@itu.int
Price
$0.00

Description

This blended training course is designed for policymakers, regulators, civil society leaders, and professionals worldwide who seek to strengthen their skills in AI governance. It provides a comprehensive introduction to AI concepts, ethics, cybersecurity, and global legal frameworks.

Through preparatory self-paced sessions and in-person training, participants will engage in hands-on activities such as design thinking labs, simulations, journey mapping, and prototyping to explore the AI system lifecycle, identify governance gaps, test oversight tools, and co-develop national strategies. Scenario-based exercises will address sector-specific challenges, risk mapping, and cross-border regulatory negotiations. By the end of the course, participants will have collaboratively developed a five-year AI governance roadmap tailored to their institutional or national context, aligned with international best practices.

The course is co-organized by the International Telecommunication Union (ITU) with funding from the European Union’s Global Gateway initiative and the Ministry of Internal Affairs and Communications (MIC), Japan. Participation is free of charge for selected applicants and includes accommodation, meals, and organized activities in Delhi, India. Travel and visa-related expenses, if applicable, are the responsibility of participants or their sponsoring organizations.

This course is designed for:

  • Policymakers and government officials
  • Regulators
  • Private sector professionals
  • Civil society representatives

The course is limited to 30 participants.

Eligible applicants are invited to apply if they meet the following criteria:

  • Completed a BSc or BA (or equivalent) in Social Sciences (e.g., Economics, Public Policy) or a related field such as Engineering or Political Science.
  • Demonstrated interest or involvement in AI policy, ethics, or digital transformation.
  • Fluent in English.
  • Willing to complete the mandatory online pre-training materials to strengthen their understanding of AI governance, ethics, legal frameworks, and cybersecurity. Completion of this phase is required to attend the in-person course.

Government officials and policymakers from developing countries are highly encouraged to apply.

By the end of this course, participants will be able to:

  1. Build a Foundational Understanding of AI and Its Governance
  2. Identify and Evaluate AI Risks Across the Lifecycle
  3. Apply Legal, Ethical, and Policy Frameworks to AI Oversight
  4. Design Contextualized Governance Interventions
  5. Navigate Global AI Governance and Promote Policy Coherence
  6. Translate Learning into Actionable Roadmaps

This blended course combines online learning and five days of in-person training in Delhi, India, to build practical AI governance skills.

Online (Self-paced – 5 sessions):
Covers AI fundamentals, ethics, global governance models, cybersecurity, and data risks. Includes quizzes and applied exercises (mandatory).

In-person (5 days):
Hands-on simulations and role-play on real policy challenges (healthcare AI, bias, sustainability, cross-border deployment). Participants use practical tools to design, test, and prototype governance solutions.

Peer Learning & Capstone:
Structured feedback sessions and a final group project to develop a five-year AI governance roadmap tailored to participants’ institutions or countries.

Self-paced sessions (Pre-Training Phase)

  • Participants must complete the mandatory pre-training phase and achieve a minimum score of 80%.

Face-to-face component

Assessment is based on active participation, collaboration, and final deliverables during the Delhi training week.

  • Group Work & Scenario Simulations: 60%
  • AI Governance Roadmap & Final Reflection: 40%

A total grade of 70% or more is required to receive the ITU Academy certificate. 

Self-Paced Course: Foundations of AI Governance and Responsible AI

Session 1: Introduction to AI and Why Governance Matters

Session 2: Bias, Transparency, and Accountability in AI Systems

Session 3: AI and Data Governance

Session 4: AI Regulation and Global Governance Approaches

Session 5: Ethical Principles in AI Governance


Face-to-face sessions

Day 1: Foundations of AI Governance

By the end of the day, participants will be able to:

  • Reflect on sector-specific governance risks by identifying how AI impacts various domains and stakeholder groups
  • Simulate multi-stakeholder perspectives in real-world AI governance scenarios
  • Map the AI lifecycle and value chain
  • Use design thinking tools
  • Articulate one concrete governance takeaway or action

Day 2: Addressing AI Bias, Opacity, and Risks

By the end of the day, participants will be able to:

  • Identify and visualize how different types of bias and opacity emerge at various stages of the AI lifecycle
  • Reflect on real-world governance challenges
  • Analyze security breaches and misuse scenarios
  • Develop policy responses for sustainable AI innovation
  • Explore trade-offs between innovation, security, and environmental responsibility
  • Construct an AI Risk Radar for their national or institutional context, identifying and prioritizing risks (bias, transparency, security, sustainability) and linking these to sector-specific vulnerabilities.
  • Co-design rapid governance interventions
  • Deepen systems thinking through design tools

Day 3: Cybersecurity, Data Governance, AI Standards and Innovative AI Governance Mechanisms

By the end of the day, participants will be able to:

  • Identify and classify AI-specific cyber risks
  • Assess the exposure of their sector to various AI-driven cybersecurity threats
  • Apply data governance and AI safety frameworks to resolve ethical dilemmas
  • Trace how bias is introduced at different lifecycle stages
  • Develop stakeholder-sensitive mitigation strategies
  • Design a sector-specific regulatory sandbox, building on international best practices (e.g. UK, Singapore, Canada) and guided by tools from the World Bank’s AI governance resources.
  • Co-create and present an “AI Sandbox Playbook”, articulating entry/exit criteria, scope, safeguards, and transparency mechanisms adapted to their local context.
  • Reflect on regulatory innovation through peer feedback

Day 4: From Principles to Practice – Building Trustworthy AI Governance

By the end of the day, participants will be able to:

  • Evaluate the ethical performance of AI systems in their sector by mapping real-world examples against recognized frameworks (e.g. UNESCO, OECD, HUDERIA, FRAIA)
  • Distinguish between risk-based and rights-based governance approaches
  • Apply both the EU AI Act and international rights-based frameworks to evaluate oversight needs for high-risk AI systems
  • Prototype core components of a national AI governance framework
  • Examine and simulate international regulatory divergence
  • Collaborate across simulated national and regional contexts (e.g., EU, USA/NIST, Singapore, Brazil, African Union) to build consensus and draft a joint declaration on trustworthy AI for a global use case
  • Utilize design thinking tools such as governance gap framing, policy prototyping, and stakeholder convergence mapping to creatively respond to the challenges of regulating AI
  • Articulate one actionable insight or commitment to advance AI governance within their agency or national context

Day 5: Wrap-Up and Action Planning – From Vision to Implementation

By the end of this session, participants will be able to:

  • Apply design thinking to policy development: Use user-centered, iterative methods to co-create a 5-year AI governance roadmap that integrates risk and rights-based approaches, ethical safeguards, stakeholder engagement, and global alignment (e.g., with OECD, UNESCO, NIST, EU AI Act)
  • Communicate and justify AI governance models: Present AI governance roadmaps clearly and persuasively, articulating the rationale for selected oversight tools, stakeholder strategies, and alignment with ethical and legal standards
  • Evaluate and strengthen governance proposals: Critically assess peer-developed roadmaps using structured criteria (e.g., clarity, feasibility, innovation, risk mitigation), and provide constructive feedback to improve design and implementation plans

Tutors

Nikola Neftenov
Miriam Stankovich
AI advisor

Registration information

Unless specified otherwise, all ITU Academy training courses are open to all interested professionals, irrespective of race, ethnicity, age, gender, religion, economic status, and other backgrounds. We strongly encourage registrations from female participants and from participants in developing countries, including least developed countries, small island developing states, and landlocked developing countries.

Registration for this course is not yet open.