Registration: Opened
Event dates: –
Location: Global or multi-regional
Training topics: Artificial intelligence
Training type: Online instructor-led
Languages: English
Tutors:
  • Miriam Stankovich
Coordinators:
  • Angel Draev
  • Ghazi Mabrouk
Course level: Intermediate
Duration: 20 hours
Event email contact: ituacademy@itu.int
Price: $0.00
Event organizer(s): ITU
Supported by: Global Gateway (European Union)

Description

This course offers a comprehensive exploration of the key principles, frameworks, and emerging risks shaping the future of responsible artificial intelligence (AI) governance. Designed for policymakers, regulators, and public officials, it provides practical tools to navigate the legal, ethical, and technical dimensions of AI while promoting trust, protecting rights, and aligning with global best practices.

By the end of the training, participants will be equipped to anticipate and mitigate AI-related risks, foster public trust, and develop inclusive, ethical governance strategies that reflect international norms. The course also supports a community of practice that connects local action with global momentum, enabling participants to play a leading role in shaping the future of AI governance.

With the generous support of the European Union’s Global Gateway initiative, participation in this training is offered free of charge to selected applicants.

This course is designed for:

  • Policymakers and government officials
  • Regulators
  • Private sector professionals
  • Civil society representatives

Applicants are invited to apply if they meet the following requirements:

  • Hold a Bachelor’s degree (BSc, BA, or equivalent) in Social Sciences or a related field.
  • Have a minimum of three years of professional experience in areas such as governance, technology policy, AI development, data protection, or cybersecurity.
  • Demonstrate a strong interest or involvement in AI policy, ethics, or digital transformation.
  • Are fluent in English.

Government officials and policymakers from developing countries are especially encouraged to apply.

By the end of this course, participants will be able to:

  • Build a foundational understanding of AI and its governance
  • Identify and evaluate AI risks across the lifecycle
  • Apply legal, ethical, and policy frameworks to AI oversight
  • Design contextualized governance interventions
  • Navigate global AI governance and promote policy coherence
  • Translate learning into actionable roadmaps

This course will include: 

  • Virtual sessions with expert instructors, featuring presentations, real-time Q&A, and interactive discussions with real-world case examples and policy insights
  • Interactive group work
  • Debates and role-playing

Session Schedule:

Week 1:

  • Live session 1: Tuesday, 21 October 2025 | From 14:00 to 16:00 CEST
  • Live session 2: Thursday, 23 October 2025 | From 14:00 to 16:00 CEST

Week 2:

  • Live session 3: Tuesday, 28 October 2025 | From 14:00 to 16:00 CET
  • Live session 4: Thursday, 30 October 2025 | From 14:00 to 16:00 CET

Week 3: 

  • Live session 5: Tuesday, 4 November 2025 | From 14:00 to 16:00 CET
  • Live session 6: Thursday, 6 November 2025 | From 14:00 to 16:00 CET

Week 4: 

  • Live session 7: Tuesday, 11 November 2025 | From 14:00 to 16:00 CET
  • Live session 8: Thursday, 13 November 2025 | From 14:00 to 16:00 CET

Week 5: 

  • Live session 9: Tuesday, 18 November 2025 | From 14:00 to 16:00 CET
  • Live session 10: Thursday, 20 November 2025 | From 14:00 to 16:00 CET

For this course, a mix of theoretical and practical assessments will evaluate participants' understanding of AI governance principles.

Grading Breakdown

  • Assignments & Projects – 40%
    • Includes group work and participation in instructor-led discussions.
  • Knowledge Assessments / Quizzes – 60%
    • Short quizzes at the end of each module to reinforce learning.

Passing Criteria

  • Minimum total score of 70% is required to earn the ITU certificate.
  • Completion of all assignments and quizzes is mandatory.
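
For illustration only, the final grade can be read as a weighted average of the two components. The short Python sketch below applies the 40/60 weights and the 70% threshold described above; the component scores, function name, and variable names in the example are hypothetical and not part of the course description.

# A minimal sketch of the grading scheme above. The component scores used
# in the example are hypothetical; only the 40/60 weights and the 70% pass
# threshold come from the course description.

ASSIGNMENTS_WEIGHT = 0.40   # Assignments & Projects
QUIZZES_WEIGHT = 0.60       # Knowledge Assessments / Quizzes
PASS_THRESHOLD = 70.0       # minimum total score for the ITU certificate

def total_score(assignments_pct: float, quizzes_pct: float) -> float:
    """Weighted total on a 0-100 scale."""
    return ASSIGNMENTS_WEIGHT * assignments_pct + QUIZZES_WEIGHT * quizzes_pct

# Hypothetical example: 65% on assignments and 80% on quizzes gives 74.0, which passes.
score = total_score(65, 80)
print(f"Total: {score:.1f}% ({'pass' if score >= PASS_THRESHOLD else 'below threshold'})")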

Week 1: Module 1 – Foundations of AI Governance

Learning Outcomes:

  • Define artificial intelligence and distinguish between types, such as narrow AI and general AI
  • Understand key machine learning techniques: supervised, unsupervised, and reinforcement learning
  • Explain advanced AI methods like natural language processing and deep learning
  • Describe the full AI lifecycle from design to deployment, including ethical and accountability challenges
  • Identify major applications of AI across sectors such as healthcare, finance, and public services
  • Compare governance challenges in different sectors and propose appropriate frameworks
  • Apply AI governance concepts to real-world case studies using multi-stakeholder perspectives

Week 2: Module 2 – Addressing Bias, Opacity & Risk in AI

Learning Outcomes:

  • Explain the ethical and societal risks posed by AI bias, lack of transparency, and unintended consequences
  • Differentiate types of bias (data, algorithmic, and systemic) and explore mitigation strategies
  • Understand the importance of AI explainability and explore transparency techniques
  • Assess broader risks including discrimination, misinformation, job loss, and environmental impact
  • Explore governance strategies tailored to sectors such as healthcare and finance
  • Engage in role-playing and simulations to manage AI risks and develop balanced policies

Week 3: Module 3 – Securing AI: Cybersecurity, Data Governance & Safety

Learning Outcomes:

  • Identify AI-specific cybersecurity threats like adversarial attacks and data manipulation
  • Understand principles of data governance including privacy, security, and transparency
  • Evaluate the privacy implications of AI systems that collect and process personal data
  • Review global data protection regulations and risk management frameworks (e.g., GDPR, the NIST Risk Management Framework)
  • Explore international standards and technologies for safe and privacy-preserving AI
  • Analyze public-private partnerships in AI safety and compare governance models across countries

Week 4: Module 4 – National, Regional & Global Initiatives in AI Governance

Learning Outcomes:

  • Compare national and international AI governance frameworks (e.g., EU AI Act, Singapore, U.S., Council of Europe)
  • Contrast risk-based and rights-based approaches to regulating AI
  • Understand the challenges of harmonizing regulations across borders and sectors
  • Examine new governance tools like regulatory sandboxes and multistakeholder forums
  • Evaluate global efforts such as UNESCO’s Recommendation on the Ethics of AI, the OECD AI Principles, and the African Union Strategy
  • Design strategies to strengthen governance through audits, human oversight, and stakeholder collaboration

Week 5: Module 5 – Ethical Principles & Governance Frameworks

Learning Outcomes:

  • Apply ethical principles like fairness, accountability, transparency, and human oversight in AI governance
  • Critically assess and improve existing AI governance models and standards
  • Implement tools for responsible AI: audits, certifications, and explainability measures
  • Develop inclusive governance strategies that represent diverse and marginalized communities
  • Promote sustainable AI by addressing environmental, economic, and social impacts
  • Use appropriate human oversight models based on the level of AI risk
  • Create a 5-year strategic roadmap for ethical and effective AI governance

Registration information

Unless specified otherwise, all ITU Academy training courses are open to all interested professionals, irrespective of race, ethnicity, age, gender, religion, economic status, or other background. We strongly encourage registrations from female participants and from participants in developing countries, including least developed countries, small island developing states, and landlocked developing countries.
