São Paulo
Brazil
- Miriam Stankovich
- Ghazi Mabrouk
- Angel Draev
Intermediate
Description
This face-to-face training course is designed for policymakers, regulators, civil society leaders, and professionals seeking to enhance their skills in AI governance. It provides a comprehensive introduction to AI concepts, ethics, cybersecurity, and global legal frameworks.
Through preparatory online sessions and in-person training, participants will engage in hands-on activities such as design thinking labs, simulations, journey mapping, and prototyping to explore the AI system lifecycle, identify governance gaps, test oversight tools, and co-develop national strategies. Scenario-based exercises will address sector-specific challenges, risk mapping, and cross-border regulatory negotiations. By the end of the course, participants will have collaboratively developed a five-year AI governance roadmap tailored to their institutional or national context, aligned with international best practices.
The course is co-organized by the International Telecommunication Union (ITU) and the Regional Center for Studies on the Development of the Information Society (Cetic.br), with financial support from the European Union’s Global Gateway initiative. Participation is free of charge for selected applicants and includes accommodation, meals, and organized activities in São Paulo. Travel and visa-related expenses, if applicable, are the responsibility of participants or their sponsoring organizations.
This course is designed for:
- Policymakers and government officials
- Regulators
- Private sector professionals
- Civil society representatives
The course is limited to 30 participants.
Eligible applicants are invited to apply if they meet the following criteria:
- Completed a BSc or BA (or equivalent) in Social Sciences (e.g., Economics, Public Policy) or a related field such as Engineering or Political Science.
- Demonstrated interest or involvement in AI policy, ethics, or digital transformation.
- Fluent in English.
- Willing to complete the mandatory online pre-training materials to strengthen their understanding of AI governance, ethics, legal frameworks, and cybersecurity. Completion of this phase is required to attend the in-person course.
Government officials and policymakers from developing countries are strongly encouraged to apply.
By the end of this course, participants will be able to:
- Build a foundational understanding of AI and its governance
- Identify and evaluate AI risks across the lifecycle
- Apply legal, ethical, and policy frameworks to AI oversight
- Design contextualized governance interventions
- Navigate global AI governance and promote policy coherence
- Translate learning into actionable roadmaps
This course uses a blended approach that combines online foundational learning with hands-on face-to-face sessions. It is designed to equip participants with the tools to govern AI responsibly within their institutional and national contexts. The methodology is based on design thinking, systems mapping, and experiential learning, with a strong focus on user-centered innovation, real-world problem solving, and collaborative policymaking.
Grading Matrix
Online Component (Pre-Training Phase)
- Participants must complete the mandatory pre-training phase and achieve a score of at least 70% of the total grade.
Face-to-Face Component
Assessment is based on active participation, collaboration, and final deliverables during the São Paulo training week.
- Group Work & Scenario Simulations: 60%
- AI Governance Roadmap & Final Reflection: 40%
A total grade of 70% or more is required to receive the ITU Academy certificate.
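As a purely illustrative sketch of how the face-to-face weights above might combine into a total grade (the component scores and the aggregation shown here are hypothetical and not specified in the course materials):

```python
# Illustrative only: combining the face-to-face component weights.
# Component scores below are hypothetical examples, not course data.

WEIGHTS = {"group_work_and_simulations": 0.60, "roadmap_and_reflection": 0.40}
PASS_MARK = 70  # minimum total grade for the ITU Academy certificate

def total_grade(scores: dict[str, float]) -> float:
    """Weighted sum of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

example = {"group_work_and_simulations": 75, "roadmap_and_reflection": 65}
grade = total_grade(example)  # 0.6 * 75 + 0.4 * 65 = 71.0
print(grade, "-> pass" if grade >= PASS_MARK else "-> below threshold")
```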
Online Session 1 (Date and time TBC): Foundations and Ethical Challenges of AI Governance
Learning outcomes:
- Define AI and distinguish between different types: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and agentic generative AI.
- Explain the AI lifecycle and value chain and identify common AI applications across sectors.
- Understand the need for AI governance and identify the roles of governments, the private sector, and civil society.
- Identify and analyze sources of bias in AI systems and the risks posed by lack of transparency (“black box” models).
- Explain ethical principles such as fairness, accountability, inclusivity, and human oversight in AI governance.
- Apply ethical reasoning to real-world cases of AI deployment and assess governance failures.
Online Session 2 (Date and time TBC): AI Risk, Regulation, and Global Governance
Learning outcomes:
- Identify key AI-related cybersecurity threats, including adversarial attacks, model inversion, and data poisoning.
- Explain the principles and importance of data governance and protection in AI systems.
- Describe national and regional AI-related laws.
- Compare risk-based and rights-based approaches to AI regulation.
- Understand the challenges of global AI governance and the roles of institutions such as the OECD, UNESCO, and the Council of Europe (CoE).
Day 1 (09 March 2026): Foundations of AI Governance
Learning outcomes:
- Recall and apply key concepts from Module 1
- Reinforce understanding of why AI governance matters
- Reflect on sector-specific governance risks
- Simulate multi-stakeholder perspectives
- Map the AI lifecycle and value chain
- Use design thinking tools
- Engage in peer review and collaborative feedback
- Articulate one concrete governance takeaway or action
Day 2 (10 March 2026): Addressing AI Bias, Opacity, and Risks
Learning outcomes:
- Revisit and reinforce core concepts from Module 2
- Identify and visualize how different types of bias and opacity emerge
- Reflect on real-world governance challenges, articulating how AI deployment in their sectors may lead to public trust issues and regulatory dilemmas
- Analyze security breaches and misuse scenarios
- Develop policy responses for sustainable AI innovation
- Explore trade-offs between innovation, security, and environmental responsibility
- Construct an AI Risk Radar
- Co-design rapid governance interventions
- Deepen systems thinking through design tools
- Commit to one specific action to strengthen AI governance
Day 3 (11 March 2026): Cybersecurity, Data Governance, and AI Safety
Learning outcomes:
- Reinforce key concepts from Module 3
- Identify and classify AI-specific cyber risks
- Assess the exposure of their sector
- Apply data governance and AI safety frameworks
- Trace how bias is introduced at different lifecycle stages
- Develop stakeholder-sensitive mitigation strategies
- Design a sector-specific regulatory sandbox
- Prototype governance frameworks
- Co-create and present an “AI Sandbox Playbook”
- Reflect on regulatory innovation through peer feedback
Day 4 (12 March 2026): From Principles to Practice – Building Trustworthy AI Governance
Learning outcomes:
- Revisit and reinforce key concepts from Modules 4 and 5
- Evaluate the ethical performance of AI systems
- Distinguish between risk-based and rights-based governance approaches
- Apply both the EU AI Act and international rights-based frameworks
- Prototype core components of a national AI governance framework
- Examine and simulate international regulatory divergence
- Collaborate across simulated national and regional contexts
- Utilize design thinking tools
- Articulate one actionable insight or commitment
Day 5 (13 March 2026): Wrap-Up and Action Planning – From Vision to Implementation
Learning outcomes:
- Reflect on personal and institutional takeaways
- Apply design thinking to policy development
- Collaborate effectively in multidisciplinary teams
- Communicate and justify AI governance models
- Evaluate and strengthen governance proposals
- Engage in iterative feedback and peer learning
- Commit to action and continued learning
- Celebrate completion and foster community
Financial support available
ITU will cover training content, accommodation for six nights, meals, and training activities.