10 July 2025, 09:00 – 13:00 – Campus Biotech Innovation Park, Av. de Sécheron 15, 1202 Genève, Switzerland (CH)
To join online: Link to Workshop
Context, Scope and Objectives
This workshop has been conceived and organized by a group of people from the University of Geneva, the University of Zurich and MPAI (https://mpai.community/), in collaboration with members of the Institute for AI International Governance (I-AIIG) of Tsinghua University. The workshop is open to invited scholars and experts in AI-related technical, social and legal research.
During the past few years, the theme of Global AI Governance has gained broad public and policy attention, prompting a wide review of the impacts of the current AI development path. AI's underlying technologies and AI systems' performance continue to develop at an exponential pace, offering significant opportunities as well as substantial risks. Here we focus on the risks, particularly extreme or “existential” risks.
Following the arguments and calls made by the world’s most eminent AI experts, we share key Concerns regarding the development of Advanced AI, namely: Safety, Alignment, Transparency, Trustworthiness and Fairness.
Many efforts have already been made to address some of these Concerns, and we fully support them. However, there is a substantial gap between the efforts made to improve AI systems’ performance (the “AI race”) and those made to strengthen AI safety and address the other Concerns. This gap stems from a variety of reasons, such as the limited focus and investment by companies, IPR issues, underestimation of the risks by Governments, the difficulty of managing Generative AI through regulatory frameworks, and the geopolitical context.
In addition, to our knowledge, insufficient attention has been paid to integrating standardization in the AI safety field with the research and development cycle of AI systems.
While we have some initial ideas on the necessary starting point to address this gap, the main objective of the workshop is to discuss these ideas openly, verify whether there is consensus on them and, if so, find ways to address the risks that the rapid development of AI poses to humanity. This objective is in no way dismissive of the efforts undertaken by a plurality of actors in this domain, efforts that we in general fully support. We believe that our proposal is complementary and should help strengthen and expand the global action required to address the Concerns.
Based on the discussions at the workshop, we hope to draft an outcome document or paper for publication; this explains the format of the workshop set out below.
Workshop Agenda
09:00 – 09:15 | Welcome and registration of participants
09:15 – 09:30 | Introduction by the organizers: Major Concerns and existing AI Governance Initiatives
09:30 – 10:30 | Open discussion session I: How to address the Concerns in an effective, timely and proactive way?
10:30 – 11:00 | Coffee break
11:00 – 12:00 | Open discussion session II: Continued discussion; participants are invited to express their views on the points outlined above.
12:00 – 12:30 | Review of the discussion outcome: To be proposed by the Moderator and Secretary for review and discussion by participants.
12:30 – 13:00 | Conclusions and indication of the way forward: To be presented by the Moderator and Secretary for approval.
13:00 – 14:00 | Lunch