Takeaways from the fourth edition of The Athens Roundtable
March 28, 2023
Main Insight: Held at the European Parliament in Brussels, the dialogue focused on the implementation and enforcement of AI governance mechanisms at this critical juncture in human history.
In 2022, major technological developments in machine learning prompted wide-ranging dialogues on the role AI will play in humanity’s future. Some of these dialogues, such as those concerning copyright over synthetic images and civil rights claims over the use of recommender systems, have escalated into legal disputes. Others have evolved into discussions of normative and voluntary governance mechanisms, from corporate best practices to technical standards.

To address the latest AI breakthroughs, their implications, and appropriate governance responses, The Athens Roundtable on AI and the Rule of Law convened over 1,100 decision-makers from 112 countries in a two-day hybrid conference. The focus of this year’s edition was on the implementation and enforcement of laws, regulations, standards, and policies across AI systems’ industrial value chain.
Held at the European Parliament in Brussels, the Roundtable aimed to foster a forward-looking conversation at this critical juncture in history, one marked by rapid advances in AI capabilities, the widespread diffusion of large language model products (such as ChatGPT), and the early integration of general-purpose AI systems into industrial and consumer technologies.
The discussions explored the impact of increasingly advanced AI capabilities on human rights and democratic values, as well as the development of legal and normative frameworks to mitigate the risks AI systems pose to human safety and fundamental rights. Participants also exchanged insights on how these governance frameworks can complement one another to enable effective governance.
This year’s Roundtable was held under the auspices of H.E. Ms. Aikaterini Sakellaropoulou, President of the Hellenic Republic, and was organized in partnership with the Patrick J. McGovern Foundation. The event was proudly co-hosted by prominent intergovernmental organizations, including the European Parliament, UNESCO, the OECD, and the Council of Europe, as well as leading AI institutions and firms, including IEEE, Cravath, and Amazon Web Services. It was generously supported by the Jain Family Institute, Arnold & Porter, and Debevoise & Plimpton.
See below for a summary of the fourth edition’s takeaways:
- Increase enforcement capabilities: Regulators must provide clarity regarding compliance practices, shape stakeholders’ behavior, and foster trust. Amid fast-paced technological growth, regulatory sandboxes should be leveraged to anticipate the regulatory needs of emerging AI technologies.
- Innovate governmental and intergovernmental institutions: Tackling the disruption in governance caused by large machine learning models will require a socio-technical approach coupled with methodologically robust frameworks. Governments urgently need to develop policy instruments, and industry players should demonstrate good faith by establishing guidelines and terms of service that address the uncertainties of advanced AI’s future impact. For both the private and public sectors, fostering responsible AI uptake will require overcoming challenges in capacity building and market fragmentation.
- Adopt an industrial value chain approach: In a rapidly evolving economic landscape powered by AI, it is crucial to align value- and risk-sharing across actors throughout the value chain, from upstream research and development to downstream deployment. Addressing the misalignment in compliance incentives is necessary at both the micro level (for organizational success) and the macro level (for AI governance). Internal soft-law governance mechanisms, such as principles, guidelines, codes of conduct, and standards, should be leveraged by industry for preemptive compliance, and coalition-building is essential to developing such mechanisms.