Takeaways from the fourth edition of The Athens Roundtable

March 28, 2023

Main Insight

Held at the European Parliament in Brussels, the dialogue focused on the implementation and enforcement of AI governance mechanisms at this critical juncture in human history.

In 2022, major technological developments in machine learning prompted wide-ranging dialogues on the role AI will play in humanity’s future. Some of these dialogues, such as those concerning copyright over synthetic images and civil rights claims regarding the use of recommender systems, have escalated into legal disputes. Others have evolved into discussions of normative and voluntary governance mechanisms, from corporate best practices to technical standards.

To address the latest AI breakthroughs, their implications, and appropriate governance responses, The Athens Roundtable on AI and the Rule of Law convened over 1,100 decision-makers from 112 countries in a two-day hybrid conference. The focus of this year’s edition was on the implementation and enforcement of laws, regulations, standards, and policies across AI systems’ industrial value chain.

Held at the European Parliament in Brussels, the Roundtable aimed to foster a forward-looking conversation at this critical juncture in history—marked by rapid advancement in AI capabilities, the widespread diffusion of large language model products (such as ChatGPT), and the point at which general-purpose AI systems are beginning to be integrated into industrial and consumer technologies.

The discussions explored the impact of increasingly advanced AI capabilities on human rights and democratic values, as well as the development of legal and normative frameworks to mitigate the risks AI systems pose to human safety and fundamental rights. Participants exchanged insights on how these governance frameworks can complement one another to facilitate effective governance.

This year’s Roundtable was held under the auspices of H.E. the President of the Hellenic Republic Ms. Aikaterini Sakellaropoulou and was organized in partnership with the Patrick J. McGovern Foundation. The event was proudly co-hosted by prominent intergovernmental organizations, including the European Parliament, UNESCO, OECD, and the Council of Europe, as well as leading AI institutions and firms, including IEEE, Cravath, and Amazon Web Services. It was generously supported by the Jain Family Institute, Arnold & Porter, and Debevoise & Plimpton.

See below for a summary of the fourth edition’s takeaways:

  • Increase enforcement capabilities: Regulators must provide clarity regarding compliance practices, shape stakeholders’ behavior, and foster trust. Amid fast-paced technological growth, regulatory sandboxes should be leveraged to anticipate regulatory needs for emerging AI technologies.
  • Innovate governmental and intergovernmental institutions: Tackling the disruption in governance caused by large machine learning models will require a socio-technical approach coupled with methodologically robust frameworks. Governments urgently need to develop policy instruments, and industry players should demonstrate good faith by establishing guidelines and terms of service that address the uncertainties of advanced AI’s future impact. For both the private and public sectors, fostering responsible AI uptake will require overcoming challenges in capacity building and market fragmentation.
  • Adopt an industrial value chain approach: In a rapidly evolving economic landscape powered by AI, it is crucial to align value- and risk-sharing across actors throughout the value chain—from upstream research and development to downstream deployment. Addressing the misalignment in compliance incentives is necessary at both the micro level (for organizational success) and the macro level (for AI governance). Internal soft-law governance mechanisms, such as principles, guidelines, codes of conduct, and standards, should be leveraged by industry for preemptive compliance. Coalition-building is essential in developing such mechanisms.

Related resources

TFS supports the approval of the EU AI Act

The Future Society urges European Union Member States and the European Parliament to approve the EU AI Act.

Contribution to the G7 Hiroshima AI Process Code of Conduct

TFS contributed to the United States and European Union’s public consultations on the G7 Hiroshima Artificial Intelligence Process Guiding Principles and Code of Conduct.

Our 2023 Highlights

We recount our achievements and impact in 2023, spanning policy research, convenings, workshops, advocacy, community-building, report production, interviews, and expert group, conference, and workshop participation.

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington, D.C. on November 30 and December 1, 2023, to examine the risks associated with foundation models and generative AI, and to explore governance mechanisms that could serve to reduce these risks.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.