
Main Insight

We share highlights from the fifth edition of The Athens Roundtable on AI and the Rule of Law and present eight key recommendations that emerged from the discussions.

Towards Effective Governance of Foundation Models and Generative AI

March 19, 2024

Today, we are publishing “Towards Effective Governance of Foundation Models and Generative AI,” a report sharing highlights and key recommendations from the fifth edition of The Athens Roundtable on AI and the Rule of Law, which took place on November 30 and December 1, 2023, in Washington, D.C. The event brought together over 1,150 attendees in a two-day dialogue focused on governance mechanisms for foundation models and generative AI. Participants were encouraged to generate innovative “institutional solutions”—binding regulations, inclusive policy, standards-development processes, and robust enforcement mechanisms—to align the development and deployment of AI systems with the rule of law.

2023 marked a year in which AI governance climbed the agenda of policymakers and decision-makers worldwide. The release of technologies with increasingly general capabilities generated intrigue and hype and accelerated a concentration of power in big tech, triggering a societal-scale wake-up call. The growing threats that generative AI systems pose to democratic processes and human rights have prompted calls for regulation. We must collectively demand rigorous standards of safety, security, transparency, and oversight to ensure that these systems are developed and deployed responsibly.

In our report, we share highlights from the panels, fireside chats, and other dialogues at the fifth edition of The Athens Roundtable and present eight key recommendations that emerged from those discussions.

Key recommendations emerging from discussions

  1. Adopt comprehensive horizontal and vertical regulations: It is crucial that countries adopt legally binding requirements in the form of regulation to effectively steer the behavior of AI developers and deployers towards the public interest. Self-governance and soft governance have not delivered on their promises regarding responsible AI and safety, especially when it comes to foundation models. Sector-specific and umbrella regulations should be adopted in a complementary manner across jurisdictions to fill the existing gap in AI governance. This approach allows for robust governance across the entire AI value chain, from design and development to monitoring, including for general-purpose foundation models that do not fit into any particular sector and may not be covered by current or future sectoral regulations.
  2. Strengthen the resilience of democratic institutions: There is an urgent need to build resilience in democratic institutions against disruptions from technological developments, notably advanced general-purpose AI systems. Key elements in building resilience are capacity-building across government institutions, in the form of employee training and talent attraction and retention; institutional innovation to bring public-sector structures and processes up to date; enforcement authority spanning oversight of the development and deployment of AI systems; and effective public participation. The last of these is crucial to ensure that state institutions remain democratic, maintain citizens’ trust, and act in the public interest.
  3. Enhance coordination among civil society organizations (CSOs) to advance responsible AI policies: In a policy environment with heavy industry lobbying and many conflicting viewpoints, it will be crucial for CSOs to coordinate efforts in order to amplify promising policy recommendations. Key to this coordination will be ensuring that CSOs involved are demographically, culturally, and politically representative of the population at large, and that they consistently listen to the voices of the communities most impacted by emerging technologies.
  4. Invest in the development of methods to measure and evaluate foundation models’ capabilities, risks, and impacts: Measurement and evaluation methods play an indispensable role in understanding and monitoring technological capabilities, establishing safeguards to protect fundamental rights, and mitigating large-scale risks to society. However, current methods remain imperfect and will require sustained development in the years to come. Governments should invest in multi-disciplinary efforts to develop measurement and evaluation methods, such as benchmarks, capability evaluations, red-teaming tests, auditing techniques, risk assessments, and impact assessments.
  5. Include global majority representation and impacted stakeholders in standard-setting initiatives: Many standard-setting initiatives still lack input from civil society organizations that represent impacted communities. Policymakers and leaders of such initiatives must strive to understand and address the structural factors that have led to the under-representation or non-participation of certain groups in international standard-setting efforts. Potential mechanisms to promote participation include remunerating underrepresented groups and restructuring internal processes to engage them substantively, rather than offering merely formal representation.
  6. Develop and adopt liability frameworks for foundation models and generative AI: Liability frameworks must address the complex, evolving AI value chain to disincentivize potentially harmful behavior and mitigate risks. Companies that make foundation models available to downstream deployers across a range of domains benefit from a liability gap, where the causal chain between development choices and any harm caused by the model is currently overlooked. Regulation that establishes liability along the AI value chain is crucial to engender accountability and fairly distribute legal responsibility, avoiding liability being transferred exclusively onto deployers or users of AI systems.
  7. Develop and implement a set of regulatory mechanisms to operationalize safety by design in foundation models: Given the borderless character of the AI value chain, regulatory mechanisms must be interoperable across jurisdictions. Regulators should invest in regulatory sandbox programs to test and refine foundation models and corresponding regulatory safeguards before deployment.
  8. Create a special governance regime for dual-use foundation model release: Decisions regarding the release methods for dual-use foundation models should be scrutinized, as these models pose societal risks. Exhaustive testing before release would be in the public interest for models at the frontier. Further discussion among stakeholders should identify release methods that maximize the benefits of open science and innovation without sacrificing public safety.

The fifth edition in numbers

The fifth edition of The Athens Roundtable was organized by The Future Society and co-hosted by esteemed partners—the Institute for International Science and Technology Policy (IISTP), the NIST-NSF Institute for Trustworthy AI in Law & Society (TRAILS), UNESCO, OECD, the World Bank, IEEE, Homo Digitalis, the Center for AI and Digital Policy (CAIDP), Paul, Weiss LLP, Arnold & Porter, and the Patrick J. McGovern Foundation—and was proudly held under the aegis of the Greek Embassy to the United States.

Looking ahead

The Future Society will continue to facilitate dialogues and collaborations that aim to steer the development of AI in a manner that upholds fundamental rights and the rule of law.

Moving forward, we maintain one fundamental commitment with respect to The Athens Roundtable: to reexamine our current practices and assumptions, welcoming input and feedback from broad audiences, with particular attention to engaging underrepresented communities.

Related resources

Springtime Updates

A leadership transition, an Athens Roundtable report, and a few more big announcements!

Leadership transition announcement

Earlier this year, after an extensive search for The Future Society’s next Executive Director, the Board of Directors selected Nicolas Moës, former Director of EU AI Governance, to fill the role.

TFS feedback reflected in historic UN General Assembly resolution on AI

Earlier this year, we provided feedback on a draft UN General Assembly resolution on AI. Last week, an updated resolution that reflected our feedback, co-sponsored by more than 120 Member States, was adopted by consensus.

TFS joins U.S. NIST AI Safety Institute Consortium

TFS is collaborating with NIST in the AI Safety Institute Consortium to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote the development of trustworthy AI and its responsible use.

Launch event for the Journal of AI Law and Regulation (AIRe)

TFS is partnering with Lexxion Publisher to launch The Journal of AI Law and Regulation (AIRe), covering key legislative developments in AI governance and providing an impartial platform for discussing the role of law and regulation in managing the challenges and opportunities of AI's impact on society.

TFS provides input to UN initiatives on global AI governance

TFS contributed recommendations to two major United Nations initiatives on global governance of artificial intelligence, providing feedback on a draft UN General Assembly plenary resolution and responding to open calls for feedback on the UN AI Advisory Body's interim report.