The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

WASHINGTON, D.C. — On behalf of the organizer and co-hosts, it is a pleasure to extend an invitation to the Fifth Edition of The Athens Roundtable on AI and the Rule of Law. This year, the event will convene in Washington, D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and to explore governance mechanisms that could serve to reduce these risks. Speakers will include U.S. Senator Richard Blumenthal, U.S. Representative Yvette Clarke, Turing Award winner Yoshua Bengio, Tanzanian Parliamentarian Neema Lugangira, UN Secretary-General’s Envoy on Technology Amandeep Singh Gill, U.S. EEOC Commissioner Keith E. Sonderling, MEP Dragoș Tudorache, Tawana Petty, Elham Tabassi, Rumman Chowdhury, Vilas Dhar, Gary Marcus, Yi Zeng, and Irene Solaiman, among many other policymakers, technical experts, and thought leaders. Up-to-date details on the Fifth Edition, including the full speaker list, can be found at https://www.aiathens.org/dialogue/fifth-edition.

About The Athens Roundtable on AI and the Rule of Law

The Athens Roundtable is an international, multi-stakeholder forum on artificial intelligence and the rule of law, focusing on legal, judicial, and compliance systems. It was co-founded in 2019 by The Future Society under the aegis of the Presidency of the Hellenic Republic. Over four editions, it has convened more than 4,000 stakeholders from 120 countries—including representatives from the European Parliament, UNESCO, the OECD, the Council of Europe, IEEE, and the European Commission, as well as national governments, the private sector, and civil society.

AI governance now stands at the forefront of national policy and legislative initiatives. This year was marked by myriad U.S. congressional hearings and proposed bills, voluntary commitments by leading AI companies, and the Biden Administration’s sweeping executive order in late October. Abroad, the UK hosted the first AI Safety Summit, China introduced regulations specific to generative AI, and the EU is finalizing trilogue negotiations on the AI Act. As regulatory efforts evolve around the world, courts face pivotal cases that could transform technology, individual autonomy, our shared sense of truth, and the fabric of our democratic systems.

There is an urgent need to act while these policy windows are open and to implement effective governance mechanisms that ensure AI systems improve the welfare and well-being of people, contribute to positive, sustainable global economic activity, increase innovation and productivity, and help respond to key global challenges. It will be critical to reconcile technological advancement with robust governance frameworks that uphold the rule of law—now and in the future.

Fifth Edition Partners

In 2023, The Athens Roundtable will be organized by The Future Society and co-hosted by the Institute for International Science and Technology Policy (IISTP) at the Elliott School of International Affairs, the NIST-NSF Institute for Trustworthy AI in Law & Society (TRAILS), UNESCO, the OECD, the World Bank, IEEE, Homo Digitalis, the Center for AI and Digital Policy (CAIDP), Paul, Weiss LLP, Arnold & Porter, and the Patrick J. McGovern Foundation. It will proudly be held under the aegis of the Embassy of Greece in Washington D.C.

Public Event Details:

📅 Thursday, November 30th, and Friday, December 1st  

🕙 10:00am – 3:30pm EST (doors open at 9:00am)

📍 Jack Morton Auditorium (805 21st Street NW, Washington, D.C.)

Contact Information:

For media inquiries and/or queries about registration, please contact aiathens@thefuturesociety.org.

Related resources

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, which requirements should apply to each tier, and how to enforce them.

Giving Agency to the AI Act

Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We're pleased to share our memo: Giving Agency to the AI Act.

Response to NIST Generative AI Public Working Group Request for Resources

The Future Society (TFS) submitted a list of clauses to govern the development of general-purpose AI systems (GPAIS) to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG).

Response to U.S. OSTP Request for Information on National Priorities for AI

Our response put forward national priorities focused on security standards, measurement and evaluation frameworks, and an industry-wide code of conduct for GPAIS development.