Main Insight

We recount our achievements and impact in 2023, spanning policy research, convenings, advocacy, community-building, report production, media interviews, and participation in expert groups, conferences, and workshops.

Our 2023 Highlights

December 21, 2023

2023 was a year of seismic shifts in the AI governance landscape. As the year comes to a close, we look back on our impact and set our sights on the year ahead, which is bound to be consequential. In 2024, elections will affect nearly half of the global population, the EU AI Act is anticipated to pass into law, U.S. regulatory efforts will ramp up, and the UN High-Level Advisory Body on AI will produce its seminal report—and that’s just the tip of the iceberg.

We are currently fundraising and recruiting to meet ambitious goals for 2024. You can support us by donating, referring candidates to our job pages, or spreading the word about our work on social media.

Provided input to EU policymakers on governance mechanisms in the proposed AI Act

  • Through policy research, multi-stakeholder convenings, advisory work, and meetings with policymakers, TFS provided concrete recommendations on drafts of the EU AI Act—the world’s first comprehensive AI legislation—to protect fundamental human values and uphold human rights and safety. Policymakers welcomed many of our recommendations, including a special governance regime for general-purpose AI (GPAI) and foundation models; a central authority, the European AI Office, dedicated to enforcing the rules on GPAI models; and a dialogue mechanism between developers and authorities, with participation from civil society, to update Codes of Practice on an ongoing basis.

Engaged over 1,150 individuals on pressing AI governance challenges at the Fifth Edition of the Athens Roundtable on AI and the Rule of Law

  • TFS organized the Fifth Edition of The Athens Roundtable on AI and the Rule of Law in Washington, D.C., convening diverse AI actors in an action-oriented, two-day dialogue on pressing governance challenges posed by foundation models and generative AI across legal jurisdictions. Participants included three U.S. Senators, two Representatives, Members of the European Parliament (MEPs), and a Member of Parliament of Tanzania, as well as representatives of U.S. federal agencies, NIST, NSF, OECD, UNESCO, IEEE, the World Bank, Google DeepMind, Anthropic, GitHub, Hugging Face, civil society organizations (e.g., FLI, Lawyers Hub Kenya, CSET, CAIDP), and academia (e.g., Stanford HAI, Chinese Academy of Sciences, Institute for Protein Design). Recordings of the event are available.

Advised and engaged policymakers through 100+ conversations

  • TFS held bilateral meetings or email exchanges with policymakers and policy actors in the EU and US to provide input and collect feedback on policy matters. We also provided more than ten oral briefings to development organizations working in the Global South.

Produced 20+ publications

  • TFS produced reports, briefings, and memos, and responded to requests for information (RFIs) by public institutions. This year, many of our publications outline recommendations for setting guardrails on the development of general-purpose AI and foundation models, and were coupled with outreach campaigns with policymakers, AI developers, and deployers. Most of our publications can be found on our Resources page.

Participated in 20+ expert working groups

  • TFS staff participated in expert working groups, including those of OECD.AI, UNESCO, Global Partnership on AI, Partnership on AI, World Economic Forum, Council of Europe, IEEE, NIST, CEN-CENELEC, UC Berkeley, and The Athens Roundtable’s Working and Reflection Groups, among others.

Advocated for policy priorities in 25+ conferences, workshops, and community coordination activities

  • TFS provided thought leadership at major AI policy conferences and workshops in North America, Europe, Asia, and Africa. Additionally, we increased coordination with other civil society organizations working on AI policy, and we look forward to collaborating further in 2024.

Raised public awareness through 20+ media interviews

  • TFS raised awareness about our research and policy positions in interviews with TIME, the Wall Street Journal, The Economist, Wired, Le Monde, Al Jazeera, Die Welt, Radio France International, Global News Canada, TRT World, Tech Monitor, ZEIT Digital, Frankfurter Allgemeine Zeitung, Expressions, The Innovator, and various podcasts.

Related resources

TFS supports the approval of the EU AI Act

The Future Society urges European Union Member States and the European Parliament to approve the EU AI Act.

Contribution to the G7 Hiroshima AI Process Code of Conduct

TFS contributed to the United States and European Union’s public consultations on the G7 Hiroshima Artificial Intelligence Process Guiding Principles and Code of Conduct.

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and explore governance mechanisms that could serve to reduce these risks.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, which requirements should apply to each tier, and how to enforce them.