Contribution to the G7 Hiroshima AI Process Code of Conduct

January 23, 2024

In October 2023, TFS contributed to the United States and European Union’s public consultations on the G7 Hiroshima Artificial Intelligence Process Guiding Principles and Code of Conduct, which is intended to inform the actions of organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.

The October 30th, 2023 version of the G7 Hiroshima Artificial Intelligence Process Code of Conduct incorporated changes reflecting TFS’s recommendations to include terminology related to systemic risks, risk prevention, and interoperability across international technical standards and frameworks. An impact assessment of our input is available upon request by contacting info@thefuturesociety.org.

Our recommendations underscored the need for a comprehensive approach to managing the risks presented by the development and deployment of AI systems. This includes emphasizing the specification and mitigation of systemic risks and major accidents, implementing stringent know-your-customer processes, and prioritizing pre-registration and notification protocols for advanced AI systems. The recommendations stress the significance of proactive prevention and monitoring of risks, especially with regard to open-source AI models.

We further advocated for the establishment of international technical standards and protocols to ensure consistent measurements of AI system capabilities and trustworthiness. To enhance accountability and transparency, the proposed modifications emphasize the need for clear roles and responsibilities for quality management, external audits, and compliance with the established principles. We proposed regular reporting on compliance, along with the facilitation of external reviews by authorized entities.

The G7 Leaders’ Statement on the Hiroshima AI Process can be accessed here, and the draft Code of Conduct can be found here (U.S. Department of Commerce update forthcoming).

Related resources

TFS supports the approval of the EU AI Act

The Future Society urges European Union Member States and the European Parliament to approve the EU AI Act.

Our 2023 Highlights

We recount our achievements and impact in 2023, spanning policy research, convenings, workshops, advocacy, community-building, report production, interviews, and expert group, conference, and workshop participation.

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and explore governance mechanisms that could serve to reduce these risks.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, which requirements should be imposed at each tier, and how to enforce them.