

OECD AI Group of Experts (AIGO)

February 10, 2019

As part of the OECD’s AI Group of Experts (AIGO), The Future Society participated in developing the OECD AI Principles, now adopted by the G20. The expert group was convened to advise the OECD on developing ethical norms for the use of AI by governments, businesses, labor, and citizens. The Future Society continues to support the OECD AI policy team as a member of its Network of Experts on AI (‘ONE AI’).

The Organisation for Economic Co-operation and Development (OECD) has created an expert group (AIGO) to provide guidance in scoping principles for artificial intelligence in society. The formation of the group is the latest step in the organization’s work on artificial intelligence to help governments, business, labor and the public maximize the benefits of AI and minimize its risks. The Future Society has been closely involved in this work, represented by Nicolas Miailhe and Cyrus Hodes.

The group is made up of experts from OECD member countries as well as from think tanks, business, civil society, labor associations, and other international organizations. In 2016, the OECD’s Committee on Digital Economy Policy began discussing the need for a Recommendation on AI principles by the OECD Council. The Committee decided in May 2018 to establish the expert group to scope AI principles that could be adopted in 2019. See the list of expert group members.

“In the best traditions of the OECD, we are reaching out to a wide group of experts and thinkers to assist us in developing principles that will keep our countries competitive, guide the ethical progress of this fast-moving technology and share our knowledge with the broader world,” said Wonki Min, Vice Minister of Science and ICT of Korea, who will head the expert group as chair of the OECD’s Committee on Digital Economy Policy.

Developing AI principles is a natural outgrowth of the OECD’s work over the past two years on the multidisciplinary “Going Digital” and “Next Production Revolution” projects, which are examining the broad impact of new technologies on society. Like those two projects, the work on AI principles draws on expertise from committees and directorates across the OECD, under the coordination of the OECD Directorate for Science, Technology and Innovation.

Along with AI principles, the OECD is setting up an OECD Policy Observatory on AI. The Observatory will bring together committees from across the OECD as well as a range of other stakeholders. The goal will be to identify promising AI applications, map their economic and social impact and share the information as widely as possible.

Nineteen countries around the world are represented in the AI expert group. They are joined by representatives from the European Commission, business and labor groups, and outside organizations such as the Institute of Electrical and Electronics Engineers (IEEE), the Massachusetts Institute of Technology, Harvard’s Berkman Klein Center, and the French Institute for Research in Computer Science and Automation (INRIA).

The expert group grew out of concepts debated at OECD events in 2016 and 2017, notably the conference titled “AI: Intelligent Machines, Smart Policies” of October 2017. Over two days of discussions, a consensus emerged that the far-reaching changes driven by AI systems offer dynamic opportunities for improving the public, economic and social sectors. More than 300 participants and 50 speakers focused on ways AI can make business more productive, improve government efficiency and address many of the world’s most pressing problems.

At the conference and in subsequent discussions, attention focused on how governments and tech companies can best build public trust in AI systems, which is seen as essential to taking full advantage of AI’s potential.

The diversity of representation on the expert group reflects the concept that AI’s global impact requires a global consensus. Many international organizations, from the European Union and the United Nations to the G7 and G20, are debating aspects of AI’s impact on work and the economy. National governments, particularly among the 36 OECD member countries, are developing their own strategies. Tech companies and labor groups are also working toward common positions on AI’s impact on future jobs, on ethical guidelines, and on reducing bias.

In keeping with the OECD’s commitment to cooperation with developing countries, the experts are also expected to identify ways to ensure that the benefits of AI are shared as widely as possible and that global standards are developed to ensure trust in AI.

Cyrus Hodes, Executive Director of the AI Initiative at The Future Society and Adviser on AI to the United Arab Emirates, said he expects the group to take full advantage of the OECD’s influence. “As an international body researching and examining societies’ economic challenges to propose practical recommendations to policymakers, the OECD is uniquely positioned to bring together its set of in-house, member and partner countries’ experts to tackle the impact of fast-moving emerging technologies, starting with the rise of artificial intelligence,” said Hodes. “This provides an invaluable tool for policymakers to adapt, embrace the upsides while mitigating risks of such powerful systems.”

At the same time, concerns have been expressed about the need for transparency and accountability in developing and deploying AI systems. “Artificial intelligence must put people and planet first,” said Christina Colclough, director of digitalization and trade at UNI Global Union, which represents workers. “Ethical AI discussions on a global scale are essential to guarantee a widespread, implemented and transparent solution.”

Achieving the right balance, weighing the benefits of AI against its risks, requires the kind of open-minded debate that is at the heart of the expert group’s task and of the OECD’s broader effort to develop and share principles for AI in society.
