AI systems present a governance challenge that is unprecedented in scale and complexity. Central to this challenge are the emergent properties of advanced AI systems: state-of-the-art, large-scale AI systems are beginning to demonstrate behaviors that even their developers did not design or predict. As systems become more capable, it will grow increasingly difficult to forecast how they might be misused or how they will interact with society.

Severe and systemic power imbalances magnify the challenge of governance. State-of-the-art AI systems carry substantial commercial advantages, but their development requires enormous sums of capital—attracting primarily private rather than public sector or academic investment. As a result, leading AI and technology companies are racing ahead in research, using their resources to influence policymaking, and hastily deploying untested technology in the interest of securing (or maintaining) market dominance. Unless stronger oversight and more robust laws are implemented, we face the possibility that unfettered development of AI capabilities could lead to incidents of large-scale discrimination or harm.

Work within this thematic area includes developing corporate governance frameworks, contributing to safety-enhancing benchmarking and metrological methods, and advising leading international organizations (e.g. OECD, UNESCO, GPAI) and standards-setting bodies (e.g. NIST, IEEE) whose remits concern the governance of AI. Our scope is global, reflecting the need to coordinate across borders. We place particular emphasis on corporate governance, reflecting the reality that companies are taking the lead in advancing AI, and that the manner in which they design, test, and deploy AI technologies can have significant consequences for society.

Beginning in 2023, we are focusing on risks present in the research and development of advanced AI systems, as we believe this is the stage of the AI lifecycle where efforts to reduce the risk of large-scale incidents are most tractable.

Endorsements

Related resources

Takeaways from the fourth edition of The Athens Roundtable


Held at the European Parliament in Brussels, the dialogue focused on implementation and enforcement of AI governance mechanisms at this...

New year. New TFS.


2022 was a year of transition and growth at The Future Society.

Progress with GPAI AI & Pandemic Response Subgroup at Paris Summit 2021


On November 12th, 2021, an update on our work on the 'AI-Powered Immediate Response to Pandemics' project was presented at...

Leveraging Responsible AI in the Banking Sector in Africa


Between May and August 2021, The Future Society collaborated with the bank Société Générale, its 16 branches in Africa, and...

Classifying AI systems used in COVID-19 response


To aid in our information-gathering efforts, starting today, we are inviting those associated with the development of AI systems used in...

Digital contact tracing against COVID-19: a governance framework to build trust


Published in Oxford’s International Data Privacy Law Journal, our article offers an ethical and legal framework to govern the development...