Consider supporting our work
April 10, 2023
Over the past few weeks, the web has been abuzz with AI news. In these turbulent times, civil society organizations like ours are doubling down on efforts to ensure that human rights and fundamental values are not only defended but actively promoted. If you find yourself wondering how you can make a difference, you might consider funding our work. Here are a handful of reasons why your financial support to our organization can make a significant positive difference:
1. AI governance is important. Artificial intelligence is transforming the human experience. Some experts believe that AI will radically improve the world; others fear it will open the door to harmful incidents at extreme scales. Many experts, in fact, believe both scenarios are possible. One thing is certain: our future will be the result of human decisions about how to design, test, and deploy technology. This, we believe, is the collective action problem of our time.
2. AI systems are already causing harm. Some AI systems, such as facial recognition and predictive analytics, have long-standing histories of discriminatory outcomes. In recent years, newer technologies, such as generative text and image systems, have been harnessed to propagate misinformation and orchestrate large-scale cyberattacks. Within only the past few months, frontier language models have exhibited new characteristics, such as deception and power-seeking, which could introduce new risks and harms, while the growing integration of AI systems in technology is likely to make adverse impacts more destructive and widespread.
3. AI governance is tractable. Humans are behind the wheel—changes in human behaviors can steer the course of technological development toward a safer and more equitable future. Responsible investments in AI metrology and safety-oriented research, for example, can help us prepare better-informed regulatory frameworks before more powerful systems are deployed. Public-private cooperation can provide the resources and perspectives needed to tackle the complex task of overseeing increasingly capable AI systems. Industry-wide responsible practices and legislative action can slow the unfettered deployment of unsafe systems. Governments can establish verifiable, enforceable agreements to prevent large-scale hazards. It will require a concerted effort of researchers, policymakers, and companies to ensure that AI systems of the future align with human values and the rule of law.
4. AI governance may be a front-loaded challenge. Laws, policies, and norms have compounding effects. While crafting AI governance will be an ongoing process, getting governance mechanisms in place early is critical. This will necessitate investing heavily in the ‘upfront’ work—conducting research, developing policies and regulations, establishing standards and best practices, and engaging with stakeholders—needed to lay foundations for more robust laws, policies, and norms in the long term, even when many difficult questions remain unanswered. Although this front-loaded effort is onerous, failing to prepare for the technology of the future could prove far more costly.
5. Our work can have a long-term, positive impact. The institutions, laws, and norms we establish now will shape the development and use of AI systems for years to come. A proactive effort could steer technology towards benefiting society broadly. On the other hand, if we fail to prepare adequately or act with foresight, we risk stumbling into a future in which powerful AI systems cause irreversible damage to our social fabric. How we research, discuss, and craft governance frameworks in this early stage may reverberate for decades. This makes AI governance impactful, even if it can feel like just the first steps.
6. We take a prioritized, portfolio-based approach. Our resources are limited, but the challenges and opportunities in AI governance are vast. This is why we prioritize a ‘portfolio’ of work that tackles the most significant and tractable problems. By methodically assessing the landscape of governance approaches, we aim to focus our efforts on the areas of greatest potential impact. For example, we may address understudied gaps or build on emerging best practices with multiplier effects. A portfolio approach also allows us to balance diverse work streams, such as researching and advocating for safer industry practices, convening policymakers, companies, and civil society to home in on fundamental governance challenges, and providing capacity-building and educational programs for judicial operators and the public at large.
7. We are independent. As an independent organization not beholden to companies or governments, we are uniquely well-positioned to work toward AI governance that serves the public interest. We aim to bring impartial analysis and judgment to governance discussions without profit motives or political pressures distorting our work. Our non-affiliated status enables us to function as a neutral convener and collaborator with external stakeholders, including companies, policymakers, and other actors working towards responsible stewardship of AI.
8. We face existing funding gaps. We have a number of potentially high-impact activities queued up, but due to funding constraints, we are currently unable to launch them. By funding our work, you could help us make progress on a range of open questions—and ultimately, help determine whether advanced AI is shaped by commercial interests alone or by a diversity of voices.
9. We are fighting an uphill battle. Corporations are pouring billions of dollars into AI, and their profit incentives can be misaligned with broader public interests. Independent nonprofit organizations are crucial players to ensure that governance solutions serve the public interest, and we rely upon philanthropic support to have an effect commensurate with other stakeholders’ influence.
10. We have a strong track record. Since our incorporation in 2016, TFS has been a major player in the production of foundational AI governance frameworks and policies. Here is a short list of some of our accomplishments:
- Leadership in developing the OECD AI Principles and Framework for the Classification of AI Systems, among other leading multilateral frameworks for the global governance of AI
- Convened over 5,000 regulators, policymakers, judicial operators, and AI developers through four editions of The Athens Roundtable on AI and the Rule of Law and the Global Governance of AI Roundtable at the World Government Summit
- Trained over 4,500 judicial operators through our Massive Online Open Course (MOOC) on AI and the Rule of Law
- Led advocacy for regulatory sandboxes to be introduced to the EU AI Act—adopted into the European Parliament’s Committee on Industry, Research and Energy draft opinion on the EU AI Act
- Completed 3 projects and helped commission 3 others with the Global Partnership on AI, and led the development of 3 national AI strategies in the Global South
If you would like to make a financial contribution, please visit our Donate page. If you would like to hear more about our plan for impact, please do not hesitate to reach out to us at donate@thefuturesociety.org.