Joint Statement on AI Safety and Openness
October 31, 2023
We are at a critical juncture in AI governance. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.
Yes, openly available models come with risks and vulnerabilities — AI models can be abused by malicious actors or deployed by ill-equipped developers. However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.
Further, history shows us that rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation. Open models can inform an open debate and improve policymaking. If our objectives are safety, security, and accountability, then openness and transparency are essential ingredients to get us there.
We are in the midst of a dynamic debate about what ‘open’ means in the AI era. This important debate should not slow us down. Rather, it should speed us up, encouraging us to experiment, learn, and develop new ways to leverage openness in a race to AI safety.
We need to invest in a spectrum of approaches — from open source to open science — that can serve as the bedrock for:
- Accelerating the understanding of AI capabilities, risks, and harms by enabling independent research, collaboration, and knowledge sharing.
- Increasing public scrutiny and accountability by helping regulators adopt tools to monitor large-scale AI systems.
- Lowering the barriers to entry for new players focused on creating responsible AI.
As signatories to this letter, we are a diverse group — scientists, policymakers, engineers, activists, entrepreneurs, educators, and journalists. We represent different, and sometimes divergent, perspectives, including different views on how open source AI should be managed and released. However, there is one thing we strongly agree on: open, responsible, and transparent approaches will be critical to keeping us safe and secure in the AI era.
When it comes to AI safety and security, openness is an antidote, not a poison.