
Securing Our Digital Future: A CERN for Open Source large-scale AI Research and its Safety

The petition is directed to
EU, USA, UK, Canada, Australia
3,627 supporters

Collection finished

  1. Launched March 2023
  2. Collection finished
  3. Submission on 01 Jun 2023
  4. Dialog with recipient
  5. Decision

Join us in our urgent mission to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. This monumental initiative will secure our technological independence, empower global innovation, and ensure safety, while safeguarding our democratic principles for generations to come.

In an era of unparalleled technological advancements, humanity stands on the precipice of a new epoch characterized by the profound influence of artificial intelligence (AI) and its foundational models, such as GPT-4. The potential applications of these technologies are vast, spanning scientific research, education, governance, and small and medium-sized enterprises. To harness their full potential as tools for societal betterment, it is vital to democratize research on and access to them, lest we face severe repercussions for our collective future.

Increasingly, we are witnessing the emergence of a system wherein educational institutions, government agencies, and entire nations become dependent on a select few large corporations that operate with little transparency or public accountability. To secure our society's technological independence, foster innovation, and safeguard the democratic principles that underpin our way of life, we must act now.

We call upon the global community, particularly the European Union, the United States, the United Kingdom, Canada, and Australia, to collaborate on a monumental initiative: the establishment of an international, publicly funded, open-source supercomputing research facility. This facility, analogous to the CERN project in scale and impact, should house a diverse array of machines equipped with at least 100,000 high-performance state-of-the-art accelerators (GPUs or ASICs), operated by experts from the machine learning and supercomputing research community and overseen by democratically elected institutions in the participating nations.

This ambitious endeavor will provide a platform for researchers and institutions worldwide to access and refine advanced AI models, such as GPT-4, harnessing their capabilities for the greater good. By making these models open source and incorporating multimodal data (audio, video, text, and program code), we can significantly enrich academic research, enhance transparency, and ensure data security. Furthermore, granting researchers access to the underlying training data will enable them to understand precisely what these models learn and how they function, something that is impossible when access is restricted to an API.

Additionally, the open-source nature of this project will promote safety and security research, allowing potential risks to be identified and addressed more rapidly and transparently by the academic community and open-source enthusiasts. This is a vital step in ensuring the safety and reliability of AI technologies as they become increasingly integrated into our lives.

The proposed facility should feature AI Safety research labs with well-defined security levels, akin to those used in biological research labs, where high-risk developments can be conducted by internationally renowned experts in the field, backed by regulations from democratic institutions. The results of such safety research should be transparent and available for the research community and society at large. These AI Safety research labs should be capable of designing timely countermeasures by studying developments that, according to broad scientific consensus, would predictably have a significant negative impact on our societies.

Economically, this initiative will bring substantial benefits to small and medium-sized companies worldwide. By providing access to large foundation models, it will allow businesses to fine-tune these models for their specific use cases while retaining full control over the weights and data. This approach will also appeal to government institutions seeking transparency and control over AI applications in their operations.

Reason

The importance of this endeavor cannot be overstated. We must act swiftly to secure the independence of academia and government institutions from the technological monopoly of large corporations such as Microsoft, OpenAI, and Google. Technologies like GPT-4 are too powerful and significant to be exclusively controlled by a select few.

In a world where machine learning expertise and the resources for AI development are increasingly concentrated in large corporations, smaller enterprises, academic institutions, municipal administrations, social organizations, and nation-states must assert their autonomy. They cannot rely solely on the benevolence of these powerful entities, which are often driven by short-term profit interests and act without properly including democratic institutions in their decision-making loop. We must take immediate and decisive action to secure the technological independence of our society, nurturing innovation while ensuring the safety of these developments and protecting the democratic principles that form the foundation of our way of life.

The recent proposition of decelerating AI research as a means to ensure safety and progress is a misguided approach that could prove detrimental to both objectives. It could create a breeding ground for obscure and potentially malicious corporate or state actors to make advances in the dark, while curtailing the public research community's ability to thoroughly scrutinize the safety of advanced AI systems. Rather than impeding the momentum of AI development and pushing it underground, a more judicious and effective approach is to foster a better-organized, transparent, safety-aware, and collaborative research environment. Establishing transparent, open-source AI safety labs tied to the international large-scale research facility described above, staffed by qualified AI safety experts, provided with dedicated publicly funded compute resources, and operating under regulations issued by democratic institutions, would address safety without dampening progress. By embracing this cooperative framework, we can ensure both progress and the responsible development of AI technology, safeguarding the well-being of our society and the integrity of democratic values.

We urge you to join us in this crucial campaign. Sign this petition and make your voice heard. Our collective digital future, the autonomy of our academic research, and the equilibrium of our global economy depend on our ability to act quickly and decisively.

Together, we can build a future where advanced AI technologies are accessible to all, and where innovation and progress are not constrained by the boundaries of a few powerful corporations. Let us seize this opportunity and build a brighter future for generations to come.

Thank you for your support.

LAION e.V., Hamburg

News

As we progress towards a possibly brighter future with AI, one thing truly stands in our way: AI, with its unknown reaches, falling into the hands of large powers of ill intent. We have been dealing with this throughout history, but now the game board is different and the full capabilities of these tools are quite literally beyond grasp. Even if things go well, the biggest danger is these powers of ill intent using such super-tools to inflict unspeakable horrors on people.

Continuing to advance AI capabilities is catastrophic primarily not because of the biases the models might have or the jobs people might lose, but because, as half of ML researchers believe, there is at least a 10% chance of an AI-induced existential catastrophe. We have to ensure that the goals of the first highly capable AI align with human values.
