European lawmakers, Nobel laureates, former heads of state, and AI experts have called for binding international rules on AI.
They launched the initiative on Monday during the UN’s 80th General Assembly in New York.
The campaign urges governments to agree by 2026 on “red lines” banning the most dangerous AI uses.
Signatories include Enrico Letta, Mary Robinson, Brando Benifei, Sergey Lagodinsky, ten Nobel laureates, and tech leaders from OpenAI and Google.
They warned that unchecked AI could cause engineered pandemics, mass disinformation, human rights abuses, and loss of human control.
Over 200 prominent figures and 70 organisations from politics, science, human rights, and industry have backed the initiative.
AI Threats and Mental Health Concerns
Recent studies found chatbots like ChatGPT, Claude, and Gemini gave inconsistent or unsafe responses to suicide-related questions.
Researchers warned that these gaps could worsen mental health crises; some deaths have already been linked to AI conversations.
Supporters argue these risks highlight the urgent need for clear, enforceable rules.
Maria Ressa warned AI could trigger “epistemic chaos” and systematic human rights violations.
Yoshua Bengio stressed that the development of ever more powerful AI models is outpacing society’s ability to manage the risks.
Toward a Binding Global Treaty
Signatories called for an independent body to enforce AI rules internationally.
Proposed red lines include barring AI systems from launching nuclear attacks, conducting mass surveillance, or impersonating humans.
They criticized fragmented national and EU AI regulations as insufficient for a technology that crosses borders.
Backers hope governments will open treaty negotiations by the end of 2026 so the rules can be enforced worldwide.
Ahmet Üzümcü, former Director-General of the Organisation for the Prohibition of Chemical Weapons, warned that failing to act could inflict “irreversible damages to humanity.”