European lawmakers have joined Nobel Prize winners, former heads of state and leading AI researchers in calling for binding international rules to curb the most dangerous applications of artificial intelligence.
The initiative, launched on Monday at the United Nations’ 80th General Assembly in New York, urges governments to agree by 2026 on a set of “red lines”: uses of AI considered too harmful to be permitted under any circumstances.
Among the signatories are former Italian prime minister Enrico Letta, former President of Ireland Mary Robinson, who also served as United Nations High Commissioner for Human Rights, and two Members of the European Parliament: Brando Benifei, the Italian socialist who co-chairs the European Parliament's AI working group, and German Green Sergey Lagodinsky. They are joined by ten Nobel laureates and tech leaders, including a co-founder of OpenAI and Google’s director of engineering.
Signatories argue that without global standards, humanity risks facing AI-driven threats ranging from engineered pandemics and disinformation campaigns to large-scale human rights abuses and the loss of human control over advanced systems.
The campaign’s breadth is unprecedented, with more than 200 prominent figures and 70 organisations from politics, science, human rights and industry backing the call. Tech leaders from OpenAI, Google DeepMind and Anthropic have also lent their names to the appeal.
AI and mental health risks
The move comes amid rising concern over the real-world impact of AI systems already in use. A recent study published in Psychiatric Services found that leading chatbots, including ChatGPT, Claude and Google’s Gemini, gave inconsistent responses to questions about suicide – sometimes refusing to engage, sometimes offering appropriate guidance, and occasionally producing answers that experts judged unsafe.
The researchers warned that such gaps could exacerbate mental health crises. Several deaths by suicide have been linked to conversations with AI systems, raising questions over how companies safeguard users from harm.
A cross-border effort
Supporters of the UN initiative say these examples illustrate why clearer limits are needed. Nobel Peace Prize laureate Maria Ressa warned that without safeguards, AI could fuel “epistemic chaos” and enable systematic abuses of human rights. Yoshua Bengio, one of the “godfathers” of AI, stressed that the race to develop ever more powerful models poses risks societies are ill-prepared to handle.
The signatories point out that global “red lines” have precedents, from international treaties banning biological and nuclear weapons to the prohibition of human cloning and the High Seas Treaty signed earlier this year. They welcome the EU’s AI legislation but warn that a fragmented patchwork of national and EU rules will not be enough to regulate a technology that crosses borders by design.
They call for the creation of an independent body to oversee the implementation of those rules.
Backers hope negotiations on binding prohibitions can begin quickly, to prevent what Ahmet Üzümcü, former director general of the Organization for the Prohibition of Chemical Weapons, described as “irreversible damages to humanity”.
While the campaign does not advocate specific “red lines”, it suggests some basic prohibitions: preventing AI systems from launching nuclear attacks, conducting mass surveillance or impersonating humans.
While countries including the US, China and EU members are drafting their own AI regulations, the signatories argue that only a global agreement can ensure common standards are applied and enforced.
They hope that a UN General Assembly resolution can be initiated and negotiations on a worldwide treaty can begin by the end of 2026.