
The Regulatory Challenge of AI: Why science diplomacy is a good idea
Since 2025, the OSCE has had a Special Representative on Artificial Intelligence (AI) to address the challenges AI poses to security, human rights and governance in the OSCE region, and to promote parliamentary dialogue on AI governance, ethics and regulation. In its chairpersonship of the OSCE in 2026, Switzerland has put a strong focus on shifting from reactive to anticipatory diplomacy, reducing the risks and impact of technologies on crises and conflicts. A two-day conference in Geneva, starting on May 7, 2026, on “Anticipating technologies – for a safe and humane future” will focus on ensuring that innovation remains human-centered and consistent with OSCE values and principles. The overall frame is that of science diplomacy: using scientific cooperation to address the challenge of AI.
Regulatory challenge
It is generally perceived that AI can be beneficial, which is an argument for development and innovation. At the same time, AI poses risks that can cause harm. Principles like the rights to safety, health and labour, which stem from the Universal Declaration of Human Rights and are incorporated in many constitutions, are perceived to be under pressure. Both innovation and risk response are motivators for regulation. Calls for regulation concern, for example, self-regulation by AI developers, liability, and even non-proliferation regimes similar to those applied to nuclear technology. In the OSCE region three views on digital sovereignty and regulation exist: rights-based (prioritizing fundamental rights and the protection of individuals, as seen in the EU), market-oriented (favouring free markets and minimal state regulation, exemplified by the US), and centralized (emphasizing digital self-sufficiency and state control, characteristic of Russia). Seen this way, the aims of regulation relate to innovation and risk response within a space defined by a general regulatory approach.
Controversies
Besides sovereignty, views on development policy, views on the scale of risks, and commercial interests influence positions on regulation. These views may be highly contradictory.
Regarding development policy, arguments for open source (transparency about source code, design choices and training methods) or closed source rest on considerations such as speed of innovation and credibility of the model (favouring open source), commercial protection of investments (favouring closed source, although open source may sometimes weaken the dominance of closed-source models), controllability (favouring open source), and security (favouring open source, although advocates of closed source counter that an open tool, once released, can no longer be controlled or recalled and enables misuse).
Regarding innovation, one view holds that any kind of regulation will hamper innovation. Others argue that regulation can foster it: rules on regulatory sandboxes, controlled spaces for AI development and testing that minimize risks such as the generation of child sexual abuse material (CSAM), may promote innovation because they protect developers from liability risks.
Regarding so-called systemic risks, companies make dramatic announcements about the capabilities of AI, for example its impact on employment. CEOs of companies like OpenAI, Anthropic and Meta predict superhuman intelligence that will make much, if not all, human labour obsolete. By contrast, Arthur Mensch, CEO of the leading French AI company Mistral, dismisses the combination of dystopian scenarios about rampant development and the accompanying calls for regulation as panic-sowing and manipulation: a trick and a facade by large developers to protect their market monopoly with closed-source systems. After all, only tech giants would be able to comply with complex regulation.
Political and ideological viewpoints
Finally, political agendas and ideological positions drive regulation. Because many AI tools are developed in and disseminated from the US, the American position on regulation has a large impact outside the US as well. American political and ideological considerations in particular pose a challenge for other actors when American AI is freely available. For example, the US government and tech companies generally favour minimal regulation, on the grounds that anything more would hamper commerce and innovation. The views of key stakeholders are outlined in greater detail below.
United States
A combined reading of the United States National Security Strategy (NSS), the Pax Silica initiative, the executive order on AI of July 2025 and the Cyber strategy published in 2026 reveals that the US considers AI an instrument for gaining technological, economic and military dominance. Regulation will have to serve this agenda, in particular by clearing the way for innovation. As Vice-President Vance put it: “The AI future is not going to be won by hand-wringing about safety. It will be won by building.” Even legislation on regulatory sandboxes is absent. Ideological motivation strengthens resistance to regulation. The government proclaims that references to diversity, equality, climate change and disinformation should be removed from AI. According to President Trump: “the American people do not want woke Marxist lunacy in AI-models, and neither do other countries.” Disinformation regulation as implemented by the EU hampers attempts to influence elections in favour of far-right parties and is heavily attacked by both tech companies and the government.
Some US states have issued legislation addressing direct harm, which is under attack by the Trump administration, which advocates federal legislation instead. Critics argue, though, that this vague and unspecified call for federal legislation by big tech and the government is a cover for doing nothing at all. Dario Amodei, CEO of leading AI company Anthropic, is a rare advocate of stronger regulation. He has, for example, called for self-regulation in the form of training requirements based on a ‘constitution’ of rules and values that AI should learn to respect in order to prevent harm. He also names two red lines: the development of mass surveillance and of biological weapons. Government regulation is part of his proposal, of which the New York RAISE Act is an example: it prescribes risk-management measures for large AI companies and penalties for incidents that cause harm. Unlike the European AI Act, it does not prohibit specified activities.
The government recognizes that military applications of AI build on the outputs of commercial AI innovation coming out of America’s private sector. It aims to break Anthropic because of its position on regulation and its recent opposition to unrestricted use of its AI by the Pentagon. This way of ’killing the chicken to scare the monkeys’ signals to the AI community that, in support of its strategic agenda, the government will enforce some form of state control over AI companies.
European Union
For its part, the EU prioritizes protection of the rights of individuals. EU regulation applies to both open and closed source models to manage similar risks, but has indeed rewarded the argument that open source may deserve a lighter form of regulation than closed source. The AI Act contains provisions to facilitate innovation (arts. 57-59 on regulatory sandboxes). The AI Act prohibits manipulative or deceptive AI, exploitation of vulnerabilities based on age, disability or social or economic situation, social scoring, and the spreading of hate and child abuse material. It also requires mitigation of risks related to, for example, election manipulation, employment, and healthcare. In combination with the Digital Services Act it also applies to AI embedded in social media. An amendment of this legislation via the so-called Omnibus proposal, intended to loosen restrictive regulation and thus promote innovation, is criticized for giving in on the protection of the rights of individuals.
Russian Federation
The Russian Federation, which considers AI leadership an instrument of power, seems to be moving from a pragmatic approach of sharing technology to ‘join in’ towards a closed system in which AI is positioned in a context of preserving sovereignty and should be based on Russian culture and traditional values. Legislation is under development. Priority is given to state control to protect national security rather than to protection of the individual: control by the FSB, russification of data centres, of training models and of data for certain critical sectors, and restrictions on foreign AI. A closed-system approach that keeps out influences from abroad may, however, hamper innovation. There is as yet no specific regulation protecting individuals against harm, although companies that develop AI must carry civil liability insurance covering potential harm to life and property caused by AI systems.
Supranational regulation
The military application of AI in autonomous weapons systems has been addressed within the framework of the United Nations Convention on Certain Conventional Weapons (CCW) since 2013, driven by the concern that human accountability is lost when algorithms take over warfare. Traditionally, Russia is perceived to obstruct the process of regulation (and restrictions) for strategic purposes; more recently the US has moved in a similar direction for similar reasons. European countries participate individually, not the EU as an entity.
Scientific cooperation
Views on digital sovereignty, the potential impact of AI, commercial interests of AI companies, political agendas treating AI as an instrument of power, ideological positions and human rights concerns all shape regulatory approaches to AI. While the impact of AI may be considerable and is transnational, a shared factual picture of that impact is largely lacking, even among experts.
Scientific cooperation can help create an objective reference on the risks of AI and on claims about a balanced approach to regulation in the light of innovation, safety and human rights. Firstly, it can resolve controversies: whether closed or open source better serves safety, whether AI can become autonomous and take over and how to handle that, whether dystopian capabilities of AI will soon emerge, and which types of regulation facilitate both innovation and the protection of the rights of individuals, all in the light of ‘constitutional’ arguments. As the American AI expert Fei-Fei Li has said: we need science, not science fiction. Secondly, it may soften the effect of ideological positions. A very recent example is coverage of a possible U-turn by the American government, prompted by the introduction of Anthropic’s model Mythos and the related risk of large-scale cyberattacks. The establishment of US government oversight in the interest of public safety would be a radical policy change on the regulation of AI.
Marcel van Kooten holds a degree in International Law and works as an information strategy consultant.

