by Gérald Santucci
“You were given the choice between war and dishonor. You chose dishonor, and you will have war.”
Winston Churchill (to Neville Chamberlain directly after Munich)
Now that the Paris Artificial Intelligence (AI) Action Summit, held on 10-11 February 2025 at the Grand Palais in Paris, is behind us, the time has come to draw a first assessment. Here follow our key takeaways.
No compass
First, we should regret the lack of will from all major countries to acknowledge the legacy of the previous similar events. The AI Action Summit, co-chaired by France and India and whose declaration was signed by 61 countries, including China, was the third AI summit organized so far. It followed the AI Safety Summit (1-2 November 2023), hosted by the United Kingdom at Bletchley Park (the birthplace of the digital, programmable computer), whose declaration was signed by 28 major countries and the European Union, and the AI Seoul Summit (21-22 May 2024), co-hosted by the Republic of Korea and the United Kingdom, whose declaration was signed by 10 countries and the European Union. The prevailing impression is that each of these summits, which are basically international convenings of senior government officials, tech executives, civil society, and researchers to discuss the safety and policy implications of the world's most advanced AI models, has amounted to a sort of political posturing without, in the end, a genuine pledge to achieve common goals for humanity.
Global vs. International
Second, the US and the UK refused to sign the Paris AI Action Summit Declaration, in a blow to hopes for a concerted global approach to developing and regulating the technology and its uses. The UK said it "hadn't been able to agree all parts of the leaders' declaration" and would "only ever sign up to initiatives that are in UK national interests". Let us recall that the UK had previously been a champion of the idea of AI safety, with then prime minister Rishi Sunak hosting the world's first AI Safety Summit only 16 months earlier. Apparently, the new UK Government cares little about the risk of undercutting its hard-won credibility as a world leader in safe, ethical and trustworthy AI innovation. Although it rejected the suggestion that Britain was trying to curry favor with the US, the idea is floating in most chancelleries that the UK considers it has little strategic room but to follow the US, and hence to avoid an overly restrictive approach to the development of the technology, in order to keep US AI firms committed to engaging with UK regulators. Not surprisingly, US Vice President JD Vance told delegates in Paris that too much regulation of AI could "kill a transformative industry just as it's taking off" and that AI was "an opportunity that the Trump administration will not squander". He added that "pro-growth AI policies" should be prioritized over safety and that leaders in Europe should especially "look to this new frontier with optimism, rather than trepidation." In other words, Vice President Vance reiterated the Trump administration's commitment to keeping AI development in the United States "free from ideological bias" and ensuring that American citizens' right to free speech remains "protected".
In this context, it was clear that phrases in the Elysée communiqué such as "sustainable and inclusive AI" would be unacceptable to the Trump administration.
The Paris AI Action Summit has become a key battleground for international AI governance, exposing sharp divides in global AI strategy: the US, under President Donald Trump, promotes a hands-off, pro-innovation policy; Europe is pushing for stricter AI regulations while boosting public investment in the sector; China is rapidly expanding its AI capabilities through state-backed tech giants, seeking global leadership in AI standard-setting.
The Volatility of AI Policy Nouns and Verbs
Third, it’s interesting to pay attention to the nouns and verbs that were heralded at each of the three AI Summits so far. The Bletchley Declaration put the cursor on “safety”: “We affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” It mentioned the “significant risks” posed by AI. The AI Seoul Summit reinforced international commitment to “safe” AI development and added “innovation” and “inclusivity” to the agenda of the AI summit series. Some criticism was voiced that the addition of topics other than AI safety was leading to a dilution of what had made the UK AI Safety Summit unique among a crowded landscape of international AI diplomatic initiatives – attendance was indeed down and news coverage a bit lower. However, if the UK AI Safety Summit’s achievement was establishing the idea of an AI Safety Institute, the Seoul AI Summit marked the moment that the idea reached significant international scale as a cooperative effort.
The Paris AI Action Summit has "identified priorities and launched concrete actions to advance the public interest, to close all digital divides, to accelerate progress towards the Sustainable Development Goals (SDG) and to protect human rights, fundamental freedoms and the environment and to promote social justice by ensuring equitable access to the benefits of AI for all people." Signatories pledged "to foster inclusive AI as a critical driver of inclusive growth. Corporate action addressing AI's workplace impact must align governance, social dialogue, innovation, trust, fairness, and public interest…" Companies indeed committed to the following objectives: promoting social dialogue; investing in human capital; ensuring occupational safety, health, autonomy, and dignity; ensuring non-discrimination in the labor market; protecting worker privacy; and promoting productivity and inclusiveness across companies and value chains. French President Macron did not hesitate to tell investors and tech companies attending the summit "to choose Europe and France for AI", adding that the European AI strategy would be "a unique opportunity for Europe to accelerate in the technology."
For her part, European Commission President Ursula von der Leyen said in a speech that she wants "Europe to be one of the leading AI continents. And this means embracing a way of life where AI is everywhere. AI can help us boost our competitiveness, protect our security, shore up public health, and make access to knowledge and information more democratic." She welcomed the European AI Champions Initiative, which pledges EUR 150 billion from providers, investors and industry, and announced that the European Commission, with its InvestAI initiative, "can top up by EUR 50 billion", bringing the total to EUR 200 billion for AI investments in Europe – "it will be the largest public-private partnership in the world for the development of trustworthy AI."
Science vs. Geopolitics
When "big data" appeared some 20 years ago with its "Vs" (validity, value, variability, variety, velocity, veracity, versatility, visibility, volatility, volume and vulnerability), it was expected that decision-making would be based on scientific evidence. Thanks to the Internet of Things and Artificial Intelligence, big-data-based processes were expected to lend greater persuasive power and significance to knowledge and decisions.
This dream seems to have been destroyed.
Concerns over AI's fast-paced evolution, and its potential risks, loomed over the Paris summit, particularly as nations wrestle with how to regulate a technology increasingly entwined with defense, cybersecurity, and economic competition. Even the European Commission has begun to soften its tone towards regulation: "AI needs the confidence of people and has to be safe … Safety is in the interest of business (but) we have to make it easier, we have to cut red tape." The last phrase, though quite understandable, might be perceived by other countries as an admission of weakness in the face of the US power play and its (less and less) hidden threats.
On 23 January 2025, President Donald Trump signed an Executive Order eliminating "harmful Biden Administration AI policies … that hinder AI innovation and impose onerous and unnecessary government control over the development of AI" and "enhancing America's global AI dominance." At the same time, he announced the Stargate Initiative, a $500 billion private sector deal, spearheaded by tech giants OpenAI, SoftBank, and Oracle, to expand US artificial intelligence infrastructure. Stargate is said to represent the largest AI infrastructure project in history. But the longstanding feud and personal animosity between Elon Musk (the CEO of Tesla, SpaceX and xAI, the richest person in the world, and a Trump ally) and Sam Altman (the CEO of OpenAI) played out before the world's media. (Altman and Musk co-founded OpenAI but later split over its direction. While Musk argues that OpenAI has strayed from its nonprofit roots, Altman insists the company must evolve to secure the funding required for AI advancements.) In February 2025, the ChatGPT-maker's CEO dismissed a $97.4 billion bid from a Musk-led consortium.
What we have seen since the beginning of 2025 is the largest US tech companies swearing allegiance to the President of the United States, and their leaders – a restricted group of billionaires and oligarchs that control the world's largest digital platforms, social networks and traditional media – carrying a new political ideology in the form of "a technosolutionism that privileges technological solutions, including whimsical ones, and even sometimes a political reorganization, without necessarily listening to science."[1] On 11 February 2025, Elon Musk provocatively paraded his 4-year-old son Lil X around the Oval Office as President Trump signed an Executive Order requiring federal agencies to cooperate with the Musk-led Department of Government Efficiency (DOGE) and its effort to slash costs. (Compared to this, the iconic photo of President Kennedy seated at his desk in the Oval Office while John, Jr., peers through a 'secret door' at the front of the desk is just a good laugh.)
At the Paris AI Action Summit, in response to President Trump saying in his inauguration address that the US will “drill, baby, drill” for oil and gas under his leadership, President Macron replied that in Europe “it’s plug, baby, plug. Electricity is available.”
A bit earlier, the UK Prime Minister had said the AI industry "needs a government that is on their side, one that won't sit back and let opportunities slip through its fingers … In a world of fierce competition, we cannot stand by. We must move fast and take action to win the global race … Our plan will make Britain the world leader."
So, it becomes sadly obvious that showbiz politics is today dominating science-based decision making, national interests are dominating international cooperation, corporate interests are dominating the search for the common good, fierce innovation is dominating ethics, and so forth.
The Paris AI Action Summit could have been the defining moment when a fair balance was struck between AI innovation and AI governance and ethics. At the IoT Council we dearly believe that the combination of IoT and AI – AIoT – offers tremendous opportunities for sustainable economic growth, prosperity, human health and wellbeing, but also for the good of all living species, nature and biodiversity. It would be irresponsible to turn our backs on science and cooperation, to get excessively entangled in the dangerous realm of unleashed AI, and thus to let these opportunities be lost while all sorts of AI risks (malicious use risks, risks from malfunctions, and systemic risks) enjoy full 'freedom of expression'.
With respect to AI, 2025 could be the year of all dangers.
[1] Source: “Intelligence artificielle, innovation, recherche… la science dans l’étau des tensions géopolitiques”, in Le Monde, 10 February 2025.
