Gérald Santucci ; Geneviève Fieux-Castagnet
January 2025
“Hastily we swipe
Up down left and right,
Virtual voyeurs in a collective gripe.
Numbing as we scroll
Mindless
Down the rabbit hole…
Forgetting, the deeper we fall,
This desperate interconnectedness
That binds us all.”
Lara Srivastava and Rob van Kranenburg,
Empathy, The Little Coffee Book of Robot
Design, 2024 (to be published)
We live in a time of unparalleled global turmoil. From the impacts of climate change, biodiversity loss and ecosystem collapse, to pandemics, involuntary migration, technological acceleration, cyber-attacks, geopolitical conflicts, societal polarization, and the spread of artificial intelligence (AI) misinformation and disinformation, today’s leaders in government, industry and civil society are confronted with entirely new categories of challenges. In this context, the rapidly evolving agenda of issues that touches upon various aspects of AI development and deployment requires the close attention of all people who are concerned with its safety1 and governance.
In the mid-2000s, three disruptive elements converged to create the AI boom and, consequently, its ubiquity and its inherent risks: algorithms known as convolutional neural networks (CNNs) met the power of modern graphics processing units (GPUs) and the availability of big data. Viewed from a distance, Europe is relatively strong in algorithms, the US in chips and software, and China in big data. These complementary strengths are currently a reason for countries to compete fiercely; we believe they should instead be a reason to stimulate coopetition.
Boston Consulting Group (BCG) evaluated the readiness of different countries to effectively integrate AI, as well as the exposure of different sectors within them to it. The study highlights that most countries are not ready to unlock the full value of AI or to manage the disruption it causes. With only a few “pioneers” and a relatively small number of contenders, there is a risk of global AI polarization – a risk confirmed by the proliferation of international, multilateral, regional, national and local initiatives aiming to govern and regulate AI.
Out of 73 economies assessed by BCG, only five – Canada, Mainland China, Singapore, the UK, and the United States (US) – are categorized as “AI pioneers”. They have reached a high level of readiness by blending elements like investment and infrastructure, turning disruption into a lasting competitive edge. In principle, they would be in a unique position to guide the world forward in innovation, talent development and AI regulation and ethics.
The US-China race for technological advantage
The two biggest “AI pioneers”, the US and China, appear to be rushing into an escalating cycle of confrontation that sounds like an alarm for the whole world and for humanity. The US and China have a long history of economic rivalry, but since 2017 the Trump and Biden administrations’ challenges to China’s business activities, and China’s subsequent retaliatory tariffs and export bans, have triggered a full-blown economic and trade war.
In December 2024, the US Commerce Department expanded the list of Chinese technology companies subject to export controls to include many that make equipment, chipmaking tools and software used for computer chips. The so-called “entity list” now includes 140 companies – for example the Beijing Institute of Technology, the Beijing Computational Science Research Center, and the Chinese chip firms Piotech and SiCarrier – most of them based in mainland China, though some are Chinese-owned businesses in Japan, South Korea and Singapore. (Inclusion on the “entity list” means that export licenses are denied to any US company trying to do business with the targeted Chinese companies.) In response, China has decided to strengthen export controls on critical dual-use items destined for the US, such as gallium, which is used in semiconductors (China accounts for 94% of the world’s gallium production), and germanium, which is used in infrared technology, fiber optic cables and solar cells (China accounts for 83% of the world’s germanium production). As a result of these moves, the Chinese and US economies are in danger of decoupling, which poses a grave danger to the whole world economy – these back-and-forth curbs could create supply chain disruptions as well as inflationary pressures through their consequences for global trade.
President-elect Trump’s nomination of podcaster and former PayPal chief operating officer David Sacks – incidentally a close friend of Elon Musk – as his White House AI and crypto “czar” is probably not good news for those who believe that legal guardrails should be established and enforced, both nationally and globally, to ensure AI safety and regulation.
The US-Chinese rivalry could prove catastrophic at a moment when people around the globe are reacting with both fascination and fear to the rapid and largely unpredictable deployment of AI. Over the last few years the discourse has been polarized between the “AI accelerationists”2 and the “AI doomers”3. Even if reality is probably more nuanced, the bitterness and brutality of the rivalry between the two biggest “AI pioneers” carries the risk of a headlong rush that unleashes innovation in all directions, without limits, while ignoring the need for ethical use and relevant governance of AI.
The innovation and trade wars that are looming ahead, and which will inevitably spiral out of control and engulf all the major economies of the world, in particular the European Union (EU), undermine the efforts of international and multilateral organizations to address AI safety and to find a fair balance between innovation and ethics.
OpenAI vs. xAI: A ding-dong battle
Another aggravating factor is the intra-US war looming in the domain of generative artificial intelligence between Sam Altman’s OpenAI, which started the AI boom in late 2022 with the release of its online chatbot ChatGPT, and Elon Musk’s xAI, maker of a series of large language models (LLMs) called Grok. OpenAI, valued at $157 billion in October 2024, has about 1,700 employees and expects $3.7 billion in sales in 2024; it is financially supported by big organizations such as Microsoft, the chipmaker Nvidia, the tech conglomerate SoftBank, and the United Arab Emirates investment firm MGX. xAI, which is only two years old and benefits from familial ties with the social media platform X and the car company Tesla, is reported to be worth $50 billion. xAI recently brought online a new supercomputer dubbed “Colossus”, designed to train its Grok models. This new data center houses 100,000 of Nvidia’s benchmark Hopper H100 processors, more than any other individual AI compute cluster. Elon Musk estimates that this strategic lighthouse project could eventually earn Tesla $1 trillion in profits annually!
The fact remains, however, that the harsh, “lawfare” (rather than fair) competition between OpenAI and xAI is likely to generate a race for ever more AI innovation without much regard for the issues of safety and governance. It is not trivial that the cutthroat competition between Altman and Musk takes place less than two years after an open letter signed by hundreds of the biggest names in the tech industry, including Elon Musk, urged the world’s leading AI labs to pause the training of new super-powerful systems for six months due to “profound risks to society and humanity”. Today, even while raising billions for xAI, Elon Musk puts the probability “that AI goes bad” at “10% to 20%”. In other words, we can expect further extraordinary leaps in AI technology, with OpenAI, xAI and other startups developing their own faster, smarter products with applications in various fields:
- transportation (e.g., autonomous vehicles, “robotaxis”),
- manufacturing (e.g., industrial robots),
- healthcare (e.g., early detection of diseases, personalized treatment),
- education (e.g., personalized learning, automation of admissions or grading assignments),
- law (e.g., document analysis, predictive analytics, e-discovery to support evidence gathering),
- media (e.g., content creation and curation, instant customer service enhancing audience engagement, fight against fake news),
- customer service (e.g., exceptional, round-the-clock, personalized customer support).
It would be naïve to believe that these developments will enhance humans’ conversations with technology if we ignore the way the technology itself might evolve.
Regulation! Regulation! Regulation! And so what?
Regulatory efforts are under way in many countries and regions of the world, notably the EU, China, the US, Brazil, Canada, India, Israel, Japan, Saudi Arabia, South Korea, the UAE, and the UK. In addition, global initiatives have been launched in the UN, the OECD, the G7, the G20, the AI safety summits (e.g., the Bletchley Declaration of 1-2 November 2023, signed by 28 countries including China and the US), and other frameworks aiming for international alignment.
On 10 and 11 February 2025, France will host the Artificial Intelligence (AI) Action Summit, bringing together Heads of State and Government, leaders of international organizations, CEOs of small and large companies, representatives of academia, non-governmental organizations, artists and members of civil society. After the Bletchley Declaration, this could be another opportunity “to strive, to seek, to find, and not to yield” (to quote Alfred, Lord Tennyson), i.e. to come to terms with cultural differences among countries and move beyond them in order to understand each other better without surrendering commonly acknowledged human values.
Also of particular relevance is the Council of Europe Framework Convention on Artificial Intelligence and human rights, democracy and the rule of law, opened for signature on 5 September 2024, already signed in particular by the US and the EU, which aims “to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation.”
The 2023 open letter and the many initiatives taken by governments at all levels to regulate AI technology in some way, and by companies to propose proper codes of conduct, could be read optimistically as a signal that there exists across the globe sufficient awareness of the broad array of risks inherent in AI and LLMs, a true cooperative spirit to mitigate these risks, and hence a genuine commitment to exchanging information, sharing knowledge, and deploying the right governance strategies.
The EU Artificial Intelligence Act indeed has unquestionably disruptive value as the world’s first legislative effort to regulate AI. When creating the legislative framework, without a precedent to refer to, the European Commission chose the product safety framework, making it part of the so-called harmonization legislation regulating products circulating in the single market. The text has evolved much since, making the final AI Act a rather hybrid form of legislation at the intersection of technical product safety legislation and legislation intended to protect fundamental rights.
The EU AI Act has established a complex governance model spanning supranational (EU) and national (Member State) actors. Though the model is complex, smooth collaboration between EU and national actors, with AI sector actors also in the loop (over 100 organizations have signed the European Commission’s AI Pact, i.e. voluntary commitments that pave the way for compliance with the AI Act), can be expected to deliver an effective implementation of the new legislation.
However, questions remain about the actual global impact of the EU regulatory approach (the “Brussels effect”4) and its articulation with the other international and multilateral regulatory initiatives. Geopolitics (i.e. the global race to maintain a technological advantage) and industry competition (i.e. the race to dominate the market by displaying a superior level of intelligence) will produce a sort of centrifugal force that brakes or curtails the impact of initiatives aimed at improving AI safety and governance.
Moreover, two years after the AI enthusiasm generated by the release of ChatGPT, the mid- to long-term risks of AI have slipped down policymakers’ agendas, and the focus is now on immediate concerns – all important, but not “existential” – such as labor force disruption, “hallucinations” (false or misleading information), and surveillance.
“Technology for Good” should be the moral compass of humanity.
But let’s be clear: in the current circumstances, even with the existing regulatory regimes, the chances of this happening are dim.
The importance of context
We have already mentioned geopolitics, competition and political agendas. Yet even if these realities were better acknowledged, understood and addressed, ensuring AI safety would remain enormously difficult, because the ethics, governance and regulation of AI are profoundly influenced by the cultural, political and economic context of each country and region of the world.
It is therefore essential to adopt a contextual approach, developing ethical principles and regulatory frameworks that meet the specific needs of each society. In Europe, where the EU has been proactive in regulating AI, there is a strong emphasis on human rights. It is important to read and remember the first “Whereas” of the EU AI Act:
“The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.”
In North America, though discussions are emerging about the need for strict regulation of AI, the approach is more liberal, putting innovation and entrepreneurship first and foremost. The emphasis is less on human rights and more on individual freedom and personal responsibility. In December 2024, US President-elect Donald Trump announced that Federal Trade Commission (FTC) Commissioner Andrew Ferguson would chair the agency. The nomination signals an expected reversal of most of the FTC policies and initiatives established in recent years under current Chair Lina Khan. It is worth noting that President-elect Trump has vowed to roll back Joe Biden’s executive order on AI in favor of innovation. “At the FTC, we will end Big Tech’s vendetta against competition and free speech,” Ferguson said. “We will make sure that America is the world’s technological leader and the best place for innovators to bring new ideas to life.” The proposed focus on innovation will likely lead to deregulation and a narrowed enforcement approach.
In China, the approach to AI regulation is centralized and authoritarian, emphasizing oversight and control. In other Asian countries, like Japan and India, the approach is more collaborative, with a strong value placed on technology and economic progress. Ethical concerns may be less of a priority where economic development comes first, but there is a growing awareness of the implications of AI for society.
In Latin America, where human relations and community values tend to prevail, AI regulation is still under development, with initiatives aimed at protecting citizens’ rights and at ensuring ethical use. Discussions on ethics often focus on social justice, digital inclusion and personal data protection.
In Africa, there is a wide cultural and linguistic diversity, with a strong emphasis on community values in many countries. AI regulation is still developing, with initiatives varying from country to country. Often, countries face challenges of infrastructure and capacity to implement regulations. Ethical concerns include fair access to technologies, data protection and the impact of AI on employment and economic development.
Besides the cultural differences, the wealth factor also matters: rich countries worry about AI displacing human workers (“robots are taking over our jobs”), while countries with lean economies face shortages of skilled experts. The cultural and economic differences between regions of the world are such that a common global approach looks like an unattainable goal. Yet it should be emphasized that the vision of bridging cultures to reach global harmony in AI development and deployment remains a necessary quest for the “inaccessible star” (to paraphrase Belgian singer Jacques Brel).
The first objective: Building Trust by agreeing on the definitions of Ethics and Values
Leaving politics aside, there is still hope that a compromise could eventually be struck between the AI “accelerationists” and the AI “doomers”. For this to happen, the global scientific community and civil society should be mobilized around a common goal – the wellbeing of humanity. This is urgent if we refuse to overlook Prof. Geoffrey Hinton’s prediction that there is a 10% to 20% chance that AI will lead to human extinction within the next three decades. (Interestingly, this echoes Elon Musk’s recent forecast, cited above.)
On June 10, 1963, delivering the commencement address at American University in Washington, D.C., in the context of the Cold War and the limited nuclear-test-ban treaty, John F. Kennedy spoke words that sound prophetic in today’s context:
“So, let us not be blind to our differences – but let us also direct attention to our common interests and to the means by which those differences can be resolved. And if we cannot end now our differences, at least we can help make the world safe for diversity. For, in the final analysis, our most basic common link is that we all inhabit this small planet. We all breathe the same air. We all cherish our children’s future. And we are all mortal.”
The worst is never certain, yet in the end it occurs frequently. It is therefore urgent not to give up, but to look beyond the existing forces that cultivate historical pessimism and fatalism, and to enable people on Earth to keep their demands alive.
In particular, scientists and internationally recognized experts from civil society should hold conversations, irrespective of their cultural and geographical contexts, in order to arrive at a series of “red lines” limiting what AI systems should be allowed to do, considering what is politically feasible, ethically legitimate and socially acceptable.
The priority here, we believe, is to agree on some key definitions: What is Ethics? What are the Values that people across the planet would include in it? We will come back to these questions in a separate paper in the context of France’s upcoming AI Action Summit.
1. We use here the word “safety” as a handy shortcut to cover the risks and complexities of AI – data protection, privacy, security, intellectual property rights, bias, information accuracy, energy consumption, and potential threats to humanity.
2. Marc Andreessen: “The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.”
3. Yuval Harari: “AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.”
4. Like the EU General Data Protection Regulation, the AI Act has a wide territorial reach, impacting operators within and outside the EU. It provides for significant sanctions, including high financial penalties and a strong regulatory enforcement framework. Some experts believe that in the near future substantial parts of the AI Act will become the “gold standard” for global AI regulation, making an early understanding of its requirements critical for organizations everywhere (hence the importance of the AI Pact).
