02 April 2024

ARTIFICIAL INTELLIGENCE: regulate or innovate, should we really choose?

We often hear the adage that "the USA innovates, China copies and Europe regulates". This simplistic view, all too often repeated, would have us believe that the USA and China produce little or no regulation, or that Europe merely legislates without creating or innovating. Yet the cliché reflects a widespread feeling that innovation and regulation are antagonistic, even irreconcilable. The reality is obviously more complex, as the specific case of artificial intelligence (AI) demonstrates.


AI at the heart of all human activities

Questions surrounding the precise definition of AI, from the coining of the term at the 1956 Dartmouth Conference to the dualism between symbolic and connectionist approaches [1], are numerous, complex and sometimes a little futile. Today, AI is more than just a scientific concept: it is an object that raises technological, political, societal, geopolitical, geographical, legal, anthropological and ethical issues.

Over the past decade or so, AI technologies have been making inroads into every sector of activity. Public safety (augmented video-surveillance systems that analyze human behavior), healthcare (diagnostic assistance), education (learning analytics that enable personalized learning paths) and even the fight against tax fraud (for example, the automated detection of undeclared swimming pools in satellite images) all rely on AI systems.

The year 2023 was an inflection point for the adoption of AI technologies within organizations of all types (large corporations, SMEs, public administrations, etc.). The sudden emergence of new generative AI technologies, and more specifically the ChatGPT (OpenAI) "phenomenon", was a determining factor. The specific features of generative AI stem from a combination of three factors:

  • The generative capability, which makes it possible to produce text, answers, images, code, 3D renderings, etc.;
  • The foundation model, trained on vast quantities of data, which enables it to be used for a wide range of tasks;
  • The user interface, which allows natural-language interaction through a chat prompt, making these complex models accessible to the widest possible audience (a minimal interaction is sketched below).
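To make the third factor concrete, here is a minimal sketch of a prompt-based chat interaction. It assumes OpenAI's Python SDK and an API key available in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative assumptions, not part of the original article.

```python
# Minimal sketch of a chat interaction with a generative model, assuming
# OpenAI's Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable. The model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat-capable model works
    messages=[
        # The same foundation model serves many tasks; only the prompt changes.
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain foundation models in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern, swapping only the prompt, covers translation, summarization, code generation and so on, which is precisely what makes the interface accessible to the widest possible audience.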

Major challenges for freedom

The transformative power of AI technologies and their benefits are now well documented. However, their growing interweaving with human activities and processes raises a number of questions, intrinsically linked to the specific features of machine learning. AI systems learn from data, in ever greater quantities (even though a growing body of research is now devoted to "frugal" AI).

This data may be protected in a number of ways: copyright (literary and artistic property), personal data protection, industrial or trade secrets, and so on. The development of AI systems on vast quantities of data thus raises the central question of transparency, which stems essentially from the fact that a large part of the know-how of AI system designers lies in the way they collect, select and pre-process data.

There is thus a tension between, on the one hand, the desire of AI system designers to protect their secrets, manufacturing processes and sources, and, on the other, the ethical requirement to communicate openly about these elements, including giving individuals the possibility to object to the processing of their data.

AI systems are also known to generate discriminatory biases, or even to amplify them: one gender favored over another in the automatic screening of CVs, certain patients diagnosed less accurately because of their ethnic origin, people suspected of fraud because of where they live, or individuals misidentified in images because of their skin color. In a world steeped in Anglo-Saxon culture, it is also critical that AI "assets" (datasets, models, etc.) reflect pluralism and cultural specificities, at the risk of otherwise standardizing and impoverishing the representations produced.
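Such biases can be quantified. The sketch below, a toy illustration on synthetic data rather than an audit method endorsed by the article, computes the selection-rate gap between two groups in a hypothetical CV-screening system; the figures and the "four-fifths" threshold are illustrative assumptions.

```python
# Toy illustration (synthetic data, illustrative numbers) of measuring a
# discriminatory bias in a hypothetical CV-screening system.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)  # a protected attribute, e.g. gender

# Hypothetical screening decisions produced by an AI system:
selected = np.where(group == "A",
                    rng.random(n) < 0.30,   # group A selected ~30% of the time
                    rng.random(n) < 0.18)   # group B selected ~18% of the time

rate_a = selected[group == "A"].mean()
rate_b = selected[group == "B"].mean()
print(f"selection rate A: {rate_a:.1%}, B: {rate_b:.1%}")
# The "four-fifths rule" used in US hiring practice flags ratios below 0.8:
print(f"disparate impact ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```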

AI systems also present risks from a security standpoint, and this risk is twofold. These technologies can multiply attacks (e.g. massive, high-quality phishing, large-scale disinformation campaigns, etc.), while at the same time presenting new vulnerabilities, again because they are based on machine learning. For example, these systems can be the target of cyber-attacks aimed at subverting their normal operation by presenting them with corrupted inputs, introducing backdoors by "poisoning" training data, or "exfiltrating" information through carefully crafted model queries (reconstruction of training data, model theft, membership inference attacks, etc.) [2]. While the reality of such attacks may still seem remote, there is no doubt that they will develop ever more rapidly as AI systems become more widely adopted.
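To illustrate the membership inference attack mentioned above, here is a minimal sketch on synthetic data, not a description of a real-world attack: an overfitted model is typically more confident on examples it was trained on, and an attacker can exploit that gap to guess whether a given record was part of the training set. The dataset, model and threshold are all illustrative assumptions.

```python
# Toy membership inference sketch: compare the model's confidence on training
# ("member") samples versus unseen samples. All choices here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, y_member = X[:1000], y[:1000]      # data the model was trained on
X_outside, y_outside = X[1000:], y[1000:]    # data it has never seen

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_member, y_member)

def confidence_on_true_label(model, X, y):
    # Probability the model assigns to the correct class for each sample.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_member = confidence_on_true_label(model, X_member, y_member)
conf_outside = confidence_on_true_label(model, X_outside, y_outside)

# The attacker guesses "member" when confidence exceeds a threshold.
threshold = 0.9
print(f"flagged as members (training set): {(conf_member > threshold).mean():.0%}")
print(f"flagged as members (unseen set):   {(conf_outside > threshold).mean():.0%}")
```

The gap between the two rates is the attacker's signal, which is why limiting overfitting and rate-limiting model queries are common mitigations.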

A final issue, not specific to AI technologies but amplified by them, is the delegation of power. The increasing hybridization of tasks performed with the support of AI systems raises the question of control and free will. What role does the human being play in an automated process? This highly complex question requires us to consider the entire framework within which the task in question is carried out. Does the individual risk blindly trusting the system or, on the contrary, rejecting all its suggestions? What are the consequences of an individual error? Do these consequences differ depending on whether the user followed the suggestion or opposed it? And so on.

Regulation for better innovation

The ferment around the subject has led many players to take a stance on its regulation. While perceptions of what AI regulation should look like vary from one player to another, there is a broad consensus on the need for it. Sam Altman, CEO of OpenAI, told the US Congress that such regulation would benefit the industry. Indeed, the use of AI technologies will only develop if a framework of trust responds to the issues at stake. The regulation of AI technologies has thus become a global issue. China has adopted the first regulation on generative AI [3], the United States now has the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence Executive Order issued by the Biden administration [4], and the G7 countries have adopted guiding principles [5], to name but a few.

To date, there is no AI-specific framework in France or Europe. Certain "horizontal" regulations apply, such as the General Data Protection Regulation (GDPR) when the data processed is personal. Sector-specific regulations also come into play, such as those relating to product safety, which may apply to certain AI systems, medical devices for example. For the past ten years, however, Europe has been examining how to regulate the uses of AI technologies. These efforts have culminated in the proposal for a European regulation on AI, the fruit of more than three years of work. Described as historic by European Commissioner Thierry Breton, the political agreement reached in December 2023 on the draft regulation paves the way for its imminent adoption (which may already have happened as you read these lines).

The European AI Regulation thus proposes a clear and ambitious framework directly tied to the technical capabilities of systems. Its durability, however, which depends on the mechanisms provided for updating it, poses a major challenge: without them, a disconnect could form between regulatory requirements and the constantly evolving practices of the AI industry.

Finally, we need to distinguish between regulation as rule-making (réglementation) and regulation as ongoing oversight (régulation), the latter being the concrete, embodied implementation of the former. To meet the challenges at hand, such oversight requires constant dialogue between the relevant authorities and all the players in the AI ecosystem: companies (major groups, start-ups, SMEs), professional federations, civil society, the research community, political authorities, training institutes and so on.

Innovation is abundant, multiple and dispersed. However, it cannot be reduced to freedom from all rules. Its widespread dissemination, its translation into uses by and for the greatest number, requires a framework of trust based on standards and principles accepted and shared by all players. Without such a framework, ingenuity and inventiveness will struggle to become part of a viable societal and economic model.

So we need to ask how to provide a framework without restricting, and how to innovate and experiment without endangering. It is the role of regulation to strike an acceptable balance between constraint and freedom, in keeping with shared values. This tension reflects both our major democratic principles and a society's image of itself.

The European AI Regulation

Initiated in 2021 by the European Commission, the AI Regulation, or AI Act, aims to frame artificial intelligence so that it is trustworthy, human-centric, ethical, sustainable and inclusive. This text of European scope, i.e. applied uniformly across the continent, is based on a risk scale that classifies AI systems into four levels:

  • AI systems posing an unacceptable risk, which are therefore prohibited;
  • high-risk AI systems;
  • limited-risk AI systems;
  • minimal-risk AI systems.

The regulation mainly concerns the first two categories, identifying the systems that cannot be deployed on European soil (e.g. social scoring, subliminal manipulation or remote biometric identification) and those at high risk: AI systems used in medical devices, education, recruitment or law enforcement, for example. The latter must meet a number of requirements, such as:

  • include a risk management system;
  • implement data governance (ensuring in particular relevance, accuracy, representativeness and robustness);
  • maintain technical documentation;
  • be supervised by a human operator;
  • implement logging, supervision, transparency and cybersecurity measures, etc.

AI systems meeting these requirements will be able to display the CE mark and circulate freely on the European market.

Author

PhD, Télécom ParisTech, 2011
