Part 1 — An Economic Policy Based on Data

Amongst the problems raised, we will notably discuss the possibility of the following occurring:

- arbitrary skewing of algorithm results through the manipulation of input data;
- manipulation of the data ingested by an AI algorithm during its learning phase;
- creation of new attacks that exploit the weaknesses of current AI techniques.

Safety is of clear concern to experts, but not to them alone: collective awareness of the issue is required. Generally speaking, and more specifically in terms of AI, security must be considered from the outset of any process in order to avoid a 'patch' culture, and should be built into technological products and solutions from the design phase. This is one of the reasons why it is useful to call on the support of specialist actors, who are able to propose solutions thanks to their experience and expertise. This is especially critical given that recent events continue to reveal security breaches, in both software and hardware products.

The task of monitoring, foresight and study of the safety and security issues posed by AI could be allocated to the ANSSI (Agence nationale de la sécurité des systèmes d'information — the French National Cybersecurity Agency), which could coordinate a State-level network of expertise in the fields of cyber defense, defense and critical systems.

Standardization

One of the specific aspects of AI is the emergence of de facto standards, notably of a technological nature: this is the case in deep learning, for example, where a technology such as TensorFlow (developed by Google) was adopted by an overwhelming majority of the market as soon as it was released, whether by individuals, startups or academics. Whilst these building blocks may prevent an ecosystem in which the same solutions are continually reinvented, they also contribute to entrenching de facto standards.
This situation could prove highly detrimental if the members of GAFAM (Google, Apple, Facebook, Amazon and Microsoft), who remain its main beneficiaries, decided to reclaim all of the developments in AI that their technologies enable. As such, the greatest risk in AI is presented not by the algorithms themselves, but by the technological (and human) "stack" that facilitates their implementation. In this context, standardization is not conceivable without maintaining very close ties with the ecosystem as a whole: research, industry and innovation. This approach must aim to curb the trend towards monopolization and the logic of lock-in. It will notably involve the establishment and application of non-proprietary interoperability standards, within a proactive and coordinated approach, as well as local versions of the tools used to produce personal and non-personal data.
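The first risk raised above — the skewing of a model's results through manipulated training data — can be made concrete with a minimal sketch. Everything below is invented for illustration: the two-class toy dataset, the simple nearest-centroid "model" and the attack parameters are hypothetical, and real poisoning attacks target far more complex training pipelines. The sketch shows how injecting a small number of mislabeled points into the training set can drag a class centroid across the decision boundary and degrade accuracy on clean test data.

```python
# Illustrative sketch only: hypothetical data, a toy nearest-centroid
# classifier, and an invented label-poisoning attack.
import random

random.seed(0)

def make_data(n, label, cx, cy):
    """n points scattered around a class centre (cx, cy), all with one label."""
    return [((cx + random.gauss(0, 1), cy + random.gauss(0, 1)), label)
            for _ in range(n)]

def centroids(data):
    """Mean position of the points carrying each label."""
    sums = {}
    for (x, y), label in data:
        sx, sy, c = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, c + 1)
    return {lbl: (sx / c, sy / c) for lbl, (sx, sy, c) in sums.items()}

def predict(cents, point):
    """Assign the label of the nearest centroid (squared distance)."""
    x, y = point
    return min(cents, key=lambda l: (x - cents[l][0]) ** 2 + (y - cents[l][1]) ** 2)

def accuracy(cents, data):
    return sum(predict(cents, p) == lbl for p, lbl in data) / len(data)

# Two well-separated classes: the model trained on clean data is near-perfect.
train = make_data(100, 0, -3, 0) + make_data(100, 1, 3, 0)
test = make_data(50, 0, -3, 0) + make_data(50, 1, 3, 0)
clean_acc = accuracy(centroids(train), test)

# Poisoning: the attacker injects a handful of extreme points carrying the
# wrong label, dragging the class-0 centroid across the decision boundary.
poison = [((20.0, 0.0), 0)] * 30
poisoned_acc = accuracy(centroids(train + poison), test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The attacker here never touches the learning algorithm itself — only its input data — which is precisely why the report argues that security must be considered from the design phase of the whole pipeline, not patched onto the model afterwards.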

For a Meaningful AI - Report