This could involve the implementation of dedicated, multiannual streamed budgets which incorporate the potential for cost-savings, in order to encourage the examination of promising applications, the study of impacts and the launch of pilot projects. The aim is to increase flexibility so as to seize upon the transformations linked to AI at an adapted working mode and pace. The dedicated nature of these budgets makes it possible to move away from short-term needs; the multiannual, streamed element accommodates the evolving and responsive nature of AI, in contrast with annual scheduling tools, inasmuch as opportunities continually present themselves and projects come to fruition, fail or succeed. Lastly, incorporating the potential for cost-savings creates an incentive to count negative costs, so as to avoid favoring the saving of one euro next year over savings of 10 or even 100 times that amount in the following years (a worked illustration of this arithmetic is given at the end of this section). The vehicle of multiannual programming laws could be studied.

Developing the reliability, safety and security of AI technology

Metrology

Public authorities must act to develop and implement the standards, tests and measurement methods needed to make AI technology more secure, more reliable, more usable and more interoperable. In contrast to expert systems, whose reliability and safety can (in theory at least) be developed and tested by design, systems which implement AI make decisions based on models built from data. New protocols incorporating new metrics should therefore be developed, to be applied to data, performance, interoperability, usability, safety and confidentiality; a minimal sketch of such a test protocol is given at the end of this section. In this regard, the responsibilities of the LNE (Laboratoire national de métrologie et d'essais, the French National Laboratory of Metrology and Testing) could be expanded, within the realm of its historical remit, so that it becomes the competent authority for assessment (metrology) in the field of AI and builds the test methods required to achieve this.

Safety

Whilst AI fosters the emergence of new opportunities, it also fosters the emergence of new threats. A case study on this topic was the subject of recent publications showing that it is possible to arbitrarily skew the results produced by certain neural-network models, which poses a significant safety issue for critical applications. The example of the driverless car is telling in this regard: the existence of means to skew its perception of its surroundings (deliberately causing a stop sign to be misinterpreted, for example) could cause severe incidents. Safety is therefore a significant subject, notably for critical systems and for systems with a physical component capable of causing damage in the event of an attack.
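To make the multiannual accounting argument concrete, here is a minimal sketch contrasting the two budget views; all figures are hypothetical assumptions chosen for illustration, not amounts from this report.

```python
# Illustrative arithmetic only: every figure below is a hypothetical
# assumption. An annual budget view compares costs and savings within a
# single year; a multiannual view totals the savings an upfront
# investment yields over the whole programme.

upfront_cost = 1_000_000      # euros invested in an AI project in year 0
saving_per_year = 300_000     # euros saved in each subsequent year
horizon_years = 5             # length of the multiannual programme

# Annual view: year one shows only the cost, so the project looks like
# a net loss and a simple budget cut looks preferable.
annual_view = saving_per_year - upfront_cost

# Multiannual view: savings accumulate over the programme's horizon.
multiannual_view = saving_per_year * horizon_years - upfront_cost

print(f"Year-one balance:  {annual_view:+,} EUR")
print(f"Five-year balance: {multiannual_view:+,} EUR")
```

Under the annual view the investment registers only as a loss; spread over the programme's horizon, the same decision registers a net saving, which is precisely the asymmetry a streamed, multiannual budget is meant to correct.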

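As flagged in the metrology discussion above, the following is a minimal sketch of what a standardized test protocol for a data-driven model might report, limited here to predictive performance and a simple robustness measure. The toy dataset, metric names and noise level are illustrative assumptions and do not describe any existing LNE protocol.

```python
# A minimal sketch of a repeatable test protocol for a data-driven
# model: measure performance on held-out data, then measure how that
# performance degrades under input noise. The model, dataset and noise
# level are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset and model standing in for a system under evaluation.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

rng = np.random.default_rng(0)
report = {
    # Performance: accuracy on data the model has never seen.
    "accuracy": model.score(X_test, y_test),
    # Robustness: accuracy when inputs are perturbed by Gaussian noise.
    "noise_robustness": model.score(
        X_test + rng.normal(0.0, 0.3, X_test.shape), y_test
    ),
}
for metric, value in report.items():
    print(f"{metric}: {value:.3f}")
```

A full protocol would add the other dimensions named above, such as interoperability, usability and confidentiality, each with its own agreed measurement method, which is the kind of work an expanded LNE remit would cover.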
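The stop-sign scenario described in the safety paragraph corresponds to what the literature calls adversarial examples. As a minimal sketch, the code below applies the fast gradient sign method (one published attack of this kind; the report itself names no specific technique) to a toy, randomly initialized PyTorch classifier. The network, image size and step size epsilon are illustrative assumptions.

```python
# A minimal sketch of an adversarial perturbation: the fast gradient
# sign method nudges an input in the direction that most increases the
# model's loss, skewing the classification. Toy model and parameters
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """Return a copy of x perturbed to increase the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # One step of size epsilon along the sign of the input gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy classifier standing in for, say, a road-sign recognition network.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)    # a 32x32 RGB input image
label = torch.tensor([0])       # its true class

x_adv = fgsm_attack(model, x, label)
print("clean prediction:     ", model(x).argmax(dim=1).item())
print("perturbed prediction: ", model(x_adv).argmax(dim=1).item())
```

Even a single gradient step of this kind can be enough to flip a model's prediction while leaving the input visually almost unchanged, which is why robustness testing belongs in the AI metrology agenda outlined above.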