capacities to observe, understand and audit their performance and, on the other, through massive investment in research into their accountability. Next, the protection of our rights and freedoms needs to be adapted to accommodate the potential for abuse involved in the use of machine learning systems. Yet it appears that current legislation, which focuses on the protection of the individual, is not consistent with the logic introduced by these systems—i.e. the analysis of a considerable quantity of information for the purpose of identifying hidden trends and behavior—and their effect on groups of individuals. To bridge this gap, we need to create collective rights concerning data. Meanwhile, we need to ensure that organisations which deploy and utilize these systems remain legally responsible for any damage caused. Although the terms of this legislation concerning responsibility are still to be defined, the French Data Protection Act of 1978 and the GDPR (applicable since 2018) have already established its principles.

However, legislation cannot solve everything, partly because it takes much more time to generate law and norms than it does to generate code. It is therefore vital that the 'architects' of our digital society—the researchers, engineers and developers who are designing and commercializing this technology—play their part in this mission by acting responsibly. This means that they should be fully aware of the potentially negative effects of their technology on society and should make positive efforts to limit these. In addition, given the importance of the ethical questions that confront future developments in AI, it would be prudent to create a genuinely diverse and inclusive social forum for discussion, to enable us to democratically determine which forms of AI are appropriate for our society.
Finally, it is becoming ever more crucial to politicize the issues linked to technology in general and AI in particular, in view of the important part they play in our lives. To this end, the proposed Chambre du futur (Chamber of the Future), announced by the President of the Republic in the context of the reform of the ESEC, the French Economic, Social and Environmental Council, needs to play a major role in the strictly political debate on artificial intelligence and its consequences.

1. Opening the 'Black Box'

A large proportion of the ethical considerations raised by AI have to do with the obscure nature of this technology. In spite of its high performance in many domains, from translation to finance to the motor industry, it often proves extremely difficult to explain the decisions it makes in a way that the average person can understand. This is the notorious 'black box problem': it is possible to observe incoming data (input) and outgoing data (output) in algorithmic systems, but their internal operations are not very well understood (see inset).

Nowadays, our ignorance is principally due to the paradigm shift introduced by machine learning, in particular deep learning. In traditional computer programming, building an intelligent system consisted of writing out a deductive model by hand, i.e. the general rules from which conclusions are inferred in the processing of individual cases. Such models are by definition explainable, inasmuch as the rules which determine their decision-making are established in advance by a programmer, and it is possible to tell in each individual case which of the rules have been activated.
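To make the contrast concrete, the 'deductive model' described above can be sketched in a few lines of code. This is a purely hypothetical illustration (the rules, thresholds and field names such as income and debt_ratio are invented for the example, not taken from the report): because every rule is written out by hand, the system can report exactly which rule produced each decision—precisely the explainability that learned models lack.

```python
def rule_based_decision(applicant):
    """Hand-written deductive model: returns (decision, fired_rule)
    so that every individual case can be traced to an explicit rule."""
    # Rule 1: reject applicants below a hypothetical income threshold
    if applicant["income"] < 20_000:
        return "reject", "rule 1: income below 20,000"
    # Rule 2: reject applicants with a hypothetical excessive debt ratio
    if applicant["debt_ratio"] > 0.5:
        return "reject", "rule 2: debt ratio above 50%"
    # Rule 3: approve everyone else
    return "approve", "rule 3: default approval"

decision, reason = rule_based_decision({"income": 35_000, "debt_ratio": 0.3})
print(decision, "-", reason)  # the 'why' is available for every case
```

In a deep learning system, by contrast, the decision emerges from millions of learned numerical weights, and no such per-case rule trace exists—which is what makes the 'black box' hard to open.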

For a Meaningful AI - Report - Page 114