Part 5 — What are the Ethics of AI?

In addition to the banking and insurance sectors, many other institutions (the courts, the police, the army, the immigration services) are beginning to make use of predictive analysis systems for a variety of purposes. In France, these scenarios remain largely hypothetical and such initiatives are still only at the experimental stage. Certain foreign governments, however, have already gone a step further; this is the case in Australia. In 2013, the Australian Customs and Border Protection Service installed a system for analyzing the terrorist threat posed by foreign passengers bound for Australia. This system, designed by IBM, cross-checks the data contained in passenger records against data held by the Australian intelligence services and social data available online, in order to establish risk profiles. Following this example, law enforcement agencies could in future rely on algorithms to manage the deployment of their patrol units, and armies could use LAWS (Lethal Autonomous Weapons Systems) in operational theatres abroad. Changes of this nature, whether in the fields of health, banking and insurance or, more particularly, in the context of sovereignty, raise important ethical questions.

Predictive Policing

Police departments, initially in the United States and now in Europe, are exploring the possibilities of using predictive algorithms in the context of their activities. These methods, commonly known as predictive policing, involve applying big data prediction and analysis techniques for the purposes of crime prevention. In practice, they cover two distinct applications. The first consists of analyzing geographical data in order to identify crime 'hotspots' where offences and crimes are liable to take place, so as to increase surveillance in these zones and thus maintain a deterrent presence. The second relates to the analysis of social data and individual behavior, for the purposes of identifying potential victims or criminals and being able to act promptly.

These two applications are already being deployed in several American cities; French and European police and gendarmerie services are looking into the possibility of adding them to the tools they use for crime prevention. The earliest available research on their impact in the United States recommends proceeding with caution. Predictive policing and justice tools are not only subject to significant technical limitations but may also infringe fundamental liberties (privacy and the right to a fair trial). On a purely practical level, we need to bear in mind that, sophisticated as they are, these systems remain fallible: they can make errors, with potentially disastrous consequences for the lives of the individuals they wrongly assess.

The ProPublica enquiry

In May 2016, journalists from ProPublica (an American investigative journalism organization) revealed that the COMPAS algorithm, used by the American justice system to estimate the risk of recidivism and developed by the company Northpointe, was both racist and ineffective. An analysis of the scores attributed to prisoners revealed that the algorithm systematically overestimated black American prisoners' risk of recidivism, at twice the rate of white Americans. In addition, the latter were often wrongly assessed as presenting a low risk, despite going on to reoffend.
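The disparity ProPublica identified can be stated precisely as a gap in error rates between groups: among people who did not reoffend, black defendants were far more likely to have been labelled high risk (false positives), while among people who did reoffend, white defendants were more likely to have been labelled low risk (false negatives). The sketch below, in Python with invented illustrative records rather than ProPublica's actual data, shows how such per-group error rates are computed.

```python
# Minimal sketch of the kind of error-rate analysis ProPublica performed on
# COMPAS scores. The records below are invented for illustration; they are
# NOT ProPublica's data. Each record: (group, predicted_high_risk, reoffended).

records = [
    ("black", True,  False),
    ("black", True,  True),
    ("black", True,  False),
    ("black", False, True),
    ("white", True,  False),
    ("white", False, True),
    ("white", False, True),
    ("white", False, False),
]

def error_rates(records, group):
    """False positive rate (labelled high risk but did not reoffend) and
    false negative rate (labelled low risk but did reoffend) for one group."""
    rows = [r for r in records if r[0] == group]
    non_reoffenders = [r for r in rows if not r[2]]
    reoffenders = [r for r in rows if r[2]]
    fpr = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    fnr = sum(not r[1] for r in reoffenders) / len(reoffenders)
    return fpr, fnr

for group in ("black", "white"):
    fpr, fnr = error_rates(records, group)
    print(f"{group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

With these illustrative figures the black group's false positive rate is twice the white group's, while the pattern reverses for false negatives; a risk score can therefore appear equally "accurate" overall while distributing its errors very unevenly between groups.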
