represented as presenting a low risk, which was inconsistent with their actual rate of recidivism. This means that the algorithm resulted in the continued detention of black prisoners who would probably not have re-offended (false positives), whilst it allowed white potential re-offenders to go free (false negatives). Amongst other things, the ProPublica enquiry reminds us that we are not all equal when it comes to these systems. Since the COMPAS algorithm was trained on data from police and judicial databases, it is liable to be biased and to reproduce the prejudices currently found in society. The absence of critical distance in its use could lead to the entrenchment of discrimination in the law and the systematic dissemination of prejudice.

We should also consider the impact of these solutions on those who may be required to implement them, in this case judges and police officers. Indeed, the increased use of these technical solutions will lead to increased pressure to standardize the decisions made by institutions: it is far easier for a judge to follow the recommendation of an algorithm which presents a prisoner as a danger to society than to examine the details of the prisoner's record himself and ultimately decide to free him. It is easier for a police officer to follow a patrol route dictated by an algorithm than to object to it. In both cases, they would be obliged to defend their 'discretionary' decisions, and in these circumstances it would be preferable for their approaches or decisions to be in line with standard procedure. However, the outcome of this shift is very uncertain, and there are concerns that it could increasingly undermine their sense of individual responsibility. On the other hand, these systems would not be vulnerable to the strain of decision-making which sometimes results in judges freeing fewer prisoners at the end of the day than in the morning…

Another danger linked to the proliferation of systems for predictive analysis is the increased threat of mass surveillance. For predictions to be as accurate as possible and to optimize decision-making, these systems need access to as much information as possible, at the expense of individual privacy. More fundamentally, these systems are liable to reduce individual autonomy, by encouraging judges to detain prisoners who have already served their sentences or by organizing the systematic surveillance of populations in deprived areas.

Regulating the use of predictive algorithms

To prevent these situations from arising, citizens should first of all be informed of their rights: in these two instances, the right to an effective remedy and the right to an explanation of the data processing on which surveillance is based. From this point of view, we should remember that in 1978 the French Data Protection Act laid down the principle according to which 'no court or other decision involving legal consequences for an individual can be taken solely on the basis of the automated processing of personal data intended to define the profile of the person concerned or to assess certain aspects of his personality', adding that 'an individual has the right to know and to challenge this information and the logic underlying the automated processing when these results are denied him'. These
