Part 5 — What are the Ethics of AI?

The move away from a system of prior authorization is a major paradigm shift towards agility, giving manufacturers scope for innovation. It would be advisable to capitalize on this approach, which combines the right to support for innovation with a real commitment to equal opportunities in innovation in the digital era. The guidelines adopted by the WP29 (the Article 29 Data Protection Working Party) require a privacy impact assessment (PIA) to be carried out when data processing reveals a risk of discrimination or exclusion. This cornerstone of the social acceptability of AI is a matter for separate analysis: the PIA needs to be accompanied by a similar measure applicable to cases of discrimination, a discrimination impact assessment (DIA), which would oblige the creators of AI to consider the social consequences of the algorithms they produce. An approach similar to the one that led the French Data Protection Authority (CNIL) to make free software available, to assist those with less experience in carrying out their own PIA self-assessment, could preside over the DIA measure. France could promote a joint investment project, either through the EU or on the basis of voluntary partnerships with certain member states, to provide the necessary protocols and royalty-free software. A line of investment could, in particular, be devoted to the engineering of this project (legal and operational support, and facilitating the interface between the various competent authorities) so that the solutions identified by research can be implemented.

3. Considering Collective Rights to Data

Developments in AI have revealed a number of 'blind spots' in current legislation on the protection of individuals (and in future legislation, with the advent of the GDPR). These stem from the fact that the French Data Protection Act, like the GDPR, deals solely with personal data. Yet although the scope of protection offered by this legislation is potentially very broad, artificial intelligence does not merely harness personal data. Far from it: many of the issues raised by the use of algorithms now constitute a 'blind spot' of the law. Legislation relating to data protection only regulates artificial intelligence algorithms inasmuch as they are based on personal data and/or their results apply directly to individuals. This holds true in a large proportion of cases (personalized offers, recommended content, etc.), but in practice many uses escape this legislation, even though they may have a significant impact on groups of individuals, and therefore on the individuals within those groups. For example, it has been demonstrated that the statistical aggregates that prompt sending a greater number of police patrols or Amazon couriers to certain areas may have discriminatory consequences for certain sections of the population, due to a mechanism which reproduces existing social phenomena. From the point of view of developments in artificial intelligence, we could even ask whether the concept of personal data still has any real meaning. The pioneering work of Helen Nissenbaum teaches us, for example, that data is a contextual object which may provide information about several individuals or issues simultaneously. Especially since, within the context of deep learning, data is used on
