which exclusively involve respect for all the legislation and regulations contained in company policies; we already expect computer experts to respect the law. The aim of teaching ethics is rather to pass on to the future architects of a digital society the conceptual tools they will need to identify and confront, in a responsible fashion, the moral issues they will encounter in the course of their professional activities.

In addition, given the practical implications of the questions raised concerning the protection of privacy, discrimination and intellectual property, students need practical instruction so that they are equipped to connect normative theories (professional ethics) with their application to particular circumstances. This requirement seems all the more necessary given that a significant proportion of the issues raised are not immediately apprehensible under the law. What can we do about the fact that recommendation algorithms keep users inside the security of comfortable filter bubbles, isolated from the realities of an ever more complex world? Should programmers work towards pluralism? From another angle, should the selection process for finding the best candidate to fill a post be reduced to merely looking at qualifications awarded by educational institutions and universities? In cases where standards are non-existent, unmentioned or insufficient, the developer bears an increased moral responsibility. While it cannot offer immediate solutions, teaching ethics could nonetheless trigger a virtuous cycle: training specialists to be more responsible could lead to the development of more responsible technology.

What should these courses contain?

In order to train specialists to be more responsible, the teaching of ethics, and of the social sciences in general, should be included in all engineering and computer science course syllabuses.
Ultimately, the aim would be to produce graduates who combine the technical expertise needed to develop efficient systems with the grounding in social sciences needed to understand the impact of their work on society and its citizens. On the basis of these criteria, various course models could be designed. A major/minor system could be put in place in higher education establishments, allowing students to choose a core subject, computer science for example (major), and a second subject such as Law (minor).

What about lawyers?

We cannot leave the responsibility of ensuring that AI systems operate within the law to researchers and engineers alone. It is vital that legal professionals take on their fair share of this task. A precondition would be a genuine awareness of this issue within the legal profession and an alignment of the various courses available. Here again, the major/minor system described above could be applied, with the options reversed: a major in Law and a minor in computer science.

Introducing a Discrimination Impact Assessment

In a certain number of cases, current European legislation requires operators who process personal data first to carry out an impact assessment to determine the potential effects of their activities on the rights and interests of those concerned: this is the privacy impact assessment, or PIA. In this way, data controllers are responsible for self-assessing the impact of their activities, taking appropriate corrective action and, in the event of an inspection, being able to demonstrate that all necessary measures have been taken, which places them in full control of the process. This departure
For a Meaningful AI - Report - Page 120