Part 5 — What are the Ethics of AI?

Artificial intelligence now affects every aspect of our social lives. Without always being aware of it, we interact daily with intelligent systems which optimize our journeys, create our favorite playlists and protect our inboxes from spam: they are our invisible workforce. At least, this is the role we have assigned to them: improving our lives, one task at a time.

Recent progress in several fields of AI (driverless cars, image recognition and virtual assistants) and its growing influence on our lives have placed it at the center of public debate. In recent years, many people have raised questions about AI's actual capacity to work in the interests of our well-being and about the steps that need to be taken to ensure that this remains the case. This debate has principally taken the form of a broad discussion about the ethical issues involved in developing artificial intelligence technology and, more generally, in the use of algorithms. In different parts of the world, experts, regulators, academics, entrepreneurs and citizens are discussing and sharing information about the undesirable effects (current or potential) caused by their use and about ways to reduce them. Faced with the need to take respect for our values and social standards into account when addressing the potential offered by this technology, these discussions have logically drawn on the vocabulary of ethics. They occupy the space between what has been made possible by AI and what is permitted by law, in order to discuss what is appropriate. Ethics is, after all, precisely the branch of philosophy which devotes itself to the study of this space, attempting to distinguish good from evil, the ideals to which we aspire and the paths which take us away from them.
Furthermore, aside from these purely speculative considerations concerning AI's 'existential threats' to humanity, debates tend to crystallize around the 'everyday' algorithms which organize our news feeds, help us decide what to buy and determine our training routines. In 2017, Kate Crawford, Cathy O'Neil and many others reminded us that we are not all equal before these algorithms and that their partiality has a real impact on our lives. Every day, invisibly, they influence our access to information, to culture, to employment or, alternatively, to credit.

Consequently, if we hope to see new AI technology emerge that fits in with our values and social standards, we need to act now by mobilizing the scientific community, the public authorities, industry, entrepreneurs and the organizations of civil society. Our mission has humbly attempted to suggest a few ways in which we can start building an ethical framework for the development of AI and to keep this discussion going in our society. These are based on five principles.

In the first place, there needs to be greater transparency and auditability of autonomous systems. On the one hand, we can achieve that by developing our
For a Meaningful AI - Report