Defining artificial intelligence is no easy matter. Since the mid-20th century, when it was first recognized as a specific field of research, AI has always been envisioned as an evolving frontier rather than a settled research field. Fundamentally, it refers to a programme whose ambitious objective is to understand and reproduce human cognition, that is, to create cognitive processes comparable to those found in human beings. We are therefore naturally dealing with a very wide scope here, both in terms of the technical procedures that can be employed and the disciplines that can be called upon: mathematics, information technology, cognitive sciences, etc. There is a great variety of approaches to AI: ontological methods, reinforcement learning, adversarial learning and neural networks, to name just a few. Most of them have been known for decades, and many of the algorithms used today were developed in the ’60s and ’70s. Since the 1956 Dartmouth conference, artificial intelligence has alternated between periods of great enthusiasm and disillusionment, impressive progress and frustrating failures. Yet it has relentlessly pushed back the limits of what was thought to be achievable only by human beings. Along the way, AI research has achieved significant successes: outperforming human beings in complex games (chess, Go), understanding natural language, etc. It has also played a critical role in the history of mathematics and information technology. Consider how many programs that we now take for granted once represented a major breakthrough in AI: chess apps, online translation programmes, etc.

Its visionary nature makes AI one of the most fascinating scientific endeavors of our time, and as such its development has always been accompanied by the wildest, most alarming and far-fetched fantasies, which have deeply colored the general population’s ideas about AI and the way researchers themselves relate to their own discipline. (Science) fiction, fantasy and mass projections have accompanied the development of artificial intelligence and sometimes influence its long-term objectives: evidence of this can be seen in the wealth of works of fiction on the subject, from 2001: A Space Odyssey to Her and Blade Runner, as well as a significant proportion of literary science fiction. Finally, it is probably this relationship between fictional projections and scientific research which constitutes the essence of what is known as AI. Fantasies, often ethnocentric and based on underlying political ideologies, thus play a major role, albeit a frequently disregarded one, in the direction in which this discipline is evolving.

In recent years, artificial intelligence has entered a new era, which gives rise to many hopes. Most notably, this has been tied to the recent success of machine learning. Thanks to complex algorithms, increased computing power and the exponential growth of human- and machine-generated data, various applications have been developed in translation, transport (driverless cars), health (cancer detection), etc. It is worth noting that this progress in AI is taking place in a technological context marked by the datafication of the world, which affects all sectors of our society and economy, by the development of robotics, and by blockchain (the distributed ledger technology which enables transactions between two or more agents without the presence of a trusted third party or institution, and which most notably underlies cryptocurrencies such as Bitcoin).
