Part 5 — What are the Ethics of AI?

periodic revisions of systems (for example, in the case of self-learning technology); or alternatively in the addition of a means for self-destruction and abandonment of missions.

Although international dialogue is intended to be pursued within the CCW, it is equally important that these issues be the subject of an international ethical debate bringing together not only technical experts but also civil society and NGOs.

Establishing an observatory for the non-proliferation of autonomous weapons

Unlike other types of dual-use technology, where the situation is the reverse, in the field of AI the civil element brings the military element in its wake; this raises problems of appropriation and adaptation. The issue of proliferation must therefore be addressed in a context where the technological building blocks required to build weapons are no longer supplied by the military, and where anyone with a grounding in AI can divert it to building weapons for the arms trade.

France has existing regulations which allow it to maintain control over military equipment. According to information published by the French General Directorate for International Relations and Strategy of the Ministry of Defense:

The French system for monitoring military equipment is based on a general prohibitory principle, according to which the whole of the sector is subject to State control; the driving force behind this is the CIEEMG, the Inter-ministerial Committee for the Study of Military Equipment Exports. The CIEEMG brings together representatives from various ministries, including those in charge of defense, foreign affairs and international development, and the economy and finance, who have the right to vote on rulings. It reports to the Prime Minister and is chaired by the SGDSN, the General Secretariat for Defense and National Security.
It assesses all aspects of export initiatives, taking into particular consideration the impact of an individual export on regional peace and security, but also the internal situation of the country of final destination, that country's record on respect for human rights, the risk of misuse for the benefit of unauthorized end users, the need to protect the security of our troops and those of our allies, and the need to control the transfer of the most sensitive technology.

Concerning AI, the issue of proliferation needs to be addressed in a context where the technological building blocks required to build weapons are no longer supplied by the military but are developed by private stakeholders for purely civil applications. It should therefore be noted that anyone with a grounding in AI could divert it to building weapons for the arms trade: at present, once a detection made by an algorithm triggers a response from a computer, little additional complexity is needed to turn this into a physical response.

In this context, an observatory could be put in place, along the lines of the observatory for the non-proliferation of nuclear, biological and chemical weapons, which would have an ongoing prospective and monitoring role concerning lethal autonomous weapons and the threats they pose.

For a Meaningful AI - Report