To: European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG)
CC: European Parliament (Secretariat), Members of the European Parliament, European & international bodies of experts in AI, European & international news media
6-Feb-2020, Athens, Greece
Subject: Open letter on Artificial Intelligence and the Prohibition of Autonomous Weapons Systems (Hellenic Informatics Union – General Assembly Resolution, Dec/2018)
Dear Sir or Madam,
The Hellenic Informatics Union (HIU) is the scientific and professional organization representing Informatics graduates in Greece. Since 2001 we have been working to promote Informatics in every aspect of society, the economy and education, upholding the highest academic standards and our own Code of Ethics, the first ever introduced for Informatics in Greece.
In December 2018, the General Assembly of the HIU unanimously voted in favor of the following proposal regarding the basic principles of Artificial Intelligence (AI), its implementation framework and an international ban on autonomous weapons systems.
Basic definitions and implementation framework
Artificial Intelligence (AI):
As a concept, AI refers to the general ability of an algorithm to produce results that simulate the accuracy, reliability and cognitive value ("understanding") of solving a specialized or generic problem, one which usually cannot be solved by simple mathematical or analytical methods and requires the assistance of a domain expert [1][2]. To this end, such an algorithm is often required to be capable of exemplifying, discovering and linking general concepts (abstraction), encoding knowledge and drawing conclusions from it, exploiting past "experience" (adapting after mistakes), etc. On a practical level, such algorithms make it possible to optimally solve "closed" cognitive problems, such as games like chess, but also much more complex real-world problems such as voice recognition, human language understanding, face recognition, handwriting recognition, and analysis and automatic diagnosis of medical images.
Autonomous Weapons Systems (AWS):
The AWS category includes weapons systems that incorporate, to a greater or lesser extent, the ability to correct their course or to guide themselves fully autonomously for optimal targeting and target destruction [3]. This capability encompasses a wide range of more or less "intelligent" functions, from the proximity fuzes and depth charges of WWII to fully autonomous cruise missiles with satellite / inertial / terrain guidance and detonators that measure penetration (to detonate at a specific point inside buildings).
In view of the above, it is clear that AI and AWS, or "lethal" AWS (LAWS), are becoming increasingly intertwined. Over the last three decades, digital technology and the ever-increasing computing resources available on ultra-small-scale circuits have allowed the implementation of increasingly complex, more demanding and more "intelligent" algorithms in modern weapons systems. At the same time, the complexity and speed of processing required make human intervention not only less necessary but often a "bottleneck" in decision-making processes, e.g. in the guidance system of a missile traveling towards its target at 3-5 times the speed of sound. As a result, in recent years modern weapons systems have become increasingly autonomous [4][5], not only after the basic decision to use them has been taken, but also before it.
Unfortunately, all indications are that investment in the development of increasingly "smart" autonomous weapons systems will continue to grow in the coming years. In September 2018, the US DARPA announced [6] the launch of a new $2 billion research and development program for AI systems aimed at better human-machine collaboration [7][8]. A similar $2.1 billion program had already been announced [9] by China in January 2018, while a year earlier the Chinese government had launched a three-year plan [10][11] to make the country a world leader in AI. It is worth noting that in its long-term plan [12], named "AI Next", DARPA identifies [13] the upgrade of AI from a passive decision-support tool to a real human "partner" as a key objective, which in the case of weapons systems development (DARPA's main investment area over time) means a significant upgrade of the autonomy of these systems, to the level of decision-making without mediation or direct human control.
In addition to the key issues related to the ethical, social and legal dimensions of the use of any weapons system (e.g. weapons of mass destruction), modern technology allows an almost complete transfer of the responsibility for the decision to deploy or not to the "machine". Modern Unmanned Combat Aerial Vehicles (UCAVs) [14][15] have the ability to analyze the battlefield, identify targets, guide or launch their own missiles and destroy those targets, without human intervention at any stage. The ethical and legal liability problems that arise in non-combat areas in cases of malfunction, for example in fully autonomous driving systems or even in standard braking or airbag control systems, become far more serious in weapons systems, as they concern the decision of whether or not to use an instrument that is lethal by design. Removing the human factor from the control loop creates the potential for multiple and generally fatal consequences that are still extremely difficult to predict. Already, even very simple "passive" robotic systems, as in the cases of Uber [16][17] and Amazon [18], have demonstrated the seriousness of the problem, both at the technical / design level and, above all, at the legal / moral level concerning the attribution of responsibility (legal liability / moral hazard).
In 2018 there were, among other things, two public calls for an international ban on AWS / LAWS: the first in the form of an open letter [19][20] signed by some of the largest companies and private organizations (e.g. Google DeepMind), as well as a number of academic and research institutions from around the world; the second in the form of a formal statement / resolution [21] to the UN (August 2018) by the nearly 4,000 scientists who co-signed the aforementioned letter. Unfortunately, Greece did not vote in favor of the corresponding resolution that was put before the European Parliament, and is one of the countries [22] that currently have no formal policy on AI and AWS.
As a scientific and professional union, we define our core positions and principles through the Code of Conduct for Information Technology [23], the first to be introduced for this field in Greece. Following the international concern of academic, research, social, political and governmental bodies around the world, and in view of the ongoing mobilization on this subject, HIU proposes the following positions as fundamental rules for the proper development and use of Artificial Intelligence, particularly in relation to AWS.
HIU proposals on Artificial Intelligence and AWS
Based on:
- The European Parliament resolution [24] of 16/2/2017 on "Civil Law Rules on Robotics".
- The European Parliament resolution [25][26][27] of 12/9/2018 on an international ban on autonomous weapons systems ("autonomous weapons ban").
- Actions in progress by the UN [28][29] in this direction.
- The open letter [30] by scientists and academics from around the world, through the Future of Life Institute (FLI), which focuses on the broader problem of a "robotic weapons arms race" [31].
- Collective actions of bodies and organizations for the proper use of Artificial Intelligence [32] and Robotics, such as the "Campaign to Stop Killer Robots" [33][34].
- Generally accepted positions / conditions for the safe development of the relevant technology, such as I. Asimov's "Three Laws of Robotics" [35][36].
We propose the following:
Basic Principles of Artificial Intelligence & Robotics
1. The *purpose* of developing and implementing Artificial Intelligence & Robotics (AI&R) must never be aimless: it must serve the common good and protect life.
2. The *investment* in the development and implementation of AI&R must always be accompanied by assurances of transparency and proper guidance regarding its ethical, humanitarian, social, legal and economic dimensions.
3. The *access* to the scientific, technological and productive dimension of AI&R must be equal for all, over time, as a human right to knowledge and goods.
4. The *results* of the development and implementation of AI&R should benefit society as a whole equally, with particular attention to issues of security, protection of personal data and respect for personal freedoms.
5. The *compliance* with the law, and especially the protection of life, liability, risk minimization and damage control in case of malfunction (fail-safe operation), are the top priorities.
6. The *control* of AI&R systems must always be maintained by, or assigned as a matter of priority to, humans, in order to accomplish human-defined objectives.
7. The human *understanding* of the internal operations of an AI&R system and the ability to audit its decision-making processes should be ensured to the maximum extent possible.
8. The *implementation* of AI&R systems in large-scale daily life should respect and ensure, to the maximum extent possible, the individual's personal choice as to whether or not to use them.
9. The *integration* of AI&R scientific and technological achievements into practical applications and their use for peaceful purposes is an obligation of all.
10. The *self-improvement* and *self-protection* of AI&R systems, as a key aspect of their design, should always be subject to human assessment, ensuring that they are carried out in a beneficial way.
As a scientific and professional organization, but also as ordinary citizens, we, the members of HIU, are at the disposal of the State, the Hellenic Parliament and the Members of the European Parliament for further contribution to this issue, which is extremely critical for future generations.
Yours sincerely,
Hellenic Informatics Union (A.C. board)
President | Vice-President | General Secretary | Special Secretary | Financial Manager |
Dimitris Kiriakos | Marios Papadopoulos | Harris Georgiou | Fotis Alexakos | Lena Kapetanaki |
proedros(at)epe.org.gr | antiproedros(at)epe.org.gr | gen_grammateas(at)epe.org.gr | eid_grammateas(at)epe.org.gr | tamias(at)epe.org.gr |
Hellenic Informatics Union, P.O. Box 13801, GR-10310 Athens, Greece
E-mail: info(at)epe.org.gr – Tel/sms: (+30)6981.723690
References
[1]: www.britannica.com/technology/artificial-intelligence
[2]: en.wikipedia.org/wiki/Artificial_intelligence
[3]: en.wikipedia.org/wiki/Lethal_autonomous_weapon
[4]: "CODE Demonstrates Autonomy and Collaboration with Minimal Human Commands" (DARPA, 19/11/2018) – is.gd/owBO5B
[5]: www.defenseone.com/technology/2018/11/us-militarys-drone-swarm-strategy-just-passed-key-test/153007/
[7]: www.darpa.mil/news-events/2018-09-07
[8]: "...contextual reasoning in AI systems to create more trusting, collaborative partnerships between humans and machines..."
[9]: www.technologyreview.com/the-download/609892/beijing-is-getting-a-21-billion-ai-district/
[10]: www.technologyreview.com/s/609038/chinas-ai-awakening/
[11]: www.miit.gov.cn/n1146295/n1652858/n1652930/n3757016/c5960820/content.html
[12]: www.darpa.mil/work-with-us/ai-next-campaign
[13]: "...DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools..."
[14]: en.wikipedia.org/wiki/Unmanned_combat_aerial_vehicle
[15]: Typically, Unmanned Combat Aerial Vehicles (UCAV) include both Remote Piloted Aircraft Systems (RPAS) and fully autonomous flying vehicles capable of locating, recognizing and attacking targets of their choice, without the human-pilot intervention from a distance. The second category of UCAV is mainly based on advanced AI systems and is one of the AWS / LAWS systems discussed here. The term "drone" is often identified with the term UCAV, but in civilian applications drones are usually RPAS (not fully autonomous).
[16]: www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian
[17]: www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html
[19]: futureoflife.org/lethal-autonomous-weapons-pledge/
[20]: futureoflife.org/laws-pledge/
[21]: futureoflife.org/statement-to-united-nations-on-behalf-of-laws-open-letter-signatories/
[22]: futureoflife.org/ai-policy/
[23]: Code of Conduct for Information Technology (HIU), Jul.2016 (Greek) – is.gd/Zc16ri
[24]: www.europarl.europa.eu/sides/getDoc.do
[25]: www.europarl.europa.eu/sides/getDoc.do
[28]: www.theguardian.com/commentisfree/2018/apr/11/killer-robot-weapons-autonomous-ai-warfare-un
[29]: en.wikipedia.org/wiki/Artificial_intelligence_arms_race
[30]: futureoflife.org/open-letter-autonomous-weapons/
[31]: en.wikipedia.org/wiki/Artificial_intelligence_arms_race
[32]: futureoflife.org/ai-principles/
[33]: www.stopkillerrobots.org
[34]: en.wikipedia.org/wiki/Campaign_to_Stop_Killer_Robots