Article Type: Research Article

Author

PhD in Public International Law, Lecturer, Department of Law, Islamic Azad University of Bushehr, Bushehr, Iran

Abstract

Artificial intelligence is the ability of a computer system to solve problems and perform tasks that would otherwise require human intelligence. Artificial intelligence technologies have evolved over several decades. Today, many countries are developing artificial intelligence in their military programs and thereby face numerous human rights challenges, particularly with regard to privacy as a fundamental right in a conflict. As it extends to cyberspace, this privacy encompasses informational privacy and data protection. Using a descriptive-analytical method, this study therefore identifies the legal and political arrangements and gaps in international humanitarian law and international human rights law for protecting the privacy of the parties to a conflict against military artificial intelligence. It concludes that, despite the shortcomings of international humanitarian and human rights rules, overriding national security in order to protect informational privacy and delegating the definition of human rights frameworks for artificial intelligence to private companies as part of their authority, alongside formal and informal legislation, can fill the legal gaps concerning the use of artificial intelligence in a conflict.
 

Keywords

Subjects

Article Title [English]

Protecting the Right to Informational Privacy against the Threats Caused by Military Artificial Intelligence

Author [English]

  • Fereshteh Banafi

PhD in Public International Law, Lecturer, Department of Law, Islamic Azad University of Bushehr, Bushehr, Iran

Abstract [English]

Artificial intelligence is the ability of a computer system to solve problems and perform tasks that would otherwise require human intelligence. Artificial intelligence technologies have evolved for decades. Today, many countries are developing artificial intelligence in their military programs. Using artificial intelligence for military purposes raises many human rights challenges, especially in the area of privacy, which is regarded as a fundamental right in a conflict. This privacy extends to cyberspace, encompassing informational privacy and data protection. Using a descriptive-analytical method, this research therefore identifies the legal and political arrangements and gaps in international humanitarian law and international human rights law for protecting the privacy of the parties to a conflict against military artificial intelligence. It concludes that, despite the shortcomings of international humanitarian law and international human rights law, overriding national security in order to protect informational privacy and delegating the definition of international human rights frameworks in the field of artificial intelligence to private companies as part of their authority, alongside formal and informal legislation, can fill the gaps in the rules governing the use of artificial intelligence in a conflict.
In recent years, the human rights community has been occupied with digital rights, and especially with the effects of artificial intelligence technology, and increasing attention has been paid to the relationship between international human rights law and the standards governing military artificial intelligence. With regard to the use of artificial intelligence, one cannot ignore the constant tension between the purpose and nature of artificial intelligence on the one hand and its use for ethical decision-making in military matters on the other, even in the presence of human control. The use of artificial intelligence in the military field holds great potential, but it may also create several challenges. For example, artificial intelligence technologies can facilitate autonomous operations, lead to more informed military decision-making, and increase the speed and scale of military operations. However, they may also be unpredictable or vulnerable in certain respects. Therefore, alongside the benefits of artificial intelligence for military industries and for lowering the cost of physical human presence, the threats arising from its use, particularly in fully autonomous weapons, and the resulting violations of informational privacy make it necessary to establish a system of responsibility and accountability that fills the legal gaps created by the use of artificial intelligence.
In this regard, the first likely danger of a military environment supervised by artificial intelligence, where humanitarian and international human rights rules remain silent, is data contamination and, as a result, the loss of digital, physical, political and communal security and the distortion of the fundamental right to human dignity. Competition among countries in using artificial intelligence to shift the balance of power in the world community has created growing concern about the erosion of rights and ethics. Several proposals can therefore be made for amending the rules of international human rights law so as to regulate military artificial intelligence during conflict. The first is overriding national security in favor of informational privacy: the mere fact that an action is taken to protect national security cannot justify a country's violation of fundamental human rights. The second is incorporating international human rights standards in the field of artificial intelligence into the statutes of private companies; empowering employees as part of corporate authority is one measure that can limit uses of artificial intelligence that fall outside the framework of human rights. The third is the promotion of the rules of international humanitarian law, whether formally or informally.
Informal legislation includes common understandings based on non-binding resolutions and declarations, guidelines and codes of uniform professional conduct, industry practices, domestic laws and policies, civil society reports and political policies, and international and transnational dialogues. In addition, the redefinition and amendment of formal human rights treaties by international institutions can bring digital rights under the rules of international human rights and humanitarian law. Although data protection and informational privacy regimes are often inapplicable because of national security exclusions, establishing informal norms and legislation in international humanitarian law can help incorporate the ethics of artificial intelligence into the contemporary laws of war. Human control, which is necessary both to comply with international humanitarian law and to satisfy ethical concerns, is a key factor and a basis for internationally agreed limits on autonomy in weapons systems. This research seeks to provide a strategy that helps the international community strengthen the rules of humanitarian law and international human rights against the threats that the military use of artificial intelligence poses to the right to informational privacy, and to hold accountable those who violate that right.

Keywords [English]

  • Artificial Intelligence
  • Informational Privacy
  • International Humanitarian Law
  • Human Rights
  • War
 
References
Ben-Shahar, Omri, “Data Pollution”, Journal of Legal Analysis, Vol. 11, (2019)
Chai, Junyi, et al., "Deep Learning in Computer Vision: A Critical Review of Emerging Techniques and Application Scenarios", Machine Learning with Applications, Vol. 6, (Elsevier, 2021).
Crootof, Rebecca, "War Torts: Accountability for Autonomous Weapons", University of Pennsylvania Law Review, vol. 164, no 6, (2016).
Directive on Privacy and Electronic Communications, Official Journal L 201, (2002).
Fjeld, Jessica, et al., "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI", Berkman Klein Center for Internet and Society at Harvard University, Research Publication No. 2020-1, (2020).
Freeze, Colin, "Fearing reprisals, Afghans rush to scrub digital presence after Taliban takeover", Globe & Mail Canada, (2021).
Ivey, Matthew, "The Ethical Midfield in Artificial Intelligence: Practical Reflections for National Security Lawyers", Georgetown Journal of Legal Ethics, Vol. 33, No. 1, (2020).
International Committee of the Red Cross, “Artificial Intelligence and machine learning in armed conflict: A human-centred approach”, International Review of the Red Cross, vol. 102, (2020).
International Council on Human Rights Policy (ICHRP), “Beyond Voluntarism: Human Rights and the Developing International Legal Obligations of Companies”, Geneva, Switzerland, (2002).
Leslie, David, et al., “Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, a Primer”, the Council of Europe, and the Alan Turing Institute, (2021).
McDermott, Helen, "Application of the International Human Rights Law Framework in Cyber Space", in Human Rights and 21st Century Challenges, (Oxford University Press, 2020).
McDonnell, Brett H., “Strategies for an Employee Role in Corporate Governance”, Wake Forest Law Review, Vol. 46, (2011).
Murray, Daragh, et al., Practitioners' guide to human rights law in armed conflict, (Oxford University Press, 2016).
Rejali, Saman, and Heiniger, Yannick, “the Role of Digital Technologies in Humanitarian Law, Policy and Action: Charting a Path Forward”, International Review of the Red Cross, vol. 102, (2020).
Shane, Scott, and Wakabayashi, Daisuke, "'The Business of War': Google Employees Protest Work for the Pentagon", New York Times, (2018).
Stevenson, Alexandra, "Facebook Admits It Was Used to Incite Violence in Myanmar", New York Times, (6 November 2018), available at: www.nytimes.com/2018/11/06/technology/myanmar-facebook.html.
Thompson, Chengeta, “Are Autonomous Weapon Systems the Subject of Article 36 of Additional Protocol I to the Geneva Conventions?”, (2014). Available at: SSRN: http://dx.doi.org/10.2139/ssrn.2755182
Tucker, Patrick, "What Google's New Contract Reveals About the Pentagon's Evolving Clouds", Defense One, (2020).
 
Opinions
Court of Justice of the European Union, Grand Chamber, Opinion 1/15 pursuant to Article 218(11) of the Treaty on the Functioning of the European Union, (2017), para. 19.
Privacy International v Secretary of State for Foreign and Commonwealth Affairs, (2020).
 
Documents
COM/2021/206 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206.
Charter of Fundamental Rights of the European Union, (2012).
Draft Agreement Between Canada and the European Union on the Transfer of Passenger Name Record data, (2014).
European Union General Data Protection Regulation, (2016).
International Committee of the Red Cross, "Artificial intelligence and machine learning in armed conflict: A human-centered approach", (June 6, 2019):
https://www.icrc.org/en/document/artificial-intelligence-and-machine-learning-armed-conflict-humancentred-approach
http://legal.un.org/ilc/documentation/english/reports/a_61_10.pdf. 
Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, art. 36, (June 8, 1977).
Report of the United Nations High Commissioner for Human Rights about the Right to Privacy in the Digital Age, U.N. Doc. A/HRC/39/29, (2018).
 
Translated References into English
Muradpiri, Hadi; Khazaei, Hamidreza, "The Role of New Information Technologies in Future Wars", Soft Power Studies Quarterly, Year 10, Issue 23, (2020). [In Persian]
Sharifi Tarzkohi, Hossein; Barmaki, Jafar, "Legal Challenges of Cyber Space Capabilities in the Light of Article 36 of the 1977 Additional Protocol I", International Law Review, No. 62, (2020). [In Persian]