Article Type: Research Article

Author

Associate Professor, Department of Law, Ilam, Iran

Abstract

Given the remarkable advances in artificial intelligence systems and their wide-ranging effects on various aspects of life, recognizing legal personality for these systems appears to be an unavoidable necessity. The present article analyzes the experiences of different countries in this field and the related legal challenges. Using a descriptive-analytical method, the study examines legislative developments concerning civil liability arising from the operation of artificial intelligence. Important issues such as civil liability arising from contracts, data ownership, and the legal effects of using AI technologies are analyzed. The article also considers the need to update existing laws to deal with the new legal issues arising from these technologies. The findings emphasize the need to reform and update legislation so that the legal and civil liabilities of artificial intelligence can be determined more clearly. Finally, it is recommended that countries cooperate in a coordinated manner to draft international rules defining the responsibilities and legal rights of artificial intelligence and incorporate the necessary reforms into their domestic laws.

Keywords

Subjects

Article Title [English]

Legal Challenges of Legal Personality and Civil Liability of Artificial Intelligence

Author [English]

  • Parviz Bagheri

Associate Professor of Law, Ilam, Iran

Abstract [English]

With the rapid advancement of artificial intelligence (AI) and its increasing role in various sectors of society, the legal implications of AI's existence and actions have become a pressing issue. As AI systems take on more responsibilities in fields such as healthcare, finance, law, and transportation, the question of recognizing AI's legal personality and determining its civil liability is more relevant than ever. This paper explores the legal challenges surrounding the recognition of AI's legal personality and civil liability, highlighting the difficulties legal systems face in adapting to these new realities. The research uses a descriptive-analytical approach to assess the legal frameworks of several countries and analyze how AI-related legal issues are being addressed.

The concept of legal personality traditionally applies to human beings and to legal entities such as corporations. AI, however, with its rapidly evolving capabilities, challenges this understanding. The need to determine whether AI should be recognized as a legal entity capable of bearing rights and obligations has become central to discussions of its legal status. Moreover, the civil liability associated with AI actions, especially where harm is caused, presents complex questions for both legal practitioners and lawmakers. If an AI system causes damage through its actions, who should be held accountable: the developer, the operator, the manufacturer, or the AI itself?

The paper begins by examining the legal experiences of different countries in recognizing the legal personality of AI, highlighting the approaches taken by jurisdictions such as the European Union, the United States, Japan, and South Korea. These jurisdictions have developed various legal frameworks to address the question of AI's legal personality, with some granting limited legal rights and others refraining from doing so. The paper identifies the challenges they face in holding AI accountable for its actions, particularly in terms of civil liability; the inability of traditional legal systems to attribute responsibility to non-human entities has created significant legal ambiguity.

One of the central issues addressed in the paper is civil liability arising from AI actions. As AI systems become more autonomous, the risk of harm increases, particularly in areas such as autonomous vehicles, robotics, and AI-based decision-making. When these systems cause harm, determining liability becomes a complex task. For example, when an autonomous vehicle is involved in an accident, it is unclear who should bear responsibility: the manufacturer, the developer of the AI software, the vehicle owner, or the AI system itself. The paper examines how different legal systems have approached this issue, with some proposing that the manufacturer or developer should be liable and others suggesting that a new category of liability be created for AI systems.

The paper also explores data ownership as another key aspect of AI-related civil liability. AI systems often rely on vast amounts of data to make decisions, but questions about who owns this data and who is responsible for its misuse pose significant legal challenges. As AI systems process personal and sensitive data, issues of privacy and data protection come to the forefront. Legal frameworks such as the European Union's General Data Protection Regulation (GDPR) have started to address these issues, but further reforms are needed to accommodate the growing role of AI in data processing.

Furthermore, the paper discusses the need to update existing legal frameworks to reflect the challenges posed by AI. Many traditional legal systems are ill-equipped to handle the complexities introduced by autonomous and intelligent systems. Contract law, for example, which governs the relationships between parties, rests on the assumption that the contracting parties are human beings or legal entities. When AI enters the equation, this assumption no longer holds. Should AI systems be allowed to enter into contracts? If so, who should be responsible for ensuring that a contract is executed appropriately? The paper suggests that new legal provisions are required to clarify these issues and to provide guidelines for dealing with AI in the context of contracts.

In addition to legal reform, the paper emphasizes the importance of ethical considerations in the development and regulation of AI systems. AI technologies should be designed and implemented with principles of fairness, transparency, and accountability in mind. Without clear legal and ethical standards, the risks associated with AI could outweigh its potential benefits. Any legal framework addressing AI's civil liability should therefore take into account not only the legal implications but also the broader ethical concerns that arise from the deployment of AI systems.

As AI systems become more integrated into society, it is essential to establish clear legal frameworks that can address the new challenges they present. Current legal systems, designed to deal with human and corporate actors, are not sufficient to address the unique issues posed by AI. Legal reform must not only update existing laws but also create new legal structures that can accommodate autonomous systems. International cooperation will be crucial in developing globally consistent legal standards for AI, particularly as AI systems operate across borders and raise complex, multi-jurisdictional issues.

The paper concludes by advocating a comprehensive and forward-looking approach to legal reform. It argues that recognizing AI as a legal entity capable of bearing rights and responsibilities is crucial for addressing the civil liability that arises from its actions. This recognition, however, must be coupled with legal reforms that clarify who is responsible for AI's actions and ensure that those harmed by AI systems have access to legal remedies. As AI continues to evolve, the legal frameworks that govern its use must evolve as well; in doing so, the law can maximize the benefits of AI while minimizing the risks associated with its use.

Keywords [English]

  • Legal personality of AI
  • Civil liability
  • Legal frameworks
  • Legal challenges
  • AI regulation
  • Accountability
  • Legal reform
  • Ethics in AI