
International law and the problem of “legitimacy of purpose” in the context of the use of artificial intelligence in armed conflict
Abstract: This article examines the military applications of artificial intelligence (AI), analysing their current state and their compliance with international humanitarian law (IHL) using dialectical and comparative-legal methods. The study assesses the legitimacy of AI's use for military purposes, focusing on lethal autonomous weapons systems and their legal regulation across jurisdictions. It addresses the ethical dilemmas involved, including the difficulty of distinguishing legitimate military targets from civilians and the need for new legal frameworks. The findings underscore the essential role of IHL principles and advocate improved legislative frameworks to ensure that AI systems comply with humanitarian standards and to reduce the dangers they pose in armed conflict.
Keywords: military target – international humanitarian law – autonomous weapons systems – combatant – lethal weapons systems
DOI: 10.58866/RNLG9337