|dc.description.abstract||Existing and emerging applications of artificial intelligence in armed conflicts and other systems reliant upon war algorithms and data span diverse areas. Natural persons may increasingly depend upon these technologies in decisions and activities related to killing combatants, destroying enemy installations, detaining adversaries, protecting civilians, undertaking missions at sea, conferring legal advice, and configuring logistics.
In intergovernmental debates on autonomous weapons, a normative impasse appears to have emerged. Some countries assert that existing law suffices, while several others call for new rules. Meanwhile, the vast majority of States' efforts to address relevant systems focus on weapons, means, and methods of warfare. Partly as a result, the broad spectrum of other far-reaching applications is rarely brought into view.
One normatively grounded way to help identify and address relevant issues is to articulate pathways that States, international organizations, non-state parties to armed conflict, and others may pursue to help secure greater respect for international law. In this commentary, I elaborate on three such pathways: forming and publicly expressing positions on key legal issues, taking measures relative to their own conduct, and taking steps relative to the behavior of others. None of these pathways is sufficient in itself, and there are no doubt many others that ought to be pursued. But each of the identified tracks is arguably necessary to ensure that international law is — or becomes — fit for purpose.||en_US