Designing International Law and Ethics into Military AI (DILEMA)
To leverage the potential benefits of AI technologies and human-machine partnerships in the military while abiding by the rule of law and ethical values, it is essential that technologies developed to assist in decision-making do not in practice substitute for human decisions and actions. How can we ensure that military AI technologies support, but never replace, critical judgement by human soldiers and thereby remain under human control? A team of three researchers will work in dialogue with each other and with consortium partners to address the ethical, legal, and technical dimensions of this question.
First, as a preliminary enquiry, the project will examine the foundational notion of human agency, in order to unpack why human control over military technologies must be guaranteed. Second, it will identify where the role of human agents must be maintained, in particular to ensure legal compliance and accountability, mapping out which forms and degrees of human control and supervision should be exercised at which stages and over which categories of military functions and activities. Third, it will analyse how to ensure, at the technical level, that military technologies are designed and deployed within the ethical and legal boundaries identified.
Throughout the project, research findings will provide concrete input for the policy and regulation of military technologies involving AI. In particular, the research team will translate its results into policy recommendations for national and international institutions, as well as technical standards and protocols for compliance testing and regulation.