Fostering Responsible AI in the Security and Defence Domain
At a specialised event of the United Nations Institute for Disarmament Research (UNIDIR), held as part of the UN General Assembly's First Committee meeting in New York, Dr Samuel Segun, Senior Researcher at the Global Center on AI Governance (GCG), participated in a round-table discussion on AI ethics in the security and military domain.
The event also saw the launch of the Draft Guidelines for the Development of a National Strategy on AI in Security and Defence. Dr Segun shared insights on how best to ensure that UNIDIR's guidance documents on developing National AI Strategies for Defence and Security reflect a robust ethical perspective.

In this abridged article, we share the key insights Dr Segun discussed, focusing on three areas: (i) accounting for the potential militarisation and abuse of open-source AI models, (ii) the human rights risks this poses, and (iii) the need for multi-stakeholder engagement.
Militarisation and Abuse of Open-Source AI Models
The militarisation and abuse of open-source AI models present a growing concern, particularly where AI technologies built for civilian purposes are repurposed for security and military applications. Open-source models are currently vulnerable to exploitation by state and non-state actors, including violent extremists and terrorist groups. This risk makes it necessary to build open-source models with integrated trust and safety tools, guardrails, and cybersecurity evaluations. The potential for militarisation or abuse is compounded by the difficulty of exhaustively testing the robustness and safety of frontier and foundation models, given their generative nature and varying responses to prompts. Current adversarial checks and evaluations face limitations, which has prompted calls for multiple red teams, AI-based testing, and greater industry contributions to red-teaming frameworks.
Human Rights Risks of Open-Source AI Models
These human rights risks are closely tied to the broader implications of AI militarisation. The International Human Rights Framework should serve as guidance for mitigating abuse, and legal reviews of new weapons are needed, since AI deployed in security and defence contexts poses serious human rights risks. The growing investment in military AI by economically advanced countries creates concerning disparities in military AI capabilities among States, potentially leading to power imbalances that could weaken human rights protections. The use of AI in policing raises further human rights considerations that require careful examination.

Multi-stakeholder Engagement for Sustainable AI
Multi-stakeholder engagement emerges as a crucial approach to addressing the challenges posed by the use of AI in security and defence. There is a need for a platform for inclusive dialogue that brings together diverse experts, including engineers, lawyers, ethicists, defence and security chiefs, civil society, and standardisation bodies. Such a platform is essential for discussing shared responsibility and for ensuring alignment with international law and with legal and ethical standards. UNIDIR's Roundtable for AI, Security and Ethics (RAISE), run in partnership with Microsoft, exemplifies this approach: it aims to guide policy conversations, build a knowledge base of good practices, and foster trust-building exercises among nations.

The participation of the private sector and academia should also increase, as they currently lead much of the investment in and development of AI. Their involvement is crucial for turning academic discourse into practical conversations and for developing effective national AI strategies for security and defence. This multi-stakeholder approach is particularly important for developing a common language between state and non-state actors and for addressing the tension between non-binding guidelines and binding rules.
Final Thoughts
The rapid advancement and potential militarisation of open-source AI models present significant challenges spanning technical vulnerabilities, human rights concerns, and governance gaps, particularly given the disparities in AI capabilities between nations and the risk of exploitation by malicious actors. Addressing these challenges requires a coordinated multi-stakeholder approach that brings together governments, industry leaders, academics, and civil society to develop robust safety measures, ethical frameworks, and regulatory standards capable of balancing innovation with security while protecting human rights.
Author: Dr Samuel Segun