Bridging the AI Divide: Governance and Collaboration in Responsible AI in the Military Domain
Dr Rachel Adams, CEO of the Global Center on AI Governance, spoke on the 3rd Plenary Panel at the Responsible AI in the Military Domain (REAIM) Summit 2024, held in Seoul, South Korea. The Summit brought together a diverse set of stakeholders from around the world, including representatives of many governments. This short blog summarizes some of the key points from her talk.
As artificial intelligence (AI) becomes increasingly integrated into military operations worldwide, ensuring equitable and secure use of these technologies is crucial. The challenge is particularly pressing for nations at varying levels of AI integration and security environments. To address this, a collaborative approach is necessary to establish effective governance of AI in the military domain. Here’s a look at some of the key considerations.
Addressing Inequity in Access to Technology
One of the foremost challenges is ensuring equity in access to AI technology and knowledge. Currently, there is a significant disparity between countries with advanced AI capabilities and those without. This imbalance risks creating an AI arms race that disadvantages less developed nations, including many African countries, small island states, and landlocked low-income countries.
Access to Compute Resources: Promoting access to AI capabilities is essential, but simply increasing compute resources in less advanced countries may not be enough. Even many European countries lag behind the US and China in compute capabilities, and donated compute resources alone are unlikely to bridge the gap significantly. The scale and sophistication of resources in advanced nations set a high benchmark that donated resources alone may not meet.
Adapting AI Technologies: Non-AI-producing states might not need to develop foundational models from scratch, instead adapting existing technologies for local contexts. However, many current foundational models exhibit biases when applied in non-Western contexts, necessitating significant adjustments to avoid harm. These adjustments require skills and resources that are in short supply. In military contexts, such biases can lead to serious adverse impacts, such as escalating conflicts or marginalizing already vulnerable communities.
Data Sharing and Security: Sharing AI technologies and data presents unique risks and challenges in conflict-prone regions like Africa. Due diligence and legal safeguards are necessary to prevent misuse and avoid escalating conflicts. This requires a careful balance between promoting technological access and ensuring national security.
Dr Rachel Adams speaking at the REAIM Global Summit in Seoul, South Korea.
Engaging Diverse Stakeholders
Traditionally, the development of military governance protocols has been confined to governmental and military circles. However, AI introduces new dynamics that necessitate broader engagement.
Public Perception and Trust: In many African countries, widespread concern about AI centers on its impact on jobs rather than on military applications. Yet distrust in military institutions may well exacerbate concerns about AI’s role in warfare if AI technologies are adopted. Building public trust and accountability in military operations is essential to addressing these concerns.
Involving Affected Communities: The impact of military AI on local communities, especially marginalized groups, must be considered. For instance, the use of AI in managing migration and asylum seekers is a gray zone often left out of governance protocols, and it highlights the need for inclusive governance that protects vulnerable populations.
Representation and Accountability: Increasing female representation in military contexts and involving human rights organizations in governance discussions are crucial steps. These measures bring diverse perspectives into the process and strengthen accountability.
The Broader Context and the AI Divide
The rapid development and use of AI also carries broader threats that pose security risks, whether directly or indirectly. These include disinformation, job displacement, and growing global inequality between AI-rich and AI-poor countries. Addressing this divide requires international collaboration and support for less advanced nations, both to ensure they are not left behind in the AI race and, crucially, to promote appropriate and proportionate AI use.
The Role of the African Union
The African Union (AU) has recently published an AI Continental Strategy that emphasizes the importance of a peaceful and prosperous continent (African Union, 2024). The strategy acknowledges the risks of AI misuse in military contexts, an important starting point for regional approaches to responsible AI governance in the military domain. Building on the AU’s Continental Strategy on AI, several concrete steps can be taken:
Establish a Research and Policy Center on AI and Peace: Create a center dedicated to researching and formulating policies on AI’s role in peace and security on the Continent. This center should engage various stakeholders, including human rights organizations, to develop comprehensive governance approaches.
Promote Regional Collaboration: Encourage regional bodies like the African Union to develop specific policies for the use of AI in the military domain and to collaborate with global institutions. This regional approach can address localized needs while contributing to broader global governance efforts.
Foster International Partnerships: Build partnerships between AI-rich and AI-poor nations to facilitate technology transfer, training, and capacity building. These partnerships should focus on closing the AI divide and meeting local needs.
Author: Rachel Adams