What is AI governance? An African response

Tags: Responsible AI, AI Governance, AI Ethics, Africa
In the last month, I have sat in three high-level meetings on Africa’s response to AI where a number of recurring questions were raised about AI governance:
  1. How does AI governance differ from AI regulation, AI ethics and responsible AI? 
  2. What does AI governance need to help us achieve, specifically here on the African continent? 
  3. How can it both enable the responsible acceleration of AI adoption across priority sectors and help solve key social challenges, while also protecting human rights at risk?
In all this, one thing is certain. AI cannot be left to its own devices. It needs to be governed.


So, what is AI governance?

Governance refers to a set of agreed-upon values, norms, standards and rules which apply to a particular phenomenon and determine what that phenomenon can and can’t do. Increasingly, as new forms of governance are introduced - such as outcomes-based governance - these rules of the game can also determine what a particular phenomenon should do.
In practical terms, “governance” is the umbrella term which can include laws, strategies, policies, white papers, compliance mechanisms, as well as self-governing tools, like impact assessments, ethical principles and corporate policies. “Regulation” refers specifically to legal instruments which follow from binding law. 
Governance should be just and fair, and should uphold the rule of law. In this way, governance is based on, and advances, the values that matter to us as humans: justice, fairness, democracy, and so on. In the context of AI governance, these values are set out in existing and emerging frameworks of AI ethics, such as the UNESCO Recommendation on the Ethics of AI.
Responsible AI is where AI ethics meets AI governance. It is the extension of ethical principles into formal guidance and standard setting which ensures the implementation of these principles in practice.

Should AI governance include regulation at this point?

Rigorous debates are being held about whether or not AI should be formally governed. The EU, for example, has blazed a trail by establishing and approving the EU AI Act. But should Africa be doing the same?
One concern is that regulation may stymie AI innovation, something that AI strategies and policies are seeking to encourage. However, we should be clear on where and why regulation stymies innovation. In working with groups across Africa that are looking to scale AI innovations geared toward social outcomes, it has become clear that some form of government-led guidance or approval is needed to provide surety to innovators. In some cases, this may need to be regulated, especially in high-risk areas like healthcare which are already well regulated.
Regulation does stymie innovation when the cost of compliance becomes too high for small and medium-sized businesses. Finding a balance between safeguarding rights at risk and allowing more entrepreneurs to enter the market - especially to help counteract the dominance of foreign-owned tech in Africa - will be crucial for policy-makers.
Importantly, self-governance tools alone are unlikely to be enough. First, there is not yet clear guidance or consensus on how best to monitor and respond to AI-related risks to people and environments, so we cannot expect companies to self-govern effectively without such information or standard-setting. Second, self-governance has not been adequate to date; AI-related harms are taking place, largely with impunity, and this cannot continue.


Why is AI governance so important and what does it need to do?

Governance is the single most important factor in determining whether our future with AI will bring about human flourishing or worsen global inequality. This is not to say it is the only factor influencing our shared future, but it is an immensely important one. Other factors include a sense of shared responsibility and solidarity, and human inspiration and creativity, which generate new imaginative potentials for what the future could be. But governance is established at a macro level - national, regional or international. It applies to many people, companies and institutions at once, and in perpetuity. And it is often binding, that is, it must be complied with. (An exception here is strategies and policies which, as explained below, are often the first step in establishing governance regimes and which set out national aspirations rather than binding provisions.)
What AI governance needs to do has been better explained than what AI governance should entail. AI governance needs to ensure that the opportunity to use AI to advance the collective well-being of human societies and our environments is not lost, while ensuring that the design, development and use of this technology does not harm people, communities and environments.
In the African context, this means that AI governance must help us:
  1. Equitably distribute the benefits of AI
  2. Protect and uphold human rights and human dignity, and the integrity of the natural environment, in the face of rapid AI development and adoption
  3. Develop national, regional and continental AI ecosystems which use African-centred AI to respond to African challenges
  4. Ensure African people and lands are not exploited in any stage of the AI value and supply chain, or AI lifecycle
  5. Provide for clear lines of accountability in the development and use of AI
  6. Build trust in AI technologies
What AI governance must entail depends on the particular challenges and opportunities it must address, as well as the given governance environment and ecosystem within any given jurisdiction, such as existing laws and policies, as well as oversight institutions and public perceptions.

AI Governance: A Practical Cycle 

While approaches to AI governance can take many forms, a five-step approach is set out below.



  1. Establish policy frameworks to set out executive-level commitments to respond to AI in order to protect and advance society. A government policy indicates that this issue is important to the government, that it is taking it seriously, and that it will invest public resources in doing so in the best interests of its citizens.
  2. Review, amend and augment existing laws which may be impacted by AI. This might include amending data protection laws to include clauses on automated decision-making, or whistleblowing and protected disclosure laws to allow contract employees working in platform economies to blow the whistle.
  3. Determine where significant regulatory gaps exist which cannot be addressed through legislative amendment.
  4. Roll out tools to advance the implementation of the principles, commitments and provisions set out in AI policies or related laws. This might include reporting mechanisms, or tools like UNESCO’s Ethical Impact Assessment.
  5. Continually research new approaches to AI governance, and review and evaluate the efficacy of existing approaches to advance inclusive innovation and safeguard against harms.


The Future of AI Governance

AI is not the last frontier of technological advancement. Our governance capabilities must advance together with technological developments if we are to preserve the values that matter to us as human communities.
AI governance will not necessarily follow the governance approaches we have taken before. We will have to break new ground and find new ways to address the complex challenges and possibilities of AI.
As much as we need technological innovations to help solve the intractable challenges facing human communities, we also need governance innovations. And to support this, we need research to understand what AI governance needs to do and what it needs to involve.
The global findings of the first edition of the Global Index on Responsible AI will be launched at the end of this month, to coincide with the UN AI for Good Summit. This innovative tool will allow researchers and policymakers to explore the governance innovations taking place across the world to make responsible AI a reality. 


Author: Rachel Adams

© Global Center on AI Governance 2024