Unlocking Human-Centred AI: Building Inclusive Futures Through Community Engagement
In the realm of Artificial Intelligence, there’s a glaring truth: AI doesn’t understand the intricacies of human experience or the nuances of social context. So how do we harness AI’s potential to truly benefit humanity? The answer lies in tapping into our unique human ability to contextualize information, derive meaning from it, and envision a better world. This approach forms the foundation of researching human-centred technologies within communities.
While the importance of consulting people and communities may seem obvious, it often takes a backseat in AI-driven solutions to social challenges. Decision-making processes tend to prioritize technical models over insights from those directly impacted, perpetuating existing inequalities. Joy Buolamwini’s poignant reminder in ‘Unmasking AI’ that AI cannot solve deeply ingrained societal issues underscores the necessity of addressing structural realities, particularly in African contexts often overlooked in AI discussions. By recognizing and addressing these challenges, researchers, policymakers and developers can pave the way for a more inclusive and equitable future.
At the Global Centre on AI Governance, the following four points guide our efforts to produce knowledge around human-centred technologies.
Recognising bias beyond the model
While technologies are frequently accused of perpetuating bias, it’s important to acknowledge that bias is an inherent aspect of human nature. Consequently, it finds its way into any technologies designed by and for humans. Recognising that technology can never be neutral is essential for accurately assessing both its capabilities and potential harms. This recognition lies at the heart of designing human-centred technologies, highlighting the critical importance of tailoring design choices to specific socio-cultural contexts.
Working with communities to ensure technologies meet their requirements can take various forms; it is not confined to physical locations like small villages but extends to the digital search for information and contacts deemed useful. However, disparities in citation, such as favoring resources from the ‘Global North’ or prioritizing the work of men over women, perpetuate biases that influence design decisions. These biases can result in technologies that benefit some while disadvantaging others, highlighting the importance of addressing systemic inequalities in the design process.
Extending “the table”
To truly label technologies as human-centric, it’s imperative to ensure that underserved groups are not just represented but actively participate in their development. Broadening discussions beyond traditionally recognized voices at the decision-making “table” is essential.
This metaphorical table varies in shape and context, whether it involves academics, investors, engineers, politicians, civil servants or others, but it is quintessentially a space in which decisions are informed and made. Expanding its reach involves embracing discomfort, including grappling with diverse languages and jargon, to delve deeper into issues than standard protocols typically demand.
People tend to have an understanding of technologies or a “sense” of them. They also have a vested interest in understanding how these technologies might impact their lives. Yet, technology is often intentionally portrayed as impossible for non-experts to comprehend, leaving decision-makers in the comfortable position of occupying exclusive spaces without being profoundly challenged to justify their interests and weigh them against those of the majority. What reinforces the exclusivity of these spaces, as well as the current informational infrastructures and related power imbalances in place, is the natural “feel” of technologies and algorithms (Ruckenstein 2023:198).
It is important to note that “invitations” to participate can also serve as a democratic guise. While it is relatively easy to invite interest groups to discussions, their participation is contingent on a certain level of critical digital literacy, that is, an understanding not only of how technologies work but also of their potential social impact.
Learning from ethnographic principles
Although not all research can adhere to the traditional ethnographic approach, which necessitates prolonged immersive research periods, there are aspects of this methodology that can be incorporated into shorter research formats:
AI’s utility relies on understanding people’s lived experiences, day-to-day practices and knowledge-making. Consider a scenario in which a healthcare professional is tasked with incorporating AI technology into their practice. To successfully integrate this technology, they must align its use with their understanding of the structural framework within which they operate, including the dynamics of relationships and informational flows. It is crucial to illuminate these insights to counterbalance a purely technology-focused mindset. This includes capturing local definitions around technologies, as definitions hold power.
We realise that great responsibility lies in portraying people and situations through data. Data certainly do not capture these experiences in full – rather, data are nuggets of diverse and layered realities that cannot be neatly packaged. Yet, despite their limitations, we rely on data to provide an empirical foundation for decision-making rather than relying solely on speculation.
Reflexivity, an essential principle in ethnography, entails critically examining how one's own background and perspective influence one's research. This process involves challenging the assumptions one holds, shaped by gender, upbringing, education, age and ability. We prioritize reflexivity not only in our own work but also in evaluating research and initiatives across the continent.
From “research subjects” to co-creators
There are different ways of involving those impacted by technologies in the research process, ranging from a more extractive approach to a more participatory one centred on fostering sustainable relationships of exchange. The choice between these methods must be made deliberately, with careful consideration:
User-centred research offers valuable, potentially transformative insights into technology usage. However, it can also become a superficial exercise: big investors may focus more on operational aspects of the technology itself, such as brand perception or eye-tracking metrics, than on addressing inclusion barriers or anticipating risks for marginalized groups. User-centred research is thus only meaningful when it is designed to solicit opinions that might challenge existing assumptions and business models.
Participatory design seeks to understand people’s needs, preferences and challenges. It is a research approach that involves different stakeholders in co-creating solutions, aiming to ensure that technologies are tailored to meet the specific needs of users and are culturally and contextually appropriate. Drawing on this approach involves active choices about who is invited and enabled to participate. Women who cannot afford childcare, for example, may not have their voices heard unless study designs account for this barrier.
Community-based research acknowledges that developing a technology everyone appreciates is unrealistic. Instead, it aims to ensure that technologies benefit those who need them and that those without access do not suffer further exclusion.