There is no shortage of enthusiasm about what artificial intelligence can do for Africa. The potential is real and, in places, already being realised. AI systems are delivering blood products to remote clinics by drone. Algorithms are analysing the cries of newborns to detect life-threatening conditions before a doctor is available. Researchers across thirty African countries are building language tools to ensure that the billion people who speak African languages are not left out of the AI era entirely.
But enthusiasm about what technology can do has a poor track record in Africa. The continent has seen waves of technological optimism — mobile money, e-government platforms, digital health systems — some of which genuinely transformed lives and some of which produced expensive tools that nobody used. The difference between those outcomes was rarely the technology itself. It was whether the technology was designed for the people who were supposed to use it, in the context where they were supposed to use it.
AI is no different. The question is not whether AI can help. It is whether the AI being built and deployed across Africa is being designed with sufficient understanding of the people it is meant to serve, the infrastructure it will operate within, and the cultural and social context that will determine whether communities trust and adopt it.
That is a design thinking question. From where we sit at Made by People — having spent over a decade conducting field research and building products across Africa — it is the most important question in the current AI conversation on the continent.
Why Human-Centered Design Determines What Works
Human-centered design is built on a foundational premise: solutions work when they are shaped by a deep understanding of the people who will use them, not when they are built from assumptions about what those people need. This applies to every kind of product. It applies with particular force to AI.
AI systems learn from data. The data they learn from reflects the context in which it was collected — which populations were represented, which were not, which problems were measured, which were ignored. A system trained on data that does not represent the African users it will eventually serve will produce outputs that do not serve those users well. This is not a hypothetical risk. It is a documented pattern. And it has specific implications for Africa, where local datasets are often sparse, where infrastructure constraints shape how data can be collected, and where the populations most in need of better services are frequently the least represented in existing data.
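The risk of non-representative training data can be made concrete with a toy experiment. The sketch below is purely illustrative: the synthetic data, the group proportions, and the one-dimensional threshold classifier are all assumptions, not real clinical or demographic figures. A model fitted almost entirely on one population scores well on that population and markedly worse on the underrepresented one.

```python
import random

random.seed(0)

# Illustrative only: two populations whose signal for the same condition
# differs. Group A dominates the training data; group B is the
# underrepresented population the deployed system will also serve.
def sample(group, positive, n):
    # Feature means are invented for illustration, not real data.
    mean = {("A", True): 2.0, ("A", False): 0.0,
            ("B", True): 0.5, ("B", False): -1.5}[(group, positive)]
    return [(random.gauss(mean, 0.5), positive) for _ in range(n)]

# Training set: 95% group A, 5% group B.
train = (sample("A", True, 475) + sample("A", False, 475)
         + sample("B", True, 25) + sample("B", False, 25))

# Fit a single decision threshold at the midpoint of the class means
# (a deliberately simple stand-in for a trained model).
pos = [x for x, y in train if y]
neg = [x for x, y in train if not y]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(data):
    return sum((x > threshold) == y for x, y in data) / len(data)

test_a = sample("A", True, 500) + sample("A", False, 500)
test_b = sample("B", True, 500) + sample("B", False, 500)
print(f"group A accuracy: {accuracy(test_a):.2f}")  # high
print(f"group B accuracy: {accuracy(test_b):.2f}")  # markedly lower
```

The aggregate accuracy of such a system can look respectable while its performance on the underrepresented group is close to useless, which is exactly the pattern the text describes.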
The HCD response is not to avoid AI. It is to insist that the design of AI systems — the problem framing, the data selection, the interface design, and the processes through which communities are involved — follows the same principles that govern good design in any domain. Start with the user. Test in the real context. Iterate based on what you learn. Involve the community throughout.
1. Context first. AI built without understanding the infrastructure, language, literacy level, and cultural context of its users will fail in the field regardless of how sophisticated the underlying model is. Field research before development is not optional; it is what makes the difference between a tool that gets adopted and one that gets abandoned.
2. Community involvement. Communities should be active participants in shaping the AI systems that affect them, in data collection, problem framing, and solution testing. This produces better systems and builds the trust that determines whether people actually use them.
3. Ethical data practice. Training data must represent the populations the system will serve. This means actively seeking diverse, locally collected datasets and integrating local expertise into model development, not as compliance but as a core quality standard.
Three African AI Initiatives That Get the Design Right
The following examples are not simply impressive technology stories. They are illustrations of what happens when AI development is grounded in a genuine understanding of the context it will operate in. Each one demonstrates a specific HCD principle at work.
Zipline (Rwanda, Ghana, Nigeria, Kenya, Côte d'Ivoire)

What it does: Zipline is an autonomous drone delivery system that partners with African governments to deliver blood, vaccines, and essential medicines to health facilities that would otherwise struggle with reliable supply chains. The company launched its first operations in Rwanda in 2016 through a direct partnership with the Rwandan government, starting with emergency blood delivery to twenty hospitals. By 2026, Zipline had completed more than two million autonomous deliveries, was serving over 5,000 health facilities across five African countries, and had been linked to a 51% reduction in maternal deaths in Rwanda. A $150 million expansion commitment from the US State Department and up to $400 million in government utilisation fees will extend the network to approximately 15,000 facilities, potentially reaching 130 million people across the continent.

The HCD angle: Zipline's HCD credentials begin with a design decision that most technology companies would not make. Before deploying a sophisticated AI and robotics system, the team spent extensive time understanding the actual constraints of African healthcare logistics: mountainous terrain, poor road conditions, unreliable cold-chain infrastructure, and the critical time sensitivity of blood and vaccine delivery. The pay-for-performance funding model aligns incentives around genuine adoption rather than pilot deployment.
Ubenwa (Nigeria, with clinical partners across Africa, Canada, and Brazil)

What it does: Ubenwa is a Nigerian health-tech startup that uses machine learning to detect birth asphyxia from the cry of a newborn. Birth asphyxia is one of the top three causes of infant mortality globally, responsible for approximately 1.2 million newborn deaths annually. The diagnosis requires a ten-second audio recording and any smartphone. No blood work, no specialist equipment. The AI analyses the amplitude and frequency patterns in the cry sound to provide an instant assessment of whether the infant is at risk. Compared to the clinical alternative, Ubenwa is non-invasive, requires no specialist skill to operate, and costs approximately 95% less than conventional diagnostic methods. The company has raised over $2.5 million in pre-seed funding and works with clinical partners in multiple countries to refine its models and secure regulatory approvals.

The HCD angle: Ubenwa is a case study in problem-framing from context. The founder, Charles Onu, was driven by direct experience of the consequences of undetected birth asphyxia. Working with health NGOs in Nigeria, he observed how common undetected complications were in settings where specialist diagnostic equipment was unaffordable and unavailable. The design insight was to ask: what data is already present at birth, requires no equipment to collect, and could carry diagnostic information? The infant cry became the design brief. The ongoing challenge is ensuring training data avoids algorithmic bias through rigorous data practice.
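Ubenwa's actual model and feature pipeline are not public, but the kind of amplitude and frequency descriptors mentioned above can be sketched in a few lines. Everything below is an assumption for illustration: the sample rate, the synthetic stand-in waveform, and the choice of descriptors.

```python
import numpy as np

# Illustrative sketch only; not Ubenwa's real pipeline.
SAMPLE_RATE = 16_000  # Hz, an assumed smartphone recording rate
DURATION = 10         # seconds, matching the ten-second recording in the text

# Synthetic stand-in for a recorded cry: a 450 Hz tone plus one harmonic
# (newborn cry fundamentals sit roughly in the 400-600 Hz range).
t = np.arange(SAMPLE_RATE * DURATION) / SAMPLE_RATE
waveform = 0.6 * np.sin(2 * np.pi * 450 * t) + 0.3 * np.sin(2 * np.pi * 900 * t)

def cry_features(x, sr):
    """Basic amplitude and frequency descriptors of an audio signal."""
    rms = float(np.sqrt(np.mean(x ** 2)))           # overall loudness
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / sr)
    dominant = float(freqs[np.argmax(spectrum)])    # strongest frequency
    centroid = float((freqs * spectrum).sum() / spectrum.sum())
    return {"rms": rms, "dominant_hz": dominant, "centroid_hz": centroid}

features = cry_features(waveform, SAMPLE_RATE)
print(features)
```

A real system would feed richer features than these into a trained classifier, but the sketch shows why a plain smartphone recording is sufficient input: the signal the model needs is already in the audio.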
Masakhane (pan-African, researchers from 30+ countries)

What it does: Masakhane, meaning "we build together" in isiZulu, is a grassroots research organisation whose mission is to strengthen natural language processing (NLP) research in African languages, for Africans, by Africans. Africa is home to over 2,000 languages, yet none of the top global internet languages are African. Masakhane has built a community of over 2,000 researchers across 30+ countries developing open-source translation models, datasets, and NLP tools for 38+ African languages. The Masakhane African Languages Hub, launched in 2025 and supported by global partners, is funding dataset development for 50 African languages with a goal of empowering one billion Africans with locally relevant AI tools by 2029.

The HCD angle: Masakhane addresses a core HCD failure in global AI: the assumption that a small number of high-resource languages can represent the world. Its participatory research model ensures datasets are developed through community input, with native speakers leading tool development. This approach recognises that AI systems must understand language, tone, idiom, and cultural context to serve users effectively. The people with the deepest contextual knowledge are positioned as central contributors to building the tools themselves.
What Needs to Be True for AI to Fulfil Its Promise in Africa
The three initiatives above share something beyond their technical achievement: they were all built by people who understood their problem context deeply, designed around real constraints rather than assuming them away, and involved the communities they were designing for as participants rather than subjects. That combination is not the norm in AI development for Africa. Four conditions need to hold more consistently.
1. Capacity must be built locally
AI systems designed without local expertise are AI systems that misunderstand their context. The most valuable investment in African AI is not the import of finished tools but the development of African data scientists, designers, and engineers who understand both the technology and the populations it will serve. Only 3% of the global AI talent pool currently resides in Africa. Closing that gap is the precondition for everything else.
2. Communities must be involved, not just consulted
There is a meaningful difference between consulting communities about a tool that has already been designed and involving communities in shaping the tool from the beginning. The latter produces better systems — because the problem framing is more accurate, the training data is more representative, and the interface reflects how people actually interact with information in their real context. It also produces systems that communities trust and adopt, which is the precondition for any impact at all.
3. Funding must be patient and locally oriented
Short-term project funding produces tools that are built, piloted, published, and abandoned. The AI initiatives generating sustained impact — Zipline's decade-long partnership with the Rwandan government, Masakhane's multi-year community building effort — are those backed by long-term investment in the ongoing relationship between system and community. Funders need to back initiatives with a genuine commitment to local capacity building and durable impact, not just impressive proof-of-concept results.
4. Ethics cannot be an afterthought
AI systems trained on non-representative data produce outputs that systematically disadvantage the populations they most underrepresent. Ubenwa is candid about this risk in its own work. Masakhane was founded specifically to address it at a structural level. In Africa, where marginalised communities are most likely to be poorly represented in existing global datasets, this is not a minor risk. Ethical AI development means actively seeking diverse, locally collected training data, integrating local expertise in model development, and building in the evaluation processes needed to detect and correct bias over time.
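A minimal version of the evaluation practice described above is disaggregated reporting: compute metrics per group rather than only in aggregate, so under-performance on an underrepresented population is visible rather than averaged away. The group names and records below are hypothetical.

```python
from collections import defaultdict

def grouped_accuracy(records):
    """records: iterable of (group, prediction, label) tuples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        correct[group] += pred == label
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical evaluation records: the aggregate accuracy (0.75) would
# hide that the model fails half the time for rural users.
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 0, 0),
    ("rural", 1, 0), ("rural", 0, 0), ("rural", 0, 1), ("rural", 1, 1),
]
print(grouped_accuracy(records))  # → {'urban': 1.0, 'rural': 0.5}
```

Running this kind of breakdown routinely, on data collected with the affected communities, is the concrete form that "detecting and correcting bias over time" takes in practice.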
The AI initiatives that will matter most in Africa over the next decade are not those with the most sophisticated models. They are those designed with the most honest understanding of the people they are meant to serve.
The Design Thinking Imperative
AI will not solve Africa's most pressing challenges on its own. No technology does. What transforms outcomes is technology that is designed well — built with a genuine understanding of context, tested with the people who will use it, iterated based on what those tests reveal, and deployed in a way that communities trust.
That is what human-centered design brings to AI development. Not a constraint on what is technically possible, but a discipline for ensuring that what is technically possible actually gets used. Zipline proved it works by asking what the actual logistics problem looked like in Rwanda's real geography. Ubenwa found its solution by asking what data was already present at the moment of a difficult birth. Masakhane is building it from the ground up by asking who should be doing this work and for whom.
In every domain where AI holds promise for Africa — healthcare, education, language, agriculture, financial services — the question that will determine outcomes is not what the model can do. It is whether the tool was designed for the person holding it.
That question is one Made by People has been asking across Africa for over a decade. It remains the right one.
Work With Made by People
Made by People is a human-centered design and software development consultancy working across Africa. We help organisations build AI-enabled tools, digital products, and services that are designed for the people who will use them — conducting field research, running co-design processes, and building with the rigor that complex, high-stakes contexts require.
If you are building an AI-driven solution in Africa and want to ensure it is designed to actually work in the field, reach us at hello@made.ke.

