This is a talk I gave at DevFest Mt. Kenya in November 2025.

Chris Achinga at DevFest Mt. Kenya Region speaking on AI ethics in Africa

AI Ethics in Africa: Building for Communities, Not Just Corporations — DevFest Mt. Kenya Region

What Do I Feel?

Ever seen a really good ad, maybe for a music video that looks amazing, or an app that helps you do some cool shit, and then when you click the link you get that annoying message: “Not available in your country”? That’s exactly how I feel about most of these AI things that are happening.

A meme about geo-fenced videos on YouTube

Artificial intelligence is having a moment in Africa. Every week/month/day there’s a new startup, a new “AI for Good” pilot, a new deck promising “Africa’s leapfrog moment” with enough buzzwords to power a small data center. And yet, under the hype, there’s a real tension: are we building AI that serves African communities, or are we just the demo environment for other people’s products?

A tech-themed map of Africa

What Are These AI Ethics? In Africa?

AI has huge potential for Africa in healthcare, agriculture, and finance. Yet most conversations centre on Global North corporations and their bottom line. This talk flips the script toward Community-First AI.

AI can drive Agenda 2063 and the SDGs, unlocking an estimated $1.5 trillion economic impact. But this becomes possible only when the ethical tension between corporate profit and equitable, community-driven development is addressed.

https://au.int/en/agenda2063/overview


The Promise Sold to Africa

Artificial intelligence is sold to Africa as a tool for:

  • economic transformation
  • smarter farming
  • better schools
  • faster health diagnostics
  • donor proposals

But Africa risks becoming a testing ground for systems designed elsewhere and trained on data that doesn’t represent us.

Terms like digital colonialism and algorithmic colonization describe how African data, labour, and culture are extracted to fuel global AI systems with little benefit returned.

Less than 1% of global AI research funding reaches African institutions, while Kenyan content moderators cleaning traumatic training data are paid under $2/hour.

“AI ethics in Africa has to be grounded in our lived realities — not imported templates.”
— Dhesen Ramsamy


Why This Matters

Data Bias

Global AI systems are trained on mostly Western datasets. They may identify a poodle in Paris perfectly, but fail to identify cassava leaf disease in Lusaka.

Only 2.8% of computer-vision training data includes African faces.

Ubuntu: Ethics Beyond the Individual

Ubuntu emphasizes community, relational accountability, and shared dignity.

Western ethics focus on individual harm.
Ubuntu says: if an AI system harms one person, the entire community’s dignity is affected.

High-Stakes Reality

AI systems determine access to:

  • loans
  • jobs
  • healthcare
  • government aid

A biased model here doesn’t misclassify animals — it misclassifies people.

Cultural and Structural Realities

  • Over 1,500 languages
  • Communal ownership traditions
  • Limited regulatory capacity
  • Infrastructure gaps (electricity, broadband)

African AI ethics must begin with justice, dignity, and sovereignty, not only privacy and fairness.


What Global AI Companies Get Wrong

The Infrastructure Paradox

Big Tech offers “cloud-native, always-on” solutions while:

  • 600M+ lack reliable electricity
  • 300M+ lack broadband

A high-tech tool becomes a digital brick if it cannot be powered.

Cultural Disconnect

Western-trained models interpret African faces, languages, and behaviors through foreign lenses:

  • Facial recognition misidentifies Black faces 10–100× more often
  • Credit algorithms classify informal earners as high risk
  • Chatbots respond in English legalese to rural users

Linguistic Exclusion

African faces appear in only 2.8% of computer-vision training data, and African languages fare even worse: the text and speech datasets are smaller still.

Projects improving this include Masakhane (grassroots NLP research across African languages) and Mozilla Common Voice (crowdsourced speech data in languages like Kiswahili).

Invisible Labour

AI still runs on “digital sweatshops”:

  • Underpaid data labelers
  • Traumatized content moderators
  • No mental health support
  • No recognition or equity

Example: LLM content moderators in Kenya paid < $2/hour to review traumatic material.


Building Ethical, African-Centered AI With Google Tools

Gemini APIs

Principle: Localization & cultural relevance
Goal: Multilingual utility

  1. Build contextual chatbots in underserved African languages.
  2. Use multimodal capabilities for local tasks (crop disease detection, pest identification).
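
Here is a minimal sketch of both ideas using the google-generativeai Python SDK. The model name, API key handling, prompts, and image path are assumptions for illustration, not the only way to do it:

```python
# pip install google-generativeai pillow
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

# 1. A contextual chatbot that answers in Kiswahili.
chat = model.start_chat()
reply = chat.send_message(
    "Jibu kwa Kiswahili: ninawezaje kuhifadhi mahindi yangu baada ya mavuno?"
)
print(reply.text)

# 2. Multimodal: flag possible crop disease from a leaf photo.
leaf = Image.open("cassava_leaf.jpg")  # hypothetical local photo
diagnosis = model.generate_content([
    leaf,
    "Describe any visible leaf disease symptoms and suggest practical next "
    "steps for a smallholder farmer. Answer in simple Kiswahili.",
])
print(diagnosis.text)
```

The point isn’t the model call; it’s that the prompt, the language, and the task are chosen by and for the community using it.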

Dialogflow

Principle: Inclusion & accessibility
Goal: Build for low-bandwidth communities

  1. Create SMS/voice agents for users without smartphones or broadband.
  2. Build multilingual health triage systems via basic phones.
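
A minimal sketch of the SMS idea with the Dialogflow ES Python client. The project ID, the SMS gateway, and the Kiswahili language code are assumptions; check Dialogflow’s supported-language list for your agent:

```python
# pip install google-cloud-dialogflow
from google.cloud import dialogflow

def reply_to_sms(project_id: str, phone_number: str, sms_text: str,
                 language_code: str = "sw") -> str:
    """Relay an incoming SMS to a Dialogflow agent and return its reply.

    Using the sender's phone number as the session ID keeps conversational
    context across messages, so a basic phone gets a stateful agent.
    """
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, phone_number)
    text_input = dialogflow.TextInput(text=sms_text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# The returned string goes back out through whatever SMS gateway you use
# (e.g. Africa's Talking or Twilio); that integration is left out here.
```

No smartphone, no broadband, no app install: the heavy lifting happens server-side and the user only needs a network signal.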

TensorFlow

Principle: Data sovereignty & bias mitigation
Goal: Contextual, accountable AI

  1. Train small, local models with ethically sourced datasets.
  2. Deploy via TensorFlow Lite for offline, edge-based operation.
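
A minimal sketch under those assumptions: a small image classifier trained on a locally collected, consented dataset, then converted with TensorFlow Lite for offline use on a phone or Raspberry Pi. The directory layout and class labels are hypothetical:

```python
# pip install tensorflow
import tensorflow as tf

# Hypothetical, community-owned dataset: one subfolder per disease class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/cassava_leaves", image_size=(160, 160), batch_size=32)
num_classes = len(train_ds.class_names)

# Keep the model deliberately small so it fits edge devices.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(160, 160, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Convert to TensorFlow Lite for offline, on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize to shrink the file
with open("cassava_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```

Because the data never has to leave the machine it was collected on, and the .tflite file runs without connectivity, both the sovereignty and the infrastructure concerns above are addressed at the design level.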

Vertex AI

Principle: Governance & accountability
Goal: Ethical deployment and ongoing oversight

  1. Use Explainable AI and bias detection before launch.
  2. Manage MLOps pipelines for monitoring drift and enforcing human oversight.
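
A minimal sketch of the oversight side, assuming a model already deployed to a Vertex AI endpoint with an explanation spec configured. The project, region, endpoint ID, and feature names are placeholders:

```python
# pip install google-cloud-aiplatform
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="europe-west1")  # placeholders

# Hypothetical endpoint serving a credit-scoring model that was deployed
# with an explanation spec (e.g. sampled Shapley attributions).
endpoint = aiplatform.Endpoint("1234567890")

# Explainable AI: which features drove this decision?
applicant = {"monthly_income": 18000, "sector": "informal", "age": 29}
result = endpoint.explain(instances=[applicant])
for attribution in result.explanations[0].attributions:
    print(attribution.feature_attributions)

# Continuous oversight (drift, skew) would be configured separately with
# aiplatform.ModelDeploymentMonitoringJob, alerting a human reviewer when
# live traffic stops resembling the training data.
```

If, say, informal-sector applicants are consistently penalized, the attributions make that visible before the harm scales, which is exactly the kind of accountability a community-first deployment needs.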