Saturday, February 28, 2026

The mirage of access: Why Nigeria’s AI literacy push is a path to cognitive dependency


By Precious Ebere-Chinonso Obi

The EdTech discourse in Nigeria is trapped in a loop of predictable complaints: infrastructure gaps, power supply, and funding. Yet, as we obsess over merely connecting the “next billion” to the internet, we are dangerously overlooking a deeper, more insidious crisis: the threat of cognitive dependency fueled by generative Artificial Intelligence.

Building AI literacy is not simply about teaching our students how to use ChatGPT; it is about saving the Nigerian intellectual space from cultural and epistemological erasure.

The conventional wisdom dictates that the more Nigerians use global Large Language Models (LLMs), the better their “AI literacy.” This is a profound error.

The majority of LLMs, including those favoured in our tertiary institutions, are predominantly trained on vast corpora of Western and Northern hemispheric data.

This creates what I call the Localization Paradox: we celebrate the accessibility of a tool that, by its very nature, systematically marginalizes Nigerian context, history, and jurisprudence.

If a student uses a global LLM to research Nigerian political history or legal frameworks, the outputs are fundamentally skewed, incomplete, or filtered through a colonial-era lens. The perceived convenience of instant answers disguises the erosion of critical, localized inquiry.

We are inadvertently raising a generation whose understanding of their own world is perpetually mediated by algorithms designed elsewhere, perpetuating an epistemological colonization far more subtle and damaging than the textbook shortages of the past.

The solution, therefore, is not to expand consumption, but to demand deconstruction. True AI literacy for the Nigerian professional or student must pivot from using to interrogating. We must train our youth to be skilled bias mitigators and data auditors. Can the student identify the gaps in the LLM’s knowledge regarding the 1999 Constitution? Can they prompt-engineer the tool to prioritize data from the University of Ibadan’s archives over a random foreign blog post?

This level of intellectual rigor is the actual benchmark for 21st-century competence, not simple copy-pasting.

Furthermore, the prevailing subscription-based model for EdTech AI is financially unviable and socially divisive. In an economy battered by inflation, asking parents to pay for monthly AI access is a non-starter, creating a sharp divide where only the affluent can afford true “literacy.” The sustainable, radical path forward lies in decentralized, sovereign AI.

Nigeria must aggressively invest in and champion local, open-source LLMs trained specifically on Nigerian academic, judicial, and cultural datasets. This isn’t just a technical exercise; it’s a nation-building imperative. It provides data sovereignty, addresses the localization paradox directly, and fosters a competitive ecosystem where our developers are creating the future, not just debugging foreign software.

The technical alibi: The cost of sovereignty

The political will to pursue a sovereign AI often falters at the sight of its technical cost. The argument that “it’s too hard to build our own” is the ultimate intellectual surrender. The obstacles are not theoretical; they are concrete and demand aggressive national coordination:

1. The linguistic data desert

While English boasts trillions of digital tokens, core Nigerian languages (Hausa, Igbo, Yoruba, Pidgin) remain critically low-resource. Of Nigeria's more than 500 languages, an estimated 90% lack even basic digitized texts. Furthermore, the structural complexity of tonal languages like Yoruba, where diacritics are crucial for meaning, often breaks standard, non-localised tokenization systems.

We are not just missing data; we are missing the fundamental computational frameworks required to ingest the data we do have. Collecting, standardizing, and digitally annotating this linguistic wealth is a foundational investment that cannot be outsourced.
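The tokenization problem is concrete enough to demonstrate in a few lines. As an illustrative sketch (using the Yoruba word "ọ̀rọ̀", meaning speech, as the example), the same visible word can be encoded in different Unicode forms, so a system that never normalizes its inputs will treat identical Yoruba text as different strings:

```python
import unicodedata

# "ọ̀rọ̀" -- Yoruba for "word/speech". Each vowel carries a dot-below
# mark and a grave tone mark, both essential to the meaning.
word_nfc = unicodedata.normalize("NFC", "ọ̀rọ̀")  # precomposed where possible
word_nfd = unicodedata.normalize("NFD", "ọ̀rọ̀")  # fully decomposed marks

# Visually identical, but encoded differently:
print(len(word_nfc))         # fewer code points (precomposed ọ)
print(len(word_nfd))         # more code points (o + dot-below + grave)
print(word_nfc == word_nfd)  # False: naive string matching fails
```

A tokenizer or search index built without such normalization will silently fragment a low-resource corpus that is already scarce, which is one reason localized preprocessing cannot simply be inherited from English-centric pipelines.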

2. The computational chasm

Training a global foundation model requires tens of thousands of high-performance GPUs and millions of dollars in compute time. Nigeria currently lacks data centers capable of supporting AI training at this scale, forcing local researchers to rely on expensive, unreliable foreign cloud services.

This dependence is not just a high cost; it is a security vulnerability. True sovereignty requires sustained, subsidized access to domestic computational infrastructure, treated as a critical national utility like power or water. Until the government views a GPU farm with the same urgency as a national railway, we remain intellectually reliant on foreign subsidies.

3. Ethical and ownership complexity

The push to digitize oral and written traditions for model training immediately raises fraught ethical questions around data ownership, community consent, and compensation. Who owns the vast, newly digitized corpus of a minority language?

The developer who scraped it? The institution that funded the scanner? Or the community that generated the cultural knowledge? True sovereign AI must come bundled with robust, decentralized governance models that prevent the digital exploitation of local intellectual property, transforming data sources into equity stakeholders.

Curriculum redefinition: From consumers to codifiers

To counter cognitive dependency, AI literacy cannot be optional or limited to vocational coding schools. It must be a core component of civics, law, and history from secondary school onward.

1. Mandatory bias auditing and remediation

Students must be taught to systematically audit LLM outputs for socio-cultural bias, identifying where the model fails to represent Nigerian contexts, legal precedent, or historical narratives. This includes practical exercises in:

  • Contrarian prompting: Deliberately asking the LLM questions designed to expose its ignorance regarding low-resource languages, local governance (e.g., traditional justice systems), or obscure but critical historical figures.
  • Source attribution critique: Analyzing the geographical and institutional origin of the sources cited by grounded LLMs, and weighting the output based on its local authority.
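A classroom version of this audit can be scripted. The sketch below is a minimal, hypothetical harness: `ask_model` is a stand-in for whichever LLM API a class uses (stubbed here so the example runs offline), and the expected-terms checklist is an assumed, simplified scoring rubric, not an established methodology:

```python
# Minimal sketch of a "contrarian prompt" bias audit for the classroom.
# ask_model() is a hypothetical stub; replace it with a real LLM call.

AUDIT_PROMPTS = {
    "Summarise the 1999 Constitution's provisions on local government.":
        ["1999 Constitution", "local government", "Fourth Schedule"],
    "Describe traditional justice systems in northern Nigeria.":
        ["Sharia", "emir"],
}

def ask_model(prompt: str) -> str:
    # Stub answer standing in for a real model response.
    return "The 1999 Constitution assigns local government functions..."

def audit(prompts: dict) -> dict:
    """Score each answer by the fraction of expected local references it contains."""
    report = {}
    for prompt, expected in prompts.items():
        answer = ask_model(prompt).lower()
        hits = [term for term in expected if term.lower() in answer]
        report[prompt] = len(hits) / len(expected)
    return report

coverage = audit(AUDIT_PROMPTS)
```

Low coverage scores flag exactly the gaps the exercise is meant to expose: topics where the model's training data thins out and students must turn to primary Nigerian sources instead.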

2. The ethics of data ownership and governance

This module must move beyond abstract ethics and focus on practical data law. Students must understand that their digital footprint is a monetizable, geopolitical asset. The curriculum should cover:

  • API economics: Understanding how their use of a foreign LLM generates value (data) that is extracted and repurposed.
  • Consent and compensation frameworks: Designing simple community agreements for local data collection, turning linguistic contributions into equity or guaranteed access to the resulting model.

3. Contextual prompt engineering (for retrieval)

The goal is not to generate text, but to retrieve local truth. Students need to be experts in engineering prompts that bypass global defaults and force the model to prioritize Nigerian academic archives, judicial databases, and localized data stores.

This is the difference between asking “What is democracy?” (a universal prompt) and “What is the history of the zoning debate in the Nigerian PDP?” (a highly contextual prompt).
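The retrieval pattern behind this kind of contextual prompting can be sketched briefly. In the hypothetical example below, the archive passages and source names are invented placeholders; the point is the structure: prepend locally sourced context and instruct the model to answer only from it:

```python
# Sketch: build a retrieval-grounded prompt that constrains a model
# to Nigerian sources. The passages are placeholders; in practice they
# would be retrieved from a local archive or judicial database index.

LOCAL_PASSAGES = [
    ("University of Ibadan archives (placeholder)",
     "Zoning emerged as an informal power-rotation arrangement within the PDP ..."),
    ("Local courseware (placeholder)",
     "The north-south rotation debate intensified ahead of the 2011 primaries ..."),
]

def grounded_prompt(question: str, passages) -> str:
    """Prepend local context and forbid answers from outside it."""
    context = "\n\n".join(
        f"[Source: {src}]\n{text}" for src, text in passages
    )
    return (
        "Answer ONLY from the sources below. If they do not contain the "
        "answer, say so instead of guessing.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = grounded_prompt(
    "What is the history of the zoning debate in the Nigerian PDP?",
    LOCAL_PASSAGES,
)
```

The same template works whether the context comes from a vector search over digitized archives or a hand-curated reading list; what matters pedagogically is that students see local sources placed ahead of the model's global defaults.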

Until we shift our focus from maximizing access to global tools to prioritizing the development of local, critically-interrogated AI intelligence, our push for AI literacy will only serve to cement our status as passive intellectual consumers in the global knowledge economy. The next billion users deserve an AI that speaks their language, understands their history, and truly empowers their critical thought.

Anything less is a disservice.
