IrokoBench: a new benchmark for African languages in the age of Large Language Models

dc.contributor.author: Adelani, D. I.
dc.contributor.author: Ojo, J.
dc.contributor.author: Azime, I. A.
dc.contributor.author: Zhuang, J. Y.
dc.contributor.author: Alabi, J. O.
dc.contributor.author: He, X.
dc.contributor.author: Ochieng, M.
dc.contributor.author: Hooker, S.
dc.contributor.author: Bukula, A.
dc.contributor.author: Lee, E.-S. A.
dc.contributor.author: Chukwuneke, C.
dc.contributor.author: Buzaaba, H.
dc.contributor.author: Sibanda, B.
dc.contributor.author: Kalipe, G.
dc.contributor.author: Mukiibi, J.
dc.contributor.author: Kabongo, S.
dc.contributor.author: Yuehgoh, F.
dc.contributor.author: Setaka, M.
dc.contributor.author: Ndolela, L.
dc.contributor.author: Odu, N.
dc.contributor.author: Mabuya, R.
dc.contributor.author: Muhammad, S. H.
dc.contributor.author: Osei, Salomey
dc.contributor.author: Samb, S.
dc.contributor.author: Guge, T. K.
dc.contributor.author: Sherman, T. V.
dc.contributor.author: Stenetorp, P.
dc.date.accessioned: 2026-04-20T07:03:31Z
dc.date.available: 2026-04-20T07:03:31Z
dc.date.issued: 2025
dc.date.updated: 2026-04-20T07:03:30Z
dc.description: Paper presented at the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, held in Albuquerque from April 29 to May 4, 2025
dc.description.abstract (en): Despite the widespread adoption of large language models (LLMs), their remarkable capabilities remain limited to a few high-resource languages. Additionally, many low-resource languages (e.g., African languages) are often evaluated only on basic text classification tasks due to the lack of appropriate or comprehensive benchmarks outside of high-resource languages. In this paper, we introduce IrokoBench, a human-translated benchmark dataset for 17 typologically diverse low-resource African languages covering three tasks: natural language inference (AfriXNLI), mathematical reasoning (AfriMGSM), and multi-choice knowledge-based question answering (AfriMMLU). We use IrokoBench to evaluate zero-shot, few-shot, and translate-test settings (where test sets are translated into English) across 10 open and six proprietary LLMs. Our evaluation reveals a significant performance gap between high-resource languages (such as English and French) and low-resource African languages. We also observe a significant gap between open and proprietary models, with the best-performing open model, Gemma 2 27B, reaching only 63% of the performance of the best-performing proprietary model, GPT-4o. In addition, machine-translating the test set into English before evaluation helped to close the gap for larger English-centric models such as Gemma 2 27B and LLaMa 3.1 70B. These findings suggest that more effort is needed to develop and adapt LLMs for African languages.
dc.description.sponsorship (en): This work was supported by Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada's International Development Research Centre. Additional support was provided through compute credits from Oracle and the Cohere For AI Research Grant. We extend our gratitude to Charles Riley for facilitating our connection with the Vai translator for the MGSM data translation. We are also deeply thankful to OpenAI for granting API credits through their Researcher Access API program to Masakhane, enabling the evaluation of GPT-3.5 and GPT-4 LLMs. Similarly, we appreciate Google for providing GCP credits via the Gemma 2 Academic Program, which supported the Gemini-1.5-Pro inference.
dc.identifier.citation: Adelani, D. I., Ojo, J., Azime, I. A., Zhuang, J. Y., Alabi, J. O., He, X., Ochieng, M., Hooker, S., Bukula, A., Lee, E.-S. A., Chukwuneke, C., Buzaaba, H., Sibanda, B., Kalipe, G., Mukiibi, J., Kabongo, S., Yuehgoh, F., Setaka, M., Ndolela, L., et al. (2025). IrokoBench: a new benchmark for African languages in the age of Large Language Models. Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies: Long Papers, NAACL-HLT 2025, 1, 2732-2757. https://doi.org/10.18653/V1/2025.NAACL-LONG.139
dc.identifier.doi: 10.18653/V1/2025.NAACL-LONG.139
dc.identifier.isbn: 9798891761896
dc.identifier.uri: https://hdl.handle.net/20.500.14454/5693
dc.language.iso: eng
dc.publisher: Association for Computational Linguistics (ACL)
dc.rights©2025 Association for Computational Linguistics
dc.titleIrokoBench: a new benchmark for African languages in the age of Large Language Modelsen
dc.type: conference paper
dcterms.accessRights: open access
oaire.citation.endPage: 2757
oaire.citation.startPage: 2732
oaire.citation.title: Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies: Long Papers, NAACL-HLT 2025
oaire.citation.volume: 1
oaire.licenseCondition: https://creativecommons.org/licenses/by/4.0/
oaire.version: VoR
Files (original bundle): adelani_irokobench_2025.pdf (1.15 MB, Adobe Portable Document Format)