dc.contributor.author: Munyao, Charles
dc.contributor.author: Ndia, John G.
dc.date.accessioned: 2025-09-25T12:36:47Z
dc.date.available: 2025-09-25T12:36:47Z
dc.date.issued: 2025
dc.identifier.issn: 2579-003X
dc.identifier.uri: 10.32604/jai.2025.069226
dc.identifier.uri: http://repository.mut.ac.ke:8080/xmlui/handle/123456789/6659
dc.description.abstract: The natural language processing (NLP) domain has witnessed significant advancements with the emergence of transformer-based models, which have reshaped the text understanding and generation landscape. While their capabilities are well recognized, there remains a limited systematic synthesis of how these models perform across tasks, scale efficiently, adapt to domains, and address ethical challenges. Therefore, the aim of this paper was to analyze the performance of transformer-based models across various NLP tasks, their scalability, domain adaptation, and the ethical implications of such models. This meta-analysis synthesizes findings from 25 peer-reviewed studies on NLP transformer-based models, adhering to the PRISMA framework. Relevant papers were sourced from electronic databases, including IEEE Xplore, Springer, ACM Digital Library, Elsevier, PubMed, and Google Scholar. The findings highlight the superior performance of transformers over conventional approaches, attributed to self-attention mechanisms and pre-trained language representations. Despite these advantages, challenges such as high computational costs, data bias, and hallucination persist. The study provides new perspectives by underscoring the necessity for future research to optimize transformer architectures for efficiency, address ethical AI concerns, and enhance generalization across languages. This paper contributes valuable insights into the current trends, limitations, and potential improvements in transformer-based models for NLP. [en_US]
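For context, the self-attention mechanism credited in the abstract is conventionally the scaled dot-product attention of Vaswani et al. (2017); the formula below is standard background rather than a result reported in this record:

\[ \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V \]

where Q, K, and V are the query, key, and value matrices and d_k is the key dimension.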
dc.language.iso: en [en_US]
dc.publisher: Journal on Artificial Intelligence [en_US]
dc.subject: Natural language processing; transformers; pretrained language representations; self-attention mechanisms; ethical AI [en_US]
dc.title: Natural Language Processing with Transformer-Based Models: A Meta-Analysis [en_US]
dc.type: Article [en_US]

