International Journal on Science and Technology

E-ISSN: 2229-7677     Impact Factor: 9.88


Low-Resource Adaptive Pretraining using Knowledge-Infused Curriculum Learning

Author(s) Mohan Siva Krishna Konakanchi
Country United States
Abstract In the era of large language models (LLMs), pretraining on vast corpora has become the cornerstone of achieving state-of-the-art performance across diverse tasks. However, for low-resource domains such as specialized scientific fields or underrepresented languages, the scarcity of data poses significant challenges. This paper introduces a novel framework for low-resource adaptive pretraining that leverages knowledge-infused curriculum learning to systematically infuse structured domain knowledge into the pretraining process. We develop a curriculum-learning pipeline that progressively escalates task complexity while embedding external knowledge graphs and ontologies, enabling efficient adaptation with minimal data. Furthermore, we propose a trust metric-based federated learning framework to ensure integrity and accountability in distributed training across data silos, mitigating risks associated with heterogeneous data sources. Finally, we build a comprehensive framework to quantify and optimize the inherent trade-off between model explainability and performance, providing actionable insights for deployment in high-stakes environments. Through extensive experiments on benchmark low-resource datasets, our approach demonstrates superior performance gains, enhanced trustworthiness, and balanced explainability without compromising efficacy. This work advances the paradigm of resource-efficient AI, paving the way for equitable access to advanced models in constrained settings.
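The easy-to-hard curriculum with knowledge infusion described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the difficulty scoring, the triple-prepending infusion strategy, and all names (`Example`, `infuse_knowledge`, `curriculum_batches`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Example:
    text: str
    difficulty: float  # hypothetical score, e.g. rarity of domain terms
    kg_triples: list = field(default_factory=list)  # linked knowledge-graph facts

def infuse_knowledge(ex: Example) -> str:
    """Prepend linked KG triples to the input text (one possible infusion strategy)."""
    facts = " ".join(f"[{s} | {r} | {o}]" for s, r, o in ex.kg_triples)
    return f"{facts} {ex.text}".strip()

def curriculum_batches(examples, stages=3):
    """Yield stage-wise batches of knowledge-infused text, ordered easy to hard."""
    ordered = sorted(examples, key=lambda e: e.difficulty)
    per_stage = max(1, len(ordered) // stages)
    for s in range(stages):
        # The final stage absorbs any remainder so every example is used.
        end = (s + 1) * per_stage if s < stages - 1 else len(ordered)
        yield [infuse_knowledge(e) for e in ordered[s * per_stage:end]]

if __name__ == "__main__":
    data = [
        Example("enzyme kinetics follow saturation behavior", 0.9,
                [("enzyme", "catalyzes", "reaction")]),
        Example("water boils at 100 C", 0.1, []),
        Example("proteins fold into tertiary structures", 0.5,
                [("protein", "is_a", "macromolecule")]),
    ]
    for stage, batch in enumerate(curriculum_batches(data, stages=3)):
        print(stage, batch)
```

Each stage's batch would feed a continued-pretraining step, so the model sees simple in-domain text first and knowledge-dense, harder text later.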
Keywords Low-resource pretraining, Curriculum learning, Knowledge infusion, Federated learning, Trust metrics, Explainability-performance trade-off.
Field Engineering
Published In Volume 11, Issue 2, April-June 2020
Published On 2020-05-09
DOI https://doi.org/10.71097/IJSAT.v11.i2.9531
Short DOI https://doi.org/hbnx99
