International Journal on Science and Technology
E-ISSN: 2229-7677
Low-Resource Adaptive Pretraining using Knowledge-Infused Curriculum Learning
| Author(s) | Mohan Siva Krishna Konakanchi |
|---|---|
| Country | United States |
| Abstract | In the era of large language models (LLMs), pretraining on vast corpora has become the cornerstone of achieving state-of-the-art performance across diverse tasks. However, for low-resource domains such as specialized scientific fields or underrepresented languages, the scarcity of data poses significant challenges. This paper introduces a novel framework for low-resource adaptive pretraining that leverages knowledge-infused curriculum learning to systematically infuse structured domain knowledge into the pretraining process. We develop a curriculum-learning pipeline that progressively escalates task complexity while embedding external knowledge graphs and ontologies, enabling efficient adaptation with minimal data. Furthermore, we propose a trust metric-based federated learning framework to ensure integrity and accountability in distributed training across data silos, mitigating risks associated with heterogeneous data sources. Finally, we build a comprehensive framework to quantify and optimize the inherent trade-off between model explainability and performance, providing actionable insights for deployment in high-stakes environments. Through extensive experiments on benchmark low-resource datasets, our approach demonstrates superior performance gains, enhanced trustworthiness, and balanced explainability without compromising efficacy. This work advances the paradigm of resource-efficient AI, paving the way for equitable access to advanced models in constrained settings. |
| Keywords | Low-resource pretraining, Curriculum learning, Knowledge infusion, Federated learning, Trust metrics, Explainability-performance trade-off. |
| Field | Engineering |
| Published In | Volume 11, Issue 2, April-June 2020 |
| Published On | 2020-05-09 |
| DOI | https://doi.org/10.71097/IJSAT.v11.i2.9531 |
| Short DOI | https://doi.org/hbnx99 |
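The abstract's curriculum-learning pipeline, which "progressively escalates task complexity," can be illustrated with a minimal easy-to-hard data-scheduling sketch. This is a generic illustration, not the paper's method: the difficulty proxy (token count) and the linear pacing function are assumptions, and the paper's knowledge-graph-based infusion is not reproduced here.

```python
def difficulty(example: str) -> int:
    """Proxy for sample difficulty: token count (an assumption;
    the paper may score difficulty via knowledge graphs/ontologies)."""
    return len(example.split())

def pacing(epoch: int, total_epochs: int, n_examples: int) -> int:
    """Linear pacing: how much of the easiest-first corpus is
    visible at a given epoch (illustrative schedule)."""
    frac = min(1.0, (epoch + 1) / total_epochs)
    return max(1, int(frac * n_examples))

def curriculum_batches(corpus, total_epochs):
    """Yield the visible slice of the difficulty-sorted corpus per epoch."""
    ordered = sorted(corpus, key=difficulty)
    for epoch in range(total_epochs):
        yield ordered[: pacing(epoch, total_epochs, len(ordered))]

# Toy corpus: shorter strings stand in for "easier" pretraining samples.
corpus = ["a b", "a b c d e f", "a", "a b c d", "a b c"]
schedule = list(curriculum_batches(corpus, total_epochs=5))
# Epoch 0 sees only the easiest sample; the final epoch sees the full corpus.
```

Any real instantiation would replace `difficulty` with a knowledge-aware scorer and feed each epoch's slice to the pretraining loop; the pacing function (linear, root, or step-wise) is a standard design choice in curriculum learning.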
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.