International Journal on Science and Technology
E-ISSN: 2229-7677
Intelligent Test Automation: A Multi-Agent LLM Framework for Dynamic Test Case Generation and Validation
| Author(s) | Pragati Kumari |
|---|---|
| Country | India |
| Abstract | Automated software testing is essential in modern software development, ensuring stability and resilience. This study describes a novel technique that harnesses the capabilities of Large Language Models (LLMs) through a system of autonomous agents. These agents collaborate to dynamically generate, validate, and execute test cases based on specified requirements [1, 2]. By iteratively improving test cases through agent-to-agent communication, the system increases accuracy and effectiveness. Our implementation, built on AutoGen and Python's unittest framework, shows how this approach helps maintain high software quality. Experimental evaluations across a variety of test scenarios demonstrate the versatility and efficiency of our framework, Intelligent Test Automation (ITA), highlighting its promise for advancing automated software testing [3, 4]. |
| Keywords | Intelligent Test Automation (ITA), Large Language Models (LLMs), Multi-Agent Systems, Automated Software Testing, Test Case Generation |
| Field | Engineering |
| Published In | Volume 16, Issue 1, January-March 2025 |
| Published On | 2025-03-05 |
| DOI | https://doi.org/10.71097/IJSAT.v16.i1.2232 |
| Short DOI | https://doi.org/g869w7 |
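The abstract names AutoGen and Python's unittest as the implementation stack but does not include code. The following is a minimal, hypothetical sketch (not the paper's implementation) of how two AutoGen agents could close the generate-validate-refine loop the abstract describes: an LLM-backed agent drafts unittest cases for a stated requirement, while a code-executing proxy agent runs them and reports failures back for revision. The agent names, prompts, the `is_prime` requirement, and the `target.py` module are illustrative assumptions; the sketch assumes the classic pyautogen (AutoGen 0.2) `AssistantAgent`/`UserProxyAgent` API with an OpenAI-compatible model configuration.

```python
# Hypothetical minimal sketch of an ITA-style generate-validate-refine loop.
# Not the paper's code: agent names, prompts, and the target requirement are
# assumptions used only to illustrate the pattern described in the abstract.
import autogen

# Assumes an OpenAI-compatible model; replace the API key with your own.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

# LLM agent that drafts and iteratively repairs unittest test cases.
test_writer = autogen.AssistantAgent(
    name="test_writer",
    llm_config=llm_config,
    system_message=(
        "You write Python unittest test cases for the given requirement. "
        "Return one runnable code block; revise it whenever tests fail."
    ),
)

# Proxy agent that executes the generated code locally and reports results,
# feeding failures back to the test_writer to close the refinement loop.
validator = autogen.UserProxyAgent(
    name="validator",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "generated_tests", "use_docker": False},
)

# Example requirement; is_prime and target.py are placeholders for illustration.
requirement = (
    "Requirement: a function is_prime(n) returns True for prime integers and "
    "False otherwise. Generate unittest cases covering edge cases, then run "
    "them against the implementation in target.py."
)

validator.initiate_chat(test_writer, message=requirement)
```

In this sketch the validation step is simply local code execution by the proxy agent; the paper's framework presumably layers its own requirement-tracing and result-analysis logic on top of this basic agent-to-agent exchange.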