International Journal on Science and Technology

E-ISSN: 2229-7677     Impact Factor: 9.88

A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal


Cost, Complexity, and Efficacy of Prompt Engineering Techniques for Large Language Models

Author(s) Milind Cherukuri
Country United States
Abstract This research investigates the impact of various prompt engineering techniques on the length, cost, complexity, and accuracy of responses from large language models (LLMs). By comparing direct prompting with zero-shot, few-shot, and chain-of-thought (CoT) methods on tasks such as GSM8K and creative writing, I analyze the trade-offs between token usage and response quality. Results show that while zero-shot CoT prompting is highly effective and cost-efficient, other methods such as Least-to-Most and Tree-of-Thought add significant length and complexity without proportional accuracy gains. Additionally, I discuss the financial implications, finding that GPT-4's pricing structure narrows the cost difference between manual/few-shot and zero-shot methods. Complexity analysis reveals that more intricate prompts often lead to convoluted outputs, complicating human review and implementation. These findings guide the selection of prompt engineering strategies to optimize both performance and resource utilization.
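The abstract compares prompt variants by their token usage and cost. A minimal sketch of that comparison is below; it is illustrative only, not code from the paper. The example question, the worked examples, the whitespace-based token estimate, and the per-1k-token price are all placeholder assumptions (real billing uses a model-specific tokenizer such as tiktoken, and actual prices vary by model).

```python
# Illustrative sketch (not from the paper): build the prompt variants the
# study compares and estimate their relative input-token cost.

QUESTION = "A farmer has 17 sheep. All but 9 run away. How many are left?"

def direct_prompt(q: str) -> str:
    """Direct prompting: the plain question, no added scaffolding."""
    return q

def zero_shot_cot_prompt(q: str) -> str:
    """Zero-shot CoT: append a reasoning-trigger phrase to the question."""
    return q + "\nLet's think step by step."

def few_shot_prompt(q: str, examples) -> str:
    """Few-shot: prepend worked examples, which inflates input tokens."""
    shots = "\n\n".join(f"Q: {eq}\nA: {ea}" for eq, ea in examples)
    return f"{shots}\n\nQ: {q}\nA:"

def approx_tokens(text: str) -> int:
    """Crude token estimate by whitespace splitting (assumption, not billing)."""
    return len(text.split())

def approx_input_cost_usd(n_tokens: int, price_per_1k: float) -> float:
    """Input-side cost at a placeholder per-1k-token price."""
    return n_tokens / 1000 * price_per_1k

EXAMPLES = [("2 + 2?", "4"), ("10 - 3?", "7")]  # hypothetical demonstrations

for name, prompt in [
    ("direct", direct_prompt(QUESTION)),
    ("zero-shot CoT", zero_shot_cot_prompt(QUESTION)),
    ("few-shot", few_shot_prompt(QUESTION, EXAMPLES)),
]:
    t = approx_tokens(prompt)
    print(f"{name}: ~{t} tokens, ~${approx_input_cost_usd(t, 0.01):.4f} input")
```

Even this toy count shows the trade-off the abstract describes: zero-shot CoT adds only a few tokens to the direct prompt, while few-shot demonstrations grow the input with every example added.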
Field Engineering
Published In Volume 16, Issue 2, April-June 2025
Published On 2025-04-29
Cite This Cost, Complexity, and Efficacy of Prompt Engineering Techniques for Large Language Models - Milind Cherukuri - IJSAT Volume 16, Issue 2, April-June 2025. DOI 10.71097/IJSAT.v16.i2.2584
DOI https://doi.org/10.71097/IJSAT.v16.i2.2584
Short DOI https://doi.org/g9g7z7
