International Journal on Science and Technology
E-ISSN: 2229-7677
LIABILITY FOR HARM CAUSED BY AUTONOMOUS AI SYSTEMS
| Author(s) | Ms. PURNIMA GAUTAM, Ms. SOMYA SINGH |
|---|---|
| Country | India |
| Abstract | "Artificial intelligence is not inherently good or bad. It is a tool. The responsibility lies with us, the humans who build and use it." — Tim O'Reilly. AI has emerged as one of the most powerful and widely used technologies of our era. Its presence is visible in almost every field, reshaping industries, governance, law, and society, and it has been integrated into critical domains such as healthcare, education, transportation, criminal justice, and national security. It has undoubtedly created numerous opportunities for innovation, progress, and development. But every coin has two sides: at the same time, AI has raised profound legal, ethical, and policy challenges that have become pressing issues today. From a legal perspective, liability for damage caused by AI is a foundational concern that demands immediate attention. When AI systems such as self-driving cars or medical diagnostic tools cause harm, it is often unclear who should be held responsible: the developer, the manufacturer, the user, or the AI system itself. In many cases, ascertaining liability becomes next to impossible. Data protection and privacy concerns compound the problem, and although policymakers are attempting to create safeguards, AI's capabilities often outpace existing regulations. Ethical issues arise when AI is biased or partial: what if its decisions are shaped by certain factors or prejudices? Such biases can disproportionately disadvantage marginalized groups, undermining the principle of equality before the law. Related issues include transparency and explainability; the lack of interpretability poses serious risks in criminal sentencing, healthcare diagnostics, and financial decision-making. Autonomy and human dignity are also at risk when AI begins to replace or significantly influence human policymaking and administrative functions. AI further threatens to manipulate public opinion, spread misinformation, and cause various other harms. Fairness, accountability, transparency, and respect for human dignity must be safeguarded. At both the national and international levels, countries and organizations are working to create effective and efficient policies and frameworks that emphasize transparency, safety, and accountability. In India, the NITI Aayog's National Strategy for Artificial Intelligence (2018) and subsequent initiatives emphasize the development of AI for social good, particularly in healthcare, agriculture, and education. India's Digital Personal Data Protection Act, 2023, further aims to regulate how data is collected and processed in AI systems. The United States has adopted a more flexible, innovation-driven approach, exemplified by the Blueprint for an AI Bill of Rights (2022), which outlines principles such as safe and effective systems. The European Union has taken a ground-breaking step with its AI Act (2024), which adopts a risk-based approach, categorizing AI systems into prohibited, high-risk, limited-risk, and minimal-risk tiers. Artificial intelligence presents both an unprecedented opportunity and a significant challenge for humanity. Legal systems will have to adapt to redefine liability, intellectual property rights, and criminal responsibility in a world of smart machines. The ethical use of AI demands frameworks that preserve dignity, autonomy, and equality, and policies at the national and international levels will need to balance the promotion of innovation against the protection of fundamental rights. |
| Keywords | Artificial Intelligence (AI), legal liability, intellectual property rights (IPR), data protection and privacy, algorithmic bias, transparency and explainability, human dignity and autonomy, predictive policing, ethical governance, AI regulation, EU AI Act 2024, AI Bill of Rights, Digital Personal Data Protection Act 2023, global AI governance, responsible innovation |
| Published In | Volume 16, Issue 4, October-December 2025 |
| Published On | 2025-11-03 |
| DOI | https://doi.org/10.71097/IJSAT.v16.i4.9209 |
| Short DOI | https://doi.org/g99qk3 |
This paper is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License; all rights belong to the respective authors/researchers.