International Journal on Science and Technology
E-ISSN: 2229-7677
Deepfake Video Detection using CNN based Architectures and Vision Transformer Model
| Author(s) | Ms. Sumedha Arya |
|---|---|
| Country | India |
| Abstract | Deepfake videos are becoming so advanced and realistic that detecting them is increasingly challenging, which raises serious concerns about privacy and security on digital platforms. Current techniques perform well in closed environments but struggle with deepfake detection in real-world scenarios. In this work, we detect deepfake videos by first breaking each video into individual image frames and then analyzing those frames with deep learning models. We apply transfer learning using several CNN-based pretrained models, including VGG, ResNet, DenseNet, MobileNet, and EfficientNet, along with a Vision Transformer (ViT). Our results show that all models perform well, with the CNN-based models achieving accuracies between 92.56% and 97.16%. The Vision Transformer performs best overall, reaching 99.00% accuracy by capturing both global patterns and small manipulation details in deepfake videos. Overall, the study shows that transfer learning is a strong and reliable approach for detecting deepfake videos even when the dataset is small. |
| Keywords | Deepfake Detection, CNN, Vision Transformer, Transfer Learning, Video Frame Analysis |
| Published In | Volume 17, Issue 1, January-March 2026 |
| Published On | 2026-02-11 |
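The abstract describes a two-stage pipeline: videos are split into individual frames, and a pretrained backbone is fine-tuned on those frames via transfer learning. The sketch below illustrates that general idea; it is not the authors' released code. It assumes a PyTorch/torchvision and OpenCV stack, uses ResNet-50 as a stand-in for the CNN backbones named in the abstract, and the frame-sampling rate, input size, and hyperparameters are illustrative assumptions.

```python
# Minimal illustrative sketch (not the paper's implementation): extract frames from a
# video with OpenCV, then fine-tune a pretrained ResNet-50 as a binary real/fake
# classifier via transfer learning. Paths and hyperparameters are assumptions.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

def extract_frames(video_path, every_n=10, size=(224, 224)):
    """Break a video into individual image frames, keeping every n-th frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
            frames.append(cv2.resize(frame, size))
        idx += 1
    cap.release()
    return frames

# Transfer learning: reuse ImageNet features and replace the classifier head
# with a single logit for real-vs-fake prediction.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                      # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 1)    # new trainable classification head

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(frames, label):
    """One optimization step on the frames of a single video (label: 1 = fake, 0 = real)."""
    batch = torch.stack([preprocess(f) for f in frames])
    targets = torch.full((batch.size(0), 1), float(label))
    optimizer.zero_grad()
    loss = criterion(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the same assumptions, a Vision Transformer backbone (for example, torchvision's `vit_b_16`) could be slotted in the same way by loading ImageNet weights and replacing its classification head with a single-logit layer; the paper reports that this ViT variant gives the highest accuracy.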