The rapid advancement of deepfake technology makes highly convincing fake videos increasingly difficult to detect, posing risks such as misinformation, identity theft, and privacy violations. In response, this paper proposes an approach to deepfake video detection that integrates features derived from hybrid ant colony optimization–particle swarm optimization (ACO-PSO) with deep learning, enhancing detection accuracy and robustness. ACO-PSO features are extracted from the spatial and temporal characteristics of video frames, capturing subtle patterns indicative of deepfake manipulation; a deep learning classifier is then trained on these features to automatically distinguish authentic from deepfake videos. Extensive experiments on comparative datasets demonstrate the superiority of the proposed method in detection accuracy, robustness to manipulation techniques, and generalization to unseen data, and an analysis of its computational efficiency highlights its practical feasibility for real-time applications. The proposed method achieved an accuracy of 98.91% and an F1 score of 99.12%, indicating remarkable success in deepfake detection. The integration of ACO-PSO features and deep learning enables comprehensive analysis, bolstering precision and resilience in detecting deepfake content. This approach addresses the challenges involved in facial forgery detection and contributes to safeguarding digital media integrity amid misinformation and manipulation.
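The ACO-PSO feature step described in the abstract can be illustrated with a minimal, stdlib-only sketch. Everything here is an assumption for illustration: the synthetic "frame descriptors" (only features 0 and 2 carry a real/fake signal), the class-separability fitness, and the particular hybrid mechanics (ants sample features by pheromone, then a PSO-style pull nudges each candidate toward the global best) are not taken from the paper, which does not publish its implementation.

```python
import random

random.seed(0)

N_FEATURES = 6

# Toy stand-in for per-frame spatio-temporal descriptors. Only features
# 0 and 2 carry a real/fake signal (a synthetic assumption, not the
# paper's actual feature set).
def make_sample(label):
    x = [random.gauss(0.0, 1.0) for _ in range(N_FEATURES)]
    x[0] += 3.0 * label
    x[2] -= 3.0 * label
    return x, label

data = [make_sample(i % 2) for i in range(200)]
n_per_class = 100

def fitness(mask):
    """Class separability of the selected subset minus a size penalty."""
    if not any(mask):
        return float("-inf")
    sep = 0.0
    for j in range(N_FEATURES):
        if mask[j]:
            m1 = sum(x[j] for x, y in data if y == 1) / n_per_class
            m0 = sum(x[j] for x, y in data if y == 0) / n_per_class
            sep += abs(m1 - m0)
    return sep - 0.1 * sum(mask)

ANTS, ITERS, EVAP, PSO_PULL = 10, 30, 0.9, 0.3
pheromone = [1.0] * N_FEATURES
best_mask = [True] * N_FEATURES
best_fit = fitness(best_mask)

for _ in range(ITERS):
    for _ in range(ANTS):
        # ACO step: each ant includes feature j with a pheromone-driven
        # probability.
        mask = [random.random() < pheromone[j] / (pheromone[j] + 1.0)
                for j in range(N_FEATURES)]
        # PSO-style step: pull the candidate toward the global best.
        mask = [best_mask[j] if random.random() < PSO_PULL else mask[j]
                for j in range(N_FEATURES)]
        f = fitness(mask)
        if f > best_fit:
            best_mask, best_fit = mask, f
    # Evaporate, then deposit pheromone on the best subset so far.
    pheromone = [EVAP * p for p in pheromone]
    for j in range(N_FEATURES):
        if best_mask[j]:
            pheromone[j] += max(best_fit, 0.0)

print(best_mask, round(best_fit, 3))
```

In a full pipeline, the selected subset would feed a deep learning classifier rather than be scored directly; the sketch only shows how the two metaheuristics can cooperate on the feature-selection stage.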
Visibility is a measure of the atmospheric transparency at an observation point, expressed as the maximum horizontal distance over which a person can see and identify objects. Low atmospheric visibility often occurs in conjunction with air pollution, posing hazards to both traffic safety and human health. In this study, we combined satellite remote sensing images with environmental data to explore the classification performance of two distinct multimodal data processing techniques. The first approach involves developing four multimodal data classification models using deep learning. The second approach integrates deep learning and machine learning to create twelve multimodal data classifiers. Based on the results of a five-fold cross-validation experiment, the inclusion of various environmental data significantly enhances the classification performance of satellite imagery. Specifically, the test accuracy increased from 0.880 to 0.903 when using the deep learning multimodal fusion technique. Furthermore, when combining deep learning and machine learning for multimodal data processing, the test accuracy improved even further, reaching 0.978. Notably, weather conditions, as part of the environmental data, play a crucial role in enhancing visibility prediction performance.
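The core of the second approach, combining learned image features with environmental data before a conventional classifier, can be sketched with synthetic stand-ins. The weak 2-D "satellite image embedding", the single "weather" value, and the nearest-centroid classifier below are all assumptions for illustration; the study's actual deep and machine learning models are not reproduced here. The sketch does show the expected qualitative effect: five-fold cross-validation accuracy rises when the environmental feature is fused in.

```python
import random

random.seed(1)

# Synthetic stand-ins (assumptions, not the study's data): a weak 2-D
# "image embedding" and one strongly informative "weather" value.
def make_sample(label):
    img = [random.gauss(0.6 * label, 1.0), random.gauss(-0.6 * label, 1.0)]
    weather = [random.gauss(2.0 * label, 1.0)]
    return img, weather, label

data = [make_sample(i % 2) for i in range(300)]

def nearest_centroid_accuracy(features, labels, k=5):
    """k-fold cross-validation accuracy of a nearest-centroid classifier,
    a toy stand-in for the paper's machine learning stage."""
    n, correct = len(labels), 0
    for fold in range(k):
        test_idx = set(range(fold, n, k))
        # Per-class centroids from the training folds only.
        cents = {}
        for c in (0, 1):
            rows = [features[i] for i in range(n)
                    if i not in test_idx and labels[i] == c]
            cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
        # Assign each held-out sample to the nearest centroid.
        for i in test_idx:
            d = {c: sum((a - b) ** 2 for a, b in zip(features[i], cents[c]))
                 for c in (0, 1)}
            correct += labels[i] == min(d, key=d.get)
    return correct / n

labels = [y for _, _, y in data]
img_only = nearest_centroid_accuracy([img for img, _, _ in data], labels)
fused = nearest_centroid_accuracy([img + w for img, w, _ in data], labels)
print(round(img_only, 3), round(fused, 3))
```

The fused accuracy exceeds the image-only accuracy by a clear margin, mirroring the reported gain from 0.880 to 0.978 when environmental data are included, though the absolute numbers here are artifacts of the synthetic data.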