Synthetic media, or "deepfakes", are making great advances in visual quality, diversity, and verisimilitude, empowered by large-scale publicly accessible datasets and rapid technical progress in deep generative modeling. As this technology heralds a paradigm shift in how online content is trusted, researchers in digital image forensics have responded with various proposals to reliably detect AI-generated images in the wild. However, binary classification of image authenticity is insufficient to regulate the ethical usage of deepfake technology as new applications are developed. This article provides an overview of the major innovations in synthetic forgery detection as of 2020, while highlighting the recent shift in research towards ways to attribute AI-generated images to their generative sources with evidence. We define the various categories of deepfakes in existence, the subtle processing traces and fingerprints that distinguish AI-generated images from reality and from each other, and the different degrees of attribution possible with the current understanding of generative algorithms. Additionally, we describe the limitations of synthetic image recognition methods in practice, the counter-forensic attacks devised to exploit these limitations, and directions for new research to assure the long-term relevance of deepfake forensics. Reliable, explainable, and generalizable attribution methods would hold malicious users accountable for AI-enabled disinformation, grant plausible deniability to appropriate users, and facilitate intellectual property protection of deepfake technology.
Antibiotic resistance is one of the biggest threats to global health, resulting in an increasing number of people suffering from severe illness or dying due to infections that were once easily curable with antibiotics. Pseudomonas aeruginosa is a major pathogen that has rapidly developed antibiotic resistance, and the WHO has placed it on its critical priority list. DNA aptamers can act as potential candidates for novel antimicrobial agents. In this study, we demonstrated that an existing aptamer is able to affect the growth of P. aeruginosa. A computational screen was conducted for aptamers that could bind to BamA, a well-conserved and essential outer membrane protein in Gram-negative bacteria. Molecular docking of about 100 functional DNA aptamers with the BamA protein was performed via both local and global docking approaches. Additionally, genetic algorithm analysis was carried out to rank the aptamers based on their binding affinity. The top-ranked aptamers with good binding to the BamA protein were synthesised to investigate their in vitro antibacterial activity. Among all aptamers, Apt31, which is known to bind the antitumor drug Daunomycin, exhibited the highest HADDOCK score and resulted in a significant (p < 0.05) reduction in P. aeruginosa growth. Apt31 also induced membrane disruption that resulted in DNA leakage. Hence, computational screening may result in the identification of aptamers that bind to the desired active site with high affinity.
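The rank-and-select step of a screen like this can be sketched in a few lines. This is an illustrative sketch only: aside from Apt31, the aptamer names and every score value below are invented, and in the actual study the scores came from HADDOCK docking against BamA, where a more negative score indicates stronger predicted binding.

```python
# Hypothetical sketch: pick the strongest predicted binders from a panel of
# docking scores for in vitro follow-up. All values are made up; more
# negative scores are assumed to mean stronger predicted binding.
scores = {
    "Apt31": -142.7,  # illustrative values only
    "Apt12": -98.4,
    "Apt05": -110.2,
    "Apt47": -87.9,
}

# Rank aptamers from strongest to weakest predicted binding
# (ascending score, since more negative is better).
ranked = sorted(scores.items(), key=lambda kv: kv[1])
top_hits = [name for name, _ in ranked[:2]]
print(top_hits)
```

In practice the cutoff for "top hits" would be chosen from the score distribution and synthesis budget rather than a fixed count.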
In recent years, automatic facial analysis has attracted much interest among computer science researchers in the healthcare and computer vision fields studying facial anthropometric measurements from photographs. However, to date, no healthcare or computer vision publication has used standardized photographs to differentiate features between sub-ethnic groups by leveraging the power of machine learning on two-dimensional computer vision benchmark data sets (2D CVBDs). Thus, the present work is an interdisciplinary study at the interface of the healthcare and computer vision fields that attempts to fill this gap in the literature: we explore the use of machine learning on 2,789 photographs from eleven 2D CVBDs to identify the k top discriminative features in major and sub-ethnic groups. These features are ranked based on information gain values and p-values, and we provide a comprehensive analysis of information-gain-based versus p-value-based features. Our machine learning model achieves an accuracy of 96-99%, and our findings reveal that information-gain-based features outperform p-value-based features. The top three information-gain-based features in sub-ethnic groups are dn (distance from the tip of the nose to the center of the mouth), hf (face height), and wn (nose width), while the top three information-gain-based features in major ethnic groups are de (distance between the inner corners of the eyelids), hf, and dn. These results are then compared with results obtained using standard deep learning architectures such as OxfordNet (VGG16), Residual Networks (ResNet50), and Inception-V3, which achieved accuracies of 90-94%. We hope that these findings will lead to future collaboration between computer vision and healthcare researchers studying facial anthropometric measurements.
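The information-gain-based ranking described above can be sketched with scikit-learn's `mutual_info_classif` as the information-gain estimator. The data below are a synthetic stand-in, not the study's photographs: we fabricate a feature matrix whose first column (labeled with the paper's `dn` name purely for illustration) is the only one that depends on the class label.

```python
# Hypothetical sketch: rank facial-measurement features by information gain.
# X rows are "photographs", columns are measurements (names borrowed from the
# paper for illustration); y holds group labels. The data are synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

# Toy data: 200 samples, 3 features; only the first feature carries signal.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 3))
X[:, 0] += 2.0 * y  # make "dn" informative about the label

names = ["dn", "hf", "wn"]
gains = mutual_info_classif(X, y, random_state=0)

# Rank features from most to least informative.
ranking = sorted(zip(names, gains), key=lambda t: -t[1])
for name, g in ranking:
    print(f"{name}: {g:.3f}")
```

On real data, each column would be an anthropometric measurement extracted from a standardized photograph, and the ranking would feed the top k features into the downstream classifier.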