Domain Generation Algorithms (DGAs) are frequently used by botnets to generate large numbers of domains that serve as rendezvous points for malware command-and-control servers. Many generation algorithms are simplistic, however, and are easily detected by traditional machine learning techniques. In this paper, three variants of Generative Adversarial Networks (GANs) are optimized to generate domains with characteristics similar to those of benign domains, producing domains that largely evade several state-of-the-art deep-learning-based DGA classifiers. We additionally provide a detailed analysis of the offensive usability of each variant with respect to repeated and existing domain collisions. Finally, we fine-tune the state-of-the-art DGA classifiers by adding GAN-generated samples to their original training datasets and analyze the resulting changes in performance. Our results show that GAN-based DGAs evade DGA classifiers more effectively than traditional DGAs and that, among the variants, the Wasserstein GAN with Gradient Penalty (WGANGP) is the highest-performing DGA for both offensive and defensive use.
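To make concrete what a "traditional" DGA looks like, the sketch below is a minimal seed-and-date generator of the kind the abstract contrasts with learned models. It is purely illustrative (the seed, hashing scheme, and domain length are invented here, not taken from any specific malware family); the paper's GAN-based DGAs replace this deterministic procedure with a generative model.

```python
import hashlib
from datetime import date

def simple_dga(seed: str, day: date, n: int = 5, tld: str = ".com") -> list:
    """Generate n pseudo-random domains from a shared seed and the current date.

    Both the bot and its operator can run this independently and agree on the
    same rendezvous domains for a given day. Hypothetical example only.
    """
    domains = []
    for i in range(n):
        data = "{}-{}-{}".format(seed, day.isoformat(), i).encode()
        digest = hashlib.md5(data).hexdigest()
        # Use the first 12 hex characters as the second-level domain.
        domains.append(digest[:12] + tld)
    return domains
```

Because the output distribution of such generators differs sharply from benign registrations (uniform hex characters, fixed length), classical classifiers detect them easily, which is precisely the weakness the GAN variants exploit.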
Remote sensing change detection, identifying changes between scenes of the same location, is an active area of research with a broad range of applications. Recent advances in multimodal self-supervised pretraining have produced state-of-the-art methods that surpass vision models trained solely on optical imagery. The remote sensing field offers a wealth of overlapping 2D and 3D modalities that can be exploited to supervise representation learning in vision models. In this paper we propose Contrastive Surface-Image Pretraining (CSIP) for joint learning from optical RGB and above ground level (AGL) map pairs. We then evaluate these pretrained models on several building segmentation and change detection datasets, showing that our method extracts features relevant to downstream applications in which natural and artificial surface information is relevant.
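The core of contrastive pretraining over paired modalities can be sketched as a symmetric InfoNCE objective on batches of (RGB, AGL) embeddings. This is a generic formulation of contrastive pairing, not CSIP's actual loss or architecture; the temperature value and the assumption of L2-normalized embeddings are illustrative choices.

```python
import numpy as np

def info_nce(img_emb: np.ndarray, agl_emb: np.ndarray, temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, agl_emb: (batch, dim) L2-normalized outputs of the two encoders,
    where row i of each matrix comes from the same scene (the positive pair).
    """
    logits = img_emb @ agl_emb.T / temperature      # (batch, batch) similarities
    labels = np.arange(len(logits))                 # positives lie on the diagonal

    def ce(l: np.ndarray) -> float:
        # Numerically stable softmax cross-entropy with diagonal targets.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-AGL and AGL-to-image directions.
    return (ce(logits) + ce(logits.T)) / 2.0
```

Minimizing this loss pulls embeddings of the same scene together across modalities and pushes mismatched scenes apart, which is what lets surface (AGL) information supervise the optical encoder.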
Digital image steganalysis is the process of detecting whether an image contains concealed data embedded in its pixel space by a steganography algorithm. Detecting such images is strongly motivated by Advanced Persistent Threat (APT) groups, such as APT37 Reaper, which commonly use these techniques to transmit malicious shellcode for further post-exploitation activity on a compromised host. Detection has become increasingly difficult because modern steganography algorithms are advancing faster than the steganalysis techniques designed to combat them: they embed messages with only minor modifications to the original content, and these modifications vary from image to image. In this paper, we pipeline Spatial Rich Model (SRM) feature extraction, Principal Component Analysis (PCA), and Deep Neural Networks (DNNs) to perform image steganalysis. Our proposed model, Neural Spatial Rich Models (NSRM), is an ensemble of DNN classifiers trained to detect four state-of-the-art steganography algorithms at five embedding rates, yielding an end-to-end model that can more easily be deployed at scale. Our results additionally show that the proposed model outperforms other current state-of-the-art neural-network-based image steganalysis techniques. Lastly, we provide an analysis of the current academic steganalysis benchmark dataset, BOSSBase, as well as of detection performance across various file formats, with the aim of moving image steganalysis algorithms toward use in real industry applications.
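The first two stages of the pipeline can be sketched in miniature: compute high-pass residuals from the image (the idea behind SRM, which in full uses a large bank of filters and tens of thousands of co-occurrence features), then reduce the feature dimensionality with PCA before classification. The single horizontal-difference filter and histogram below are toy stand-ins for the real SRM feature set, assumed here for illustration only.

```python
import numpy as np

def residual_features(img: np.ndarray, T: int = 2) -> np.ndarray:
    """Truncated-residual histogram from one toy high-pass filter.

    img: 2D grayscale array. Real SRM applies many such kernels and builds
    co-occurrence statistics; this returns a single marginal histogram.
    """
    # First-order horizontal residual: differences between adjacent pixels.
    r = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    r = np.clip(r, -T, T)                                 # truncate to [-T, T]
    hist, _ = np.histogram(r, bins=np.arange(-T, T + 2))  # one bin per value
    return hist / hist.sum()

def pca_reduce(X: np.ndarray, k: int) -> np.ndarray:
    """Project feature rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The reduced features would then be fed to the DNN ensemble; steganographic embedding perturbs the residual statistics slightly, and the classifier learns to separate those perturbed distributions from cover images.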