Although CNN-based deblurring models have shown their superiority in removing motion blur, restoring a photorealistic image from severe motion blur remains an ill-posed problem due to the loss of temporal information and textures. Event cameras such as the Dynamic and Active-pixel Vision Sensor (DAVIS) [3] simultaneously produce gray-scale Active Pixel Sensor (APS) frames and events; the events capture fast motion at very high temporal resolution (i.e., 1 µs) and thus provide extra information for the blurry APS frames. Due to the natural noise and sparsity of events, we employ a recurrent encoder-decoder architecture to generate dense recurrent event representations, which encode the overall historical information. We concatenate the original blurry image with the event representation as our hybrid input, from which the network learns to restore the sharp output. We conduct extensive experiments on the GoPro dataset and a real blurry event dataset captured by a DAVIS240C. Our experimental results on both synthetic and real images demonstrate state-of-the-art performance for 1280 × 720 images at 30 fps.
INDEX TERMS Event-based Vision, High Speed, Image Deblurring, Real-Time.
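The hybrid input described above can be sketched as a channel-wise concatenation of the blurry APS frame with the dense event representation. This is a minimal illustration with assumed shapes (the 8-channel event representation and random data are hypothetical placeholders, not the paper's exact configuration):

```python
import numpy as np

H, W = 180, 240  # DAVIS240C APS resolution

# Gray-scale blurry APS frame (1 channel) and a dense recurrent event
# representation (8 channels assumed for illustration).
blurry_frame = np.random.rand(1, H, W).astype(np.float32)
event_repr = np.random.rand(8, H, W).astype(np.float32)

# Channel-wise concatenation forms the hybrid input fed to the network.
hybrid_input = np.concatenate([blurry_frame, event_repr], axis=0)
print(hybrid_input.shape)  # (9, 180, 240)
```

In practice the event representation would be produced by the recurrent encoder-decoder from the raw event stream, rather than sampled randomly.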
Contemporary deep multi-scale deblurring models suffer from several issues: 1) they perform poorly on non-uniformly blurred images/videos; 2) simply increasing the model depth with finer-scale levels does not improve deblurring; 3) individual RGB frames contain limited motion information for deblurring; 4) previous models have limited robustness to spatial transformations and noise. Below, we extend our preliminary paper [59] with several mechanisms to address the above issues: I) we present a novel self-supervised event-guided deep hierarchical Multi-patch Network (MPN) to deal with blurry images and videos via fine-to-coarse hierarchical localized representations; II) we propose a novel stacked pipeline, StackMPN, to improve deblurring performance as network depth increases; III) we propose an event-guided architecture that exploits motion cues contained in videos to tackle complex blur in videos; IV) we propose a novel self-supervised step that exposes the model to random transformations (rotations, scale changes) and makes it robust to Gaussian noise. Our MPN achieves the state of the art on the GoPro and VideoDeblur datasets with a 40× faster runtime than current multi-scale methods. Taking 30 ms to process an image at 1280×720 resolution, it is the first real-time deep motion deblurring model for 720p images at 30 fps. With StackMPN, we obtain significant improvements of over 1.2 dB on the GoPro dataset by increasing the network depth. Utilizing the event information and self-supervision further boosts results to 33.83 dB.
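The stacking idea behind StackMPN can be sketched as sequential refinement: instead of adding finer scale levels to one network, several sub-networks are cascaded so each stage refines the previous stage's estimate. The sketch below is illustrative only; `mpn_stage` is a hypothetical placeholder for one learned multi-patch sub-network, not the actual model:

```python
import numpy as np

def mpn_stage(image):
    # Placeholder for one hierarchical multi-patch sub-network.
    # A real stage would be a trained CNN; here we only keep the
    # estimate in a valid intensity range to show the data flow.
    return np.clip(image, 0.0, 1.0)

def stack_mpn(blurry, num_stages=3):
    out = blurry
    for _ in range(num_stages):
        out = mpn_stage(out)  # each stage refines the previous estimate
    return out

blurry = np.random.rand(3, 720, 1280).astype(np.float32)  # 720p RGB input
sharp = stack_mpn(blurry)
print(sharp.shape)  # (3, 720, 1280)
```

Stacking whole sub-networks keeps each stage's receptive-field structure intact, which is one plausible reason depth added this way helps where extra scale levels do not.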
Insider threat detection is important for the smooth operation and security of an organizational system. Most existing detection models establish a historical baseline by reconstructing single-day, individual-user behaviors and then treat any outlier from the baseline as a threat. However, such methods ignore the temporal and spatial correlations between different activities, which results in unsatisfactory performance. To address this issue, we propose a novel insider threat detection method, namely Memory-Augmented Insider Threat Detection (MAITD). The idea is motivated by the observation that combining an individual model, which focuses on the historical baseline, with a group model, which represents the peer baseline, can effectively identify low-signal yet long-lasting insider threats and reduce false positives. Specifically, our MAITD captures the temporal and spatial correlations of user behaviors by constructing a compound behavioral matrix and a common group model, and combines specific application scenarios to integrate the detection results. Moreover, it introduces a memory-augmented network into the autoencoder to enlarge the reconstruction error of abnormal samples, thereby reducing the false negative rate. Experimental results on the CERT dataset show that the instance-based and user-based AUCs of MAITD reach 87.54% and 94.56%, respectively, significantly outperforming previous works.
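The memory-augmented idea can be sketched as follows: the encoder's latent vector is replaced by an attention-weighted combination of learned "normal" memory items, so an abnormal sample cannot be expressed by the memory and its reconstruction error grows. This is a generic illustration of memory-augmented autoencoders, not MAITD's exact formulation; the memory size and latent dimension are assumed:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(z, memory):
    # Attention weights from the similarity between encoding z
    # and each memory slot; the retrieved latent is a convex
    # combination of memory items only.
    w = softmax(memory @ z)
    return w @ memory

rng = np.random.default_rng(0)
memory = rng.standard_normal((10, 4))  # 10 memory slots, latent dim 4 (assumed)

# An encoding close to a stored "normal" pattern is retrieved faithfully;
# an encoding far from all slots is forced back toward the memory items,
# enlarging the downstream reconstruction error.
z = memory[0] + 0.01 * rng.standard_normal(4)
z_hat = memory_read(z, memory)
print(z_hat.shape)  # (4,)
```

At detection time, the reconstruction error of the decoded sample (after this memory read) serves as the anomaly score.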
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.