To develop a cost-effective condition-based maintenance strategy, accurate prediction of the Remaining Useful Life (RUL) is key. Many failure mechanisms in engineering can be traced back to underlying degradation processes. This article proposes a two-stage prognostic framework for individual units subject to hard failure, based on joint modeling of degradation signals and time-to-event data. The proposed algorithm features a low computational load, online prediction, and dynamic updating. Its application to automotive battery RUL prediction is discussed as an example, and the effectiveness of the proposed method is demonstrated through a simulation study and real data.
Figure 1: Example of a partially attacked DeepFake video. The green and red boxes represent real and fake faces, respectively. This figure illustrates that not all faces in a fake video are manipulated: real and fake faces may appear in the same frame, and the label of one person's face may differ across nearby frames.
Recent advances in the Vision Transformer (ViT) and its improved variants have shown that self-attention-based networks surpass traditional Convolutional Neural Networks (CNNs) in most vision tasks. However, existing ViTs focus on standard accuracy and computation cost, lacking an investigation of their intrinsic influence on model robustness and generalization. In this work, we conduct a systematic evaluation of the components of ViTs in terms of their impact on robustness to adversarial examples, common corruptions, and distribution shifts. We find that some components can be harmful to robustness. By using and combining robust components as building blocks of ViTs, we propose the Robust Vision Transformer (RVT), a new vision transformer with superior performance and strong robustness. We further propose two new plug-and-play techniques, position-aware attention scaling and patch-wise augmentation, to augment RVT; we abbreviate the augmented model as RVT*. Experimental results on ImageNet and six robustness benchmarks show the advanced robustness and generalization ability of RVT compared with previous ViTs and state-of-the-art CNNs. Furthermore, RVT-S* achieves the Top-1 rank on multiple robustness leaderboards, including ImageNet-C and ImageNet-Sketch. The code will be available at https://git.io/Jswdk. Preprint, under review.
The task of Language-Based Image Editing (LBIE) aims to generate a target image by editing a source image according to a given language description. The main challenge of LBIE is to disentangle the semantics of the image and the text and then combine them to generate realistic images; the editing performance therefore depends heavily on the learned representation. In this work, a conditional generative adversarial network (cGAN) is utilized for LBIE. We find that existing conditioning methods in cGANs lack representation power because they cannot learn the second-order correlation between two conditioning vectors. To solve this problem, we propose an improved conditional layer, named the Bilinear Residual Layer (BRL), to learn more powerful representations for the LBIE task. Qualitative and quantitative comparisons demonstrate that our method generates images with higher quality than previous LBIE techniques.
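To make the distinction concrete, the following is a minimal NumPy sketch (not the paper's implementation; all names, shapes, and the residual wiring are illustrative assumptions) contrasting concatenation-based linear conditioning, which is first-order in the image feature h and the text condition c, with a bilinear term h^T W_k c that captures second-order correlations between the two conditioning vectors, plus a linear residual path:

```python
import numpy as np

def linear_conditioning(h, c, W):
    # Standard cGAN-style conditioning: a linear map over the
    # concatenated vectors. Each output is a weighted sum of the
    # entries of h and c, so it cannot express products h_i * c_j.
    return W @ np.concatenate([h, c])

def bilinear_conditioning(h, c, Wk):
    # Bilinear term: output_k = h^T Wk[k] c, i.e. a weighted sum over
    # all pairwise products h_i * c_j (second-order correlation).
    # Wk has shape (K, dim_h, dim_c) for K output units.
    return np.einsum('i,kij,j->k', h, Wk, c)

def bilinear_residual(h, c, Wk, V):
    # Illustrative "bilinear + residual" layer: the bilinear term plus
    # a first-order linear path over the concatenated inputs.
    return bilinear_conditioning(h, c, Wk) + linear_conditioning(h, c, V)
```

With all-ones inputs of dimensions 4 and 3 and all-ones bilinear weights, each bilinear output sums all 4 x 3 pairwise products; the design point is that no choice of W in the linear path alone can reproduce such input-dependent products.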