Image-based virtual try-on for fashion has gained considerable attention recently. The task requires fitting a clothing item onto a target model image. An efficient framework for this is composed of two stages: (1) warping (transforming) the try-on cloth to align with the pose and shape of the target model, and (2) a texture transfer module to seamlessly integrate the warped try-on cloth onto the target model image. Existing methods suffer from artifacts and distortions in their try-on output. In this work, we present SieveNet, a framework for robust image-based virtual try-on. Firstly, we introduce a multi-stage coarse-to-fine warping network to better model fine-grained intricacies (while transforming the try-on cloth) and train it with a novel perceptual geometric matching loss. Next, we introduce a try-on cloth conditioned segmentation mask prior to improve the texture transfer network. Finally, we also introduce a duelling triplet loss strategy for training the texture translation network, which further improves the quality of the generated try-on results. We present extensive qualitative and quantitative evaluations of each component of the proposed pipeline and show significant performance improvements against the current state-of-the-art method.
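The duelling triplet loss mentioned above can be illustrated with a minimal sketch. Here we assume a standard triplet formulation in which the ground-truth image acts as the anchor, the current network output as the positive, and an output from an earlier training stage as the negative; the function name, distance metric, and margin are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def duelling_triplet_loss(ground_truth, current_output, previous_output, margin=0.1):
    """Hedged sketch of a duelling triplet loss: encourage the current
    output to be closer to the ground truth than the output produced at
    an earlier training stage was.

    All names and the L2 distance / margin values are assumptions for
    illustration only.
    """
    d_pos = np.linalg.norm(ground_truth - current_output)   # anchor-positive distance
    d_neg = np.linalg.norm(ground_truth - previous_output)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)
```

Under this reading, the loss is zero once the current output beats the earlier-stage output by at least the margin, so training pressure is applied only where the network has not yet improved on its past self.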
Adversarial examples are fabricated inputs, indistinguishable from the original images, that mislead neural networks and drastically lower their performance. The recently proposed AdvGAN, a GAN-based approach, takes the input image as a prior for generating adversaries to target a model. In this work, we show how latent features can serve as better priors than input images for adversary generation by proposing AdvGAN++, a version of AdvGAN that achieves higher attack success rates than AdvGAN while generating perceptually realistic images on the MNIST and CIFAR-10 datasets.
With the rapid growth of online commerce, image-based virtual try-on systems for fitting new in-shop garments onto a person image present an exciting opportunity to deliver an interactive customer experience. Current state-of-the-art methods achieve this in a two-stage pipeline, where the first stage transforms the in-shop cloth to fit the body shape of the target person and the second stage employs an image composition module to seamlessly integrate the transformed in-shop cloth onto the target person image. In the present work, we introduce a multi-scale patch adversarial loss for training the warping module of a state-of-the-art virtual try-on network. We show that the proposed loss produces robust transformations of clothes to fit the body shape while preserving texture details, which in turn improves image composition in the second stage. We perform extensive evaluations of the effect of the proposed loss on try-on performance and show significant improvement over the existing state-of-the-art method.
Deep neural networks (DNNs) are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce a new retrospective loss to improve the training of deep neural network models by utilizing the prior experience available in past model states during training. Minimizing the retrospective loss, along with the task-specific loss, pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. Although a simple idea, we analyze the method and conduct comprehensive sets of experiments across domains (images, speech, text, and graphs) to show that the proposed loss results in improved performance across input domains, tasks, and architectures.
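The push-pull idea described above can be sketched as a simple scalar loss on model predictions. This is a minimal illustration assuming an L2 distance and a weighting factor `kappa`; the function name, signature, and exact weighting are assumptions for exposition, not necessarily the paper's precise formulation.

```python
import numpy as np

def retrospective_loss(current_pred, past_pred, target, kappa=2.0):
    """Hedged sketch of a retrospective loss: pull current predictions
    toward the target while pushing them away from the predictions of a
    past model state.

    The specific form (L2 norms, the (kappa + 1) vs. kappa weighting) is
    an illustrative assumption.
    """
    pull = np.linalg.norm(current_pred - target)     # stay close to the ground truth
    push = np.linalg.norm(current_pred - past_pred)  # move away from the past model state
    return (kappa + 1.0) * pull - kappa * push
```

Minimizing this quantity alongside the task loss rewards predictions that are both accurate and measurably different from what an earlier checkpoint produced, which is one plausible reading of "pushing toward the optimal state while pulling away from a previous state."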
The inadvertent stealing of private/sensitive information using Knowledge Distillation (KD) has been getting significant attention recently and has guided subsequent defense efforts, considering its critical nature. The recent work Nasty Teacher proposed to develop teachers which cannot be distilled or imitated by models attacking them. However, the promise of confidentiality offered by a nasty teacher is not well studied, and as a further step toward strengthening against such loopholes, we attempt to bypass its defense and successfully steal (or extract) information in its presence. Specifically, we analyze Nasty Teacher from two different directions and subsequently leverage them carefully to develop simple yet efficient methodologies, named HTC and SCM, which increase learning from Nasty Teacher by up to 68.63% on standard datasets. Additionally, we also explore an improvised defense method based on our insights into stealing. Our detailed set of experiments and ablations on diverse models/settings demonstrates the efficacy of our approach.