A model compound of Tyr244-His240 of bovine cytochrome c oxidase was synthesized and examined with UV resonance Raman (UVRR) spectroscopy as well as UV absorption spectroscopy and pH titration. Owing to the covalent linkage between imidazole and phenol, the pKa values of the phenolic OH and imidazolic NδH groups were lowered by 1.1 and 2.3 units, respectively. UVRR measurements of ortho-imidazole-bound para-cresol (Im⧫CrOH), its deprotonated anion (Im⧫CrO⁻), and the deprotonated neutral radical (Im⧫CrO•), together with their imidazole-perdeuterated, cresol-perdeuterated, and ¹⁸O derivatives, allowed assignment of Raman bands to the imidazole and phenol modes. Unexpectedly, some of the imidazole vibrations were resonance enhanced via the ππ* transition of phenol, although they were not observed for the corresponding equimolar mixture of imidazole and p-cresol, indicating delocalization of π electrons between the imidazole and phenol rings through the covalent linkage. These features changed appreciably when a bulky group was incorporated at the imidazole C2 position, which forces a staggered conformation. The C−O stretching RR band was observed at 1530 cm⁻¹ only for the radical state. The substituent at the imidazole C2 position appears to appreciably increase the double-bond character of the C−O bond in the radical state relative to the unsubstituted compound. The Y8a band is not shifted upon deprotonation, although it was downshifted by 12 cm⁻¹ for unmodified p-cresol. The UVRR difference spectra of the anion and radical with respect to the neutral state are discussed in relation to the corresponding difference spectra of the enzyme.
Modeling the long-term facial aging process is extremely challenging due to the presence of large and non-linear variations during the face development stages. In order to address this problem efficiently, this work first decomposes the aging process into multiple short-term stages. Then, a novel generative probabilistic model, named Temporal Non-Volume Preserving (TNVP) transformation, is presented to model the facial aging process at each stage. Unlike Generative Adversarial Networks (GANs), which require an empirical balance threshold, and Restricted Boltzmann Machines (RBMs), which are intractable, our proposed TNVP approach guarantees a tractable density function and exact inference and evaluation for embedding the feature transformations between faces in consecutive stages. Our model shows its advantages not only in capturing the non-linear age-related variance in each stage but also in producing a smooth synthesis in age progression across faces. Our approach can model any face in the wild given only four basic landmark points. Moreover, the structure can be transformed into a deep convolutional network while keeping the advantages of probabilistic models with tractable log-likelihood density estimation. Our method is evaluated in terms of both synthesizing age-progressed faces and cross-age face verification, and it consistently shows state-of-the-art results on various face aging databases, i.e., FG-NET, MORPH, AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A large-scale face verification on the MegaFace challenge 1 is also performed to further show the advantages of our proposed approach.
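To make the idea of a tractable, invertible density model concrete, here is a minimal Python/NumPy sketch of an affine coupling (non-volume-preserving) transformation. It is not the authors' TNVP architecture; the even split of the feature vector, the toy scale/translation networks, and the names are illustrative assumptions, but the exact, cheaply computed log-determinant of the Jacobian is what makes log-likelihood evaluation tractable in this family of models.

```python
import numpy as np

def affine_coupling(x, scale_net, translate_net):
    """Illustrative non-volume-preserving coupling step (assumed toy setup).

    The first half of x passes through unchanged; the second half is scaled
    and shifted by functions of the first half. The Jacobian is triangular,
    so its log-determinant is simply the sum of the log-scales.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s = scale_net(x1)        # log-scales, same shape as x2
    t = translate_net(x1)    # translations, same shape as x2
    y = np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)
    log_det_jacobian = s.sum(axis=-1)   # exact, cheap to evaluate
    return y, log_det_jacobian

# Toy usage with hypothetical linear "networks"
rng = np.random.default_rng(0)
W_s = rng.normal(size=(4, 4)) * 0.1
W_t = rng.normal(size=(4, 4)) * 0.1
x = rng.normal(size=(2, 8))
y, logdet = affine_coupling(x, lambda h: h @ W_s, lambda h: h @ W_t)
print(y.shape, logdet)   # (2, 8) and one log-det per sample
```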
Cryptographic devices are vulnerable to the now well-known side-channel leakage analysis. Secret data can be revealed by power analysis attacks such as Simple Power Analysis (SPA), Differential Power Analysis (DPA), and Correlation Power Analysis (CPA). First, we give an overview of DPA in the mono-bit and multi-bit cases. Next, the existing multi-bit DPA methods are generalized into the proposed Partitioning Power Analysis (PPA) method. Finally, we focus on the CPA technique, showing that this attack is a case of PPA with special coefficients and a normalization factor. We also propose a method that improves the performance of CPA by restricting the normalization factor.
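As an illustration of the CPA technique discussed above, the following Python/NumPy sketch correlates a hypothetical Hamming-weight leakage model against measured power traces. It is an assumption-laden toy example, not the paper's exact attack: the targeted intermediate value here is simply plaintext XOR key guess (a real attack would usually target a non-linear value such as an S-box output), and the array names are hypothetical.

```python
import numpy as np

# Precomputed Hamming weights of all byte values (the leakage model)
HW = np.array([bin(v).count("1") for v in range(256)])

def cpa_key_byte(traces, plaintexts):
    """Recover one key byte by Correlation Power Analysis (toy sketch).

    traces     : (n_traces, n_samples) array of measured power traces
    plaintexts : (n_traces,) array of the known plaintext byte per trace
    """
    tr_c = traces - traces.mean(axis=0)          # centre each time sample
    peak = np.zeros(256)
    for key_guess in range(256):
        # Hypothetical power consumption under this key guess
        hyp = HW[np.bitwise_xor(plaintexts, key_guess)].astype(float)
        hyp_c = hyp - hyp.mean()
        # Pearson correlation between the hypothesis and every time sample
        num = hyp_c @ tr_c
        den = np.sqrt((hyp_c ** 2).sum() * (tr_c ** 2).sum(axis=0)) + 1e-12
        peak[key_guess] = np.abs(num / den).max()
    return int(peak.argmax())                     # guess with the highest correlation peak
```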
This paper presents a method for detecting salient objects in videos, where temporal information in addition to spatial information is fully taken into account. Following recent reports on the advantage of deep features over conventional handcrafted features, we propose a new set of spatiotemporal deep (STD) features that utilize local and global contexts over frames. We also propose a new spatiotemporal conditional random field (STCRF) to compute saliency from STD features. STCRF is our extension of CRF to the temporal domain and describes the relationships among neighboring regions both within a frame and across frames. STCRF leads to temporally consistent saliency maps over frames, contributing to accurate detection of salient object boundaries and to noise reduction during detection. Our proposed method first segments an input video into multiple scales and then computes a saliency map at each scale level using STD features with STCRF. The final saliency map is computed by fusing the saliency maps at the different scale levels. Our experiments, using publicly available benchmark datasets, confirm that the proposed method significantly outperforms state-of-the-art methods. We also applied our saliency computation to the video object segmentation task, showing that our method outperforms existing video object segmentation methods.
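A brief sketch of the multi-scale fusion step described above: saliency maps computed at different scale levels are resized back to the frame resolution and combined. The uniform averaging, the min-max normalization, and the use of OpenCV resizing are assumptions made only for illustration; the paper's actual per-scale saliency computation and STCRF inference are more involved.

```python
import cv2
import numpy as np

def fuse_saliency_maps(maps, frame_size):
    """Fuse per-scale saliency maps into one final map (illustrative sketch).

    maps       : list of 2-D float arrays, one saliency map per scale level
    frame_size : (width, height) of the original frame
    """
    resized = [cv2.resize(m.astype(np.float32), frame_size) for m in maps]
    fused = np.mean(resized, axis=0)                          # uniform average (assumed)
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
    return fused                                              # normalized to [0, 1]
```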
Applications in secure components (such as smartcards, mobile phones, or secure dongles) must be hardened against fault injection to guarantee security even in the presence of a malicious fault. Crafting applications robust against fault injection is an open problem for all actors of the secure application development life cycle, which has prompted the development of many simulation tools. A major difficulty for these tools is the absence of representative codes, criteria, and metrics to evaluate or compare the obtained results. We present FISSC, the first public code collection dedicated to the analysis of code robustness against fault injection attacks. FISSC provides a framework of various robust code implementations and an approach for comparing tools based on predefined attack scenarios. This work has been partially supported by the SERTIF project (ANR-14-ASTR-0003-01, http://sertif-projet.forge.imag.fr) and by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025).
This paper presents a novel end-to-end 3D fully convolutional network for salient object detection in videos. The proposed network uses 3D filters in the spatiotemporal domain to directly learn both spatial and temporal information, yielding 3D deep features, and transfers these features to pixel-level saliency prediction, outputting saliency voxels. In our network, we combine refinement at each layer with deep supervision to detect salient object boundaries efficiently and accurately. The refinement module recurrently enhances the feature map to incorporate contextual information. Applying deeply supervised learning to the hidden layers, on the other hand, improves the details of the intermediate saliency voxels, so the prediction is refined progressively to become finer and finer. Intensive experiments using publicly available benchmark datasets confirm that our network outperforms state-of-the-art methods. The proposed saliency model also works effectively for video object segmentation.
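To illustrate the kind of spatiotemporal 3D convolution such a network is built from, here is a minimal PyTorch sketch in Python that maps a clip of frames to per-voxel saliency scores. The layer sizes, class name, and layout are illustrative assumptions only; it is not the paper's architecture, which additionally includes recurrent refinement modules and deep supervision.

```python
import torch
import torch.nn as nn

class TinySaliency3D(nn.Module):
    """Toy 3D convolutional saliency predictor (assumed layout, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # spatiotemporal filters
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.predict = nn.Conv3d(16, 1, kernel_size=1)    # per-voxel saliency logit

    def forward(self, clip):
        # clip: (batch, channels, frames, height, width)
        return torch.sigmoid(self.predict(self.features(clip)))

# Toy usage: an 8-frame RGB clip at 64x64 resolution
voxels = TinySaliency3D()(torch.randn(1, 3, 8, 64, 64))
print(voxels.shape)   # torch.Size([1, 1, 8, 64, 64]) -- one saliency value per voxel
```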