The bit error rate (BER) performance of variant-delay multiple-access differential chaos-shift keying (VDMA-DCSK) communication systems is investigated over a multipath fading channel with delay spread. A BER formula for VDMA-DCSK over the fading channel is derived, and a two-ray Rayleigh fading channel model is used to simulate the system. The theoretical and simulation results agree closely, which supports the theoretical analysis. The multipath performance of VDMA-DCSK is compared with that of a benchmark coherent MA-CSK system and that of an invariant-delay MA-DCSK system. The results show that, in a multipath fading environment with delay spread, the VDMA-DCSK system degrades the least.
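To make the channel model concrete, the following is a minimal, hypothetical sketch of a two-ray fading channel of the kind the abstract describes: the received chip stream is the sum of a direct ray and a delayed ray, each scaled by an independent Rayleigh-distributed amplitude. The function name `two_ray_rayleigh` and the power split `p1`/`p2` are illustrative assumptions, not taken from the paper.

```python
import math
import random

def two_ray_rayleigh(chips, delay, rng, p1=0.7, p2=0.3):
    """Pass a chip sequence through a two-ray fading channel:
    r[k] = a1*s[k] + a2*s[k-delay], where a1, a2 are independent
    Rayleigh-distributed ray amplitudes with mean powers p1, p2.
    (Illustrative sketch; power split and delay in chips are assumptions.)"""
    # Rayleigh amplitude = magnitude of a complex Gaussian gain.
    a1 = math.hypot(rng.gauss(0, math.sqrt(p1 / 2)), rng.gauss(0, math.sqrt(p1 / 2)))
    a2 = math.hypot(rng.gauss(0, math.sqrt(p2 / 2)), rng.gauss(0, math.sqrt(p2 / 2)))
    out = []
    for k in range(len(chips)):
        r = a1 * chips[k]
        if k >= delay:            # delayed ray only contributes after `delay` chips
            r += a2 * chips[k - delay]
        out.append(r)
    return out

# Example: a constant chip stream makes the two-ray structure visible --
# the first `delay` outputs carry only the direct ray.
rng = random.Random(1)
chips = [1.0] * 10
faded = two_ray_rayleigh(chips, 3, rng)
```

With delay spread, the delayed ray of one chip overlaps the next chips, which is exactly the inter-chip interference the variant-delay scheme is designed to mitigate.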
An exact method is employed to analyze the bit error rate (BER) performance of the differential chaos shift keying (DCSK) communication system over fading channels. Exact BER expressions for DCSK in Nakagami-m, Rayleigh, and Rician fading channels are derived, and the exact method is compared with the Gaussian approximation (GA) method. Numerical and simulation results for the two methods are presented and compared in each fading channel, and they support the theoretical analysis.
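As background for the BER analysis above, the sketch below is a minimal Monte Carlo simulation of a single-user DCSK link over AWGN: each bit is sent as a chaotic reference half followed by a data-modulated copy, and the receiver correlates the two halves. The Chebyshev map as chip generator, the spreading length, and the function names are illustrative assumptions, not the paper's exact setup (which treats fading channels).

```python
import math
import random

def chebyshev_sequence(length, x0):
    """Chaotic reference chips from the Chebyshev map x -> 1 - 2x^2 (an assumed generator)."""
    seq, x = [], x0
    for _ in range(length):
        x = 1.0 - 2.0 * x * x
        seq.append(x)
    return seq

def dcsk_ber(num_bits, spreading, ebn0_db, rng):
    """Monte Carlo BER of single-user DCSK over AWGN (illustrative sketch)."""
    errors = 0
    for _ in range(num_bits):
        bit = rng.choice((-1, 1))
        ref = chebyshev_sequence(spreading, rng.uniform(-0.9, 0.9))
        eb = 2.0 * sum(c * c for c in ref)        # bit energy: reference + data halves
        n0 = eb / (10 ** (ebn0_db / 10.0))
        sigma = math.sqrt(n0 / 2.0)
        rx_ref = [c + rng.gauss(0, sigma) for c in ref]            # received reference half
        rx_dat = [bit * c + rng.gauss(0, sigma) for c in ref]      # received data half
        z = sum(r * d for r, d in zip(rx_ref, rx_dat))             # differential correlator
        if (1 if z >= 0 else -1) != bit:
            errors += 1
    return errors / num_bits

rng = random.Random(7)
ber_low = dcsk_ber(2000, 32, 2, rng)    # low Eb/N0: many errors
ber_high = dcsk_ber(2000, 32, 14, rng)  # high Eb/N0: few errors
```

The noise-times-noise term in the correlator output `z` is what makes the Gaussian approximation inexact at short spreading lengths, which is the motivation for the exact analysis.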
Matching images and text with deep models has been extensively studied in recent years. Mining the correlation between image and text to learn effective multi-modal features is crucial for image-text matching. However, most existing approaches model the different types of correlation independently. In this work, we propose a novel model named Adversarial Attentive Multi-modal Embedding Learning (AAMEL) for image-text matching. It combines adversarial networks and an attention mechanism to learn effective and robust multi-modal embeddings for better matching between image and text. Adversarial learning is implemented as an interplay between two processes. First, two attention models are proposed to exploit two types of image-text correlation for multi-modal embedding learning and to confuse the other process. Then a discriminator tries to distinguish the two types of multi-modal embeddings learned by the two attention models, whereby the two attention models reinforce each other. Through adversarial learning, both embeddings are expected to exploit the two types of correlation well, so that each can deceive the discriminator into believing it was generated by the other attention model. By integrating the attention mechanism with adversarial learning, the learned multi-modal embeddings become more effective for image-text matching. Extensive experiments on the benchmark Flickr30K and MSCOCO datasets demonstrate the superiority of the proposed approach over state-of-the-art image-text retrieval methods.
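The alternating objective described above (discriminator distinguishes the two embedding sources; each embedding model tries to be mistaken for the other) can be illustrated with a deliberately tiny, hypothetical sketch. Here the two "attention models" are stand-in scalar maps and the discriminator is a logistic unit; all names, dimensions, and learning rates are assumptions for illustration, not the AAMEL architecture.

```python
import math
import random

def sigmoid(x):
    x = max(-30.0, min(30.0, x))      # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-x))

class Attn:
    """Stand-in for an attention model: maps a shared feature to an embedding."""
    def __init__(self, w):
        self.w = w
    def embed(self, feat):
        return self.w * feat

class Disc:
    """Logistic discriminator: predicts P(embedding came from model A)."""
    def __init__(self, w=0.5, b=0.0):
        self.w, self.b = w, b
    def prob_a(self, e):
        return sigmoid(self.w * e + self.b)

rng = random.Random(0)
attn_a, attn_b, disc = Attn(1.5), Attn(-0.8), Disc()
lr = 0.05

for step in range(200):
    feat = rng.uniform(-1, 1)
    ea, eb = attn_a.embed(feat), attn_b.embed(feat)
    # Discriminator step: label A-embeddings 1, B-embeddings 0.
    for e, y in ((ea, 1.0), (eb, 0.0)):
        p = disc.prob_a(e)
        grad = p - y                        # d(BCE)/d(logit)
        disc.w -= lr * grad * e
        disc.b -= lr * grad
    # Adversarial step: each model uses the *flipped* label to fool the discriminator.
    for model, y_fool in ((attn_a, 0.0), (attn_b, 1.0)):
        p = disc.prob_a(model.embed(feat))
        grad = (p - y_fool) * disc.w * feat  # chain rule through the embedding
        model.w -= lr * grad

# After the interplay, probe the discriminator on fresh A-embeddings.
probs = [disc.prob_a(attn_a.embed(rng.uniform(-1, 1))) for _ in range(100)]
```

The point of the sketch is the label flip in the second loop: when neither embedding source can be reliably identified, both models have been pushed to capture what the other captures, which is the mutual reinforcement the abstract describes.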