Neural networks are information processing systems whose design was inspired by studies of the human brain's ability to learn from observation and to generalize by abstraction. In the RF and microwave areas, neural networks are used to model passive and active microwave devices to enhance circuit design. Neural networks can be trained using measured or simulated microwave device data. The trained neural networks become models of microwave devices and can be used in place of CPU-intensive EM/physics models to significantly speed up circuit design. Here we describe the fundamentals of neural networks from an RF and microwave perspective. We describe what neural networks are, how to develop them, and how to use them in RF and microwave CAD.
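The surrogate-modeling idea above can be sketched in a few lines: train a small neural network on data that stands in for EM-simulator output, then evaluate the trained network as a fast replacement model. This is a minimal illustration in NumPy, not the text's method; the one-hidden-layer network, the synthetic "device response" curve, and all variable names are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: a geometry/bias parameter x mapped to a smooth
# "device response" y, standing in for EM/physics simulation output.
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(np.pi * x)

# One hidden layer of 16 tanh units (hypothetical architecture choice).
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    """Evaluate the surrogate model; returns prediction and hidden activations."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

loss0 = mse(forward(x)[0], y)  # error before training

# Plain gradient descent on mean-squared error.
lr = 0.05
for _ in range(2000):
    pred, h = forward(x)
    g = 2.0 * (pred - y) / len(x)        # dLoss/dpred
    gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * x.T @ gh; b1 -= lr * gh.sum(0)

loss1 = mse(forward(x)[0], y)  # error after training
```

Once trained, `forward` is a cheap closed-form evaluation, which is why such a model can substitute for a CPU-intensive simulator inside a circuit-design loop.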
Diluted magnetic semiconductor Zn1−xMnxO nanowires were synthesized via in situ doping of manganese into ZnO nanowires using vapor phase growth at 500 °C. The maximum manganese content in ZnO is around 13 at. %, approaching the thermal equilibrium limit of Mn solubility in ZnO at the growth temperature. Structure and composition analysis revealed that the manganese was doped into the lattice, forming a solid solution rather than precipitates. Magnetic property measurements revealed that the as-doped Zn1−xMnxO nanowires exhibit ferromagnetic behavior with a Curie temperature of around 37 K.
In this paper, we propose a novel pretraining-based encoder-decoder framework, which can generate the output sequence from the input sequence in a two-stage manner. For the encoder of our model, we encode the input sequence into context representations using BERT. For the decoder, there are two stages in our model. In the first stage, we use a Transformer-based decoder to generate a draft output sequence. In the second stage, we mask each word of the draft sequence and feed it to BERT; then, by combining the input sequence and the draft representations generated by BERT, we use a Transformer-based decoder to predict the refined word for each masked position. To the best of our knowledge, our approach is the first to apply BERT to text generation tasks. As the first step in this direction, we evaluate our proposed method on the text summarization task. Experimental results show that our model achieves a new state-of-the-art on both the CNN/Daily Mail and New York Times datasets.
2. We design a two-stage decoder process. In this architecture, our model can generate each word of the summary considering context information from both sides.
3. We conduct experiments on the benchmark datasets CNN/Daily Mail and New York Times. Our model achieves an average score of 33.33 across ROUGE-1, ROUGE-2, and ROUGE-L on CNN/Daily Mail, which is state-of-the-art. On the New York Times dataset, our model achieves about 5.6% relative improvement on ROUGE-1.
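The two-stage decode described above (draft first, then mask each draft position and re-predict it from the input context plus the masked draft) can be sketched as a control-flow skeleton. Everything here is a hypothetical stand-in: `toy_encode`, `toy_draft_decoder`, and `toy_refine` replace BERT and the Transformer decoders with trivial rules purely to show how the mask-and-refine loop is wired, and are not the paper's implementation.

```python
MASK = "[MASK]"

def toy_encode(tokens):
    # Stand-in for the BERT encoder: just pass the tokens through
    # as the "context representation".
    return list(tokens)

def toy_draft_decoder(context):
    # Stage 1 stand-in: produce a rough draft
    # (toy rule: copy every other input token).
    return context[::2]

def toy_refine(context, masked_draft, pos):
    # Stage 2 stand-in: predict the word at the masked position from the
    # full input context plus the draft with that position masked.
    # Toy rule: return the aligned context token, uppercased; a real
    # model would condition on masked_draft as well.
    return context[pos * 2].upper()

def two_stage_generate(input_tokens):
    context = toy_encode(input_tokens)
    draft = toy_draft_decoder(context)          # stage 1: draft sequence
    refined = []
    for i in range(len(draft)):                 # stage 2: refine each word
        masked = draft[:i] + [MASK] + draft[i + 1:]
        refined.append(toy_refine(context, masked, i))
    return refined
```

The key structural point is the stage-2 loop: because every position is re-predicted with the rest of the draft visible, each refined word is conditioned on context from both sides, which a left-to-right decoder alone cannot provide.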