Here, a resolution-enhancement method is developed for post-processing images from atomic force microscopy (AFM). The method applies deep-learning neural networks to AFM topography measurements: a very deep convolutional neural network is developed to derive a high-resolution topography image from a low-resolution topography image. AFM images measured on various materials are tested in this study. The derived high-resolution AFM images are comparable with experimental high-resolution images measured at the same locations. The results suggest that this method can be developed into a general post-processing method for AFM image analysis.

Atomic force microscopy (AFM) is a well-known and powerful technique for imaging surface structures and properties at the nanoscale with ultrahigh resolution. [1,2] It tracks the motion of a cantilever affected by the interaction between the tip and the sample surface; therefore, the resolution can reach the atomic or molecular level. [3][4][5] However, unavoidable experimental errors, such as "tip crash," [6] cross-talk between topographic and electrostatic information, [7] large height variations of the sample surface, [8] and the influence of the sample's properties or the ambient environment, [9] can severely reduce the spatial resolution of AFM images. In addition, scanning a large area at high resolution usually requires a long time and may cause image drift and tip wear, as well as distortion of the nanostructures on the sample surface. [10] Generally speaking, low-resolution images contain insufficient information, which may leave important features, including grain boundaries, surface defects, dislocations, and interfaces, unclear or even overlooked.
Hence, several methods and techniques have been proposed to enhance the resolution and quality of AFM images, such as improving the shape and properties of the tip or cantilever, [11][12][13][14] developing and applying multiple-frequency excitation techniques, [15][16][17][18] contour metrology, [19]
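The text does not give the network architecture in detail, but "very deep" networks for single-image super-resolution typically stack many small convolution layers and learn the high-frequency residual that is added back to the low-resolution input. The following is a minimal, untrained sketch of that residual-learning idea in NumPy; the single-channel layers, 3x3 kernels, depth, and random weights are all illustrative assumptions, not the authors' actual model (a real network would have many feature channels and trained weights).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv_same(img, kernel):
    """'Same' 2-D convolution: zero-pad so the output matches the input size."""
    k = kernel.shape[0]
    padded = np.pad(img, k // 2)
    windows = sliding_window_view(padded, (k, k))      # (H, W, k, k)
    return np.einsum('ijkl,kl->ij', windows, kernel)   # (H, W)

rng = np.random.default_rng(0)

def deep_residual_sr(lr_img, depth=5, ksize=3):
    """Sketch of a deep residual super-resolution pass (random, untrained weights).

    The stacked conv+ReLU layers predict a high-frequency residual, which is
    added back to the low-resolution topography input.
    """
    x = lr_img
    for _ in range(depth):
        w = rng.normal(scale=0.1, size=(ksize, ksize))
        x = np.maximum(conv_same(x, w), 0.0)           # conv + ReLU
    residual = conv_same(x, rng.normal(scale=0.1, size=(ksize, ksize)))
    return lr_img + residual

# Toy 16x16 "topography" patch standing in for an upsampled low-res AFM image.
lr = rng.random((16, 16))
hr = deep_residual_sr(lr)
print(hr.shape)
```

Because the layers use "same" padding, the enhanced image keeps the spatial dimensions of its input; in practice the low-resolution image would first be interpolated to the target grid, and the weights would be learned from pairs of low- and high-resolution AFM scans.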