2018
DOI: 10.1016/j.ultramic.2018.03.004

A deep convolutional neural network to analyze position averaged convergent beam electron diffraction patterns

Abstract: We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the r…
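The first calibration stage described in the abstract — locating the zero-order disk's center and size — can be illustrated with a classical baseline that the networks automate. The sketch below is not the paper's method: it builds a synthetic noise-free disk and recovers its center from the intensity centroid and its radius from the disk area; the image shape, center, and radius are all assumed values for illustration.

```python
import numpy as np

def synthetic_pacbed(shape=(128, 128), center=(70.0, 60.0), radius=20.0):
    # Toy stand-in for a PACBED pattern: a uniform zero-order disk.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(yy - center[0], xx - center[1])
    return (r <= radius).astype(float)

def calibrate_disk(img):
    # Intensity centroid gives the disk center; for a filled disk,
    # area = pi * r^2, so the total intensity gives the radius.
    total = img.sum()
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    cy = (yy * img).sum() / total
    cx = (xx * img).sum() / total
    radius = np.sqrt(total / np.pi)
    return cy, cx, radius

img = synthetic_pacbed()
cy, cx, r = calibrate_disk(img)
print(round(cy), round(cx), round(r))  # recovers the assumed center and radius
```

A centroid-and-area estimate like this breaks down under noise, overlapping disks, or nonuniform intensity, which is the regime where a trained CNN regressor offers robustness.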

Cited by 55 publications (41 citation statements); References 38 publications.
“…17,[19][20][21][22][23] Recent advances in computer vision and machine learning (ML) have been introduced into the electron microscopic field for image analysis, such as detection and segmentation in medical images, [24][25][26] unsupervised statistical representation of microstructure images, 27 clustering or classification in various materials images, [28][29][30] chemical identification and local transformation tracking at the atomic level in scanning transmission electron microscopic (STEM) images, 31 and analysis of electron diffraction patterns. 32 To our knowledge, there is very limited published research work on using computer vision or deep learning approaches to recognize locations and extract quantitative information for nanoscale and mesoscale defects in microscopic images. Ziatdinov et al 31 recently reported their work on using deep learning to detect location of the atomic species and type of lattice defects for atomically resolved images, but their approach did not detect defects at a larger scale (>1 nm) or extract quantitative shape or contour information of the defects.…”
Section: Introduction
confidence: 99%
“…Direct consequences of neglecting low-loss inelastic scattering could be expected for the accuracy of thickness determination based on PACBED pattern matching [24,31], especially for thicker samples and, in particular, on the training of neural networks to automate such determinations [32,33]. Since intensity redistribution due to low-loss inelastic scattering is mainly from the bright field towards the low angle dark field, the quantitative analysis of images acquired from these angular regimes are expected to greatly benefit from including the effect in simulations.…”
Section: Discussion
confidence: 99%
“…Here, we present a deep convolutional neural network (CNN) for predicting the optimal convergence angle for STEM imaging with the Strehl ratio. CNNs have been shown to have remarkable performance at image analysis tasks (LeCun et al, 2015), such as classification (Krizhevsky et al, 2012), encoding and decoding (Badrinarayanan et al, 2017), and regression (Mahendran et al, 2017;Lathuiliére et al, 2019); including recent applications in electron microscopy (Xu & LeBeau, 2018;Ede & Beanland, 2019;Zhang et al, 2020). The Strehl ratio is an accurate and efficiently calculated metric for probe quality that straightforwardly incorporates into an objective function for the optimization of the STEM probe.…”
Section: Introduction
confidence: 99%