2020
DOI: 10.1587/elex.17.20190739
Acceleration of nonequilibrium Green’s function simulation for nanoscale FETs by applying convolutional neural network model

Abstract: We investigate the application of convolutional neural networks (CNNs) to accelerate quantum mechanical transport simulations (based on the nonequilibrium Green's function (NEGF) method) of double-gate MOSFETs. In particular, given a potential distribution as input data, we implement a convolutional autoencoder to train and predict the carrier density and local quantum capacitance distributions. The results indicate that the use of a single trained CNN model in the NEGF self-consistent calculation along with…
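
The abstract describes an encoder-decoder CNN that maps a device potential distribution to carrier density and local quantum capacitance maps. The sketch below illustrates such a convolutional autoencoder in PyTorch; the 64 x 64 grid, channel counts, layer depths, and two-channel output are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a convolutional autoencoder that maps a potential map
# to carrier-density and quantum-capacitance maps. Grid size (64 x 64),
# channel counts, and layer depths are illustrative assumptions only.
import torch
import torch.nn as nn

class PotentialToDensityCAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the potential distribution into a latent feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        # Decoder: expand back to the device grid; two output channels
        # (carrier density, local quantum capacitance)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 2, kernel_size=4, stride=2, padding=1),   # 32x32 -> 64x64
        )

    def forward(self, potential):
        return self.decoder(self.encoder(potential))

# Example usage: one potential map on a hypothetical 64 x 64 mesh
model = PotentialToDensityCAE()
phi = torch.randn(1, 1, 64, 64)        # normalized potential distribution
rho, cq = model(phi).unbind(dim=1)     # predicted density and quantum capacitance
```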

Cited by 6 publications (2 citation statements)
References 25 publications
“…There are two main steps when developing a deep learning model for device simulations, which in sequence are training data generation and model training. The training data may be collected systematically by self-consistent simulations [1][2][3][4][5][6][7][8][9][10][11] or measurements [12,13]. In this work, we choose the former approach based on a TCAD simulator [18] to easily access the physical quantities.…”
Section: Methods (mentioning)
Confidence: 99%
“…The data category for the two models is summarized in Table 1. Compared to prior literature [1][2][3][4][5][6][7][8][9][10][11][12][13], thanks to the unique advantage of U-Net [17], hundreds or even thousands of data are not needed to train the model.…”
Section: Methods (mentioning)
Confidence: 99%
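
The citing work quoted above outlines a two-step workflow: systematic training-data generation by self-consistent simulations, followed by model training. The sketch below illustrates such a pipeline under stated assumptions; `run_negf_sim` is a hypothetical stand-in for a self-consistent NEGF/TCAD solver, and the bias sweep, stand-in CNN, loss, and optimizer settings are illustrative rather than taken from the paper.

```python
# Minimal sketch of the two-step workflow: (1) generate training pairs with a
# self-consistent device simulator, (2) train the CNN on those pairs.
import torch
import torch.nn as nn

def run_negf_sim(vg, vd, grid=64):
    # Hypothetical placeholder for the self-consistent NEGF/TCAD solver:
    # returns potential, carrier density, and quantum capacitance maps.
    phi = torch.randn(grid, grid)
    return phi, torch.tanh(phi), torch.sigmoid(phi)

def generate_dataset(bias_points):
    """Step 1: collect (potential, density/capacitance) pairs from the solver."""
    inputs, targets = [], []
    for vg, vd in bias_points:
        phi, rho, cq = run_negf_sim(vg, vd)
        inputs.append(phi.unsqueeze(0))            # (1, H, W) potential map
        targets.append(torch.stack([rho, cq]))     # (2, H, W) target maps
    return torch.stack(inputs), torch.stack(targets)

def train(model, inputs, targets, epochs=200, lr=1e-3):
    """Step 2: fit the CNN to reproduce the simulator outputs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
    return model

# Example usage with a small stand-in CNN (the autoencoder sketch above
# could be substituted here) over an illustrative gate/drain bias sweep.
bias_points = [(0.1 * vg, 0.1 * vd) for vg in range(5) for vd in range(5)]
X, Y = generate_dataset(bias_points)
cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 2, 3, padding=1))
train(cnn, X, Y)
```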