2021
DOI: 10.1117/1.jmm.20.4.043201

Accurate prediction of EUV lithographic images and 3D mask effects using generative networks

Cited by 14 publications (13 citation statements) | References 17 publications
“…They are classified into three types depending on the target: near-field amplitude on the mask, [12][13][14][15] far-field diffraction amplitude at the pupil, 16 or image intensity on the wafer. 17,18 In our model, 16 a CNN is used to predict the far-field diffraction amplitude from the input mask pattern. One of the issues with DNNs is that they require a huge amount of data.…”
Section: Introduction (mentioning)
confidence: 99%
“…They are classified into three types depending on the target: near-field amplitude on the mask, [16][17][18][19] far-field diffraction amplitude at the pupil, 20 and image intensity on the wafer. 21,22 In our model, 20 a CNN is used to predict the far-field diffraction amplitude from the input mask pattern. Although training the CNN takes a very long time (more than 1 day), the prediction time is very short: 0.05 s for a 256 nm × 256 nm area.…”
Section: Introduction (mentioning)
confidence: 99%
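The physical baseline that such a CNN refines can be stated directly: in the thin-mask (Kirchhoff) approximation, the far-field diffraction amplitude is simply the 2D Fourier transform of the mask transmission, and the network's job is to predict the deviations from it caused by 3D mask (M3D) effects. A minimal NumPy sketch of that baseline, with an illustrative 256 × 256 mask layout (the pattern and sizes are assumptions, not taken from the cited model):

```python
import numpy as np

def thin_mask_far_field(mask: np.ndarray) -> np.ndarray:
    """Kirchhoff (thin-mask) approximation: the far-field diffraction
    amplitude is the 2D Fourier transform of the mask transmission.
    A trained CNN, as in the cited work, would predict corrections to
    this amplitude that capture 3D mask (M3D) effects."""
    # fftshift moves the zeroth diffraction order to the array center
    return np.fft.fftshift(np.fft.fft2(mask, norm="ortho"))

# Illustrative binary mask: clear field with one opaque absorber line
mask = np.ones((256, 256), dtype=complex)
mask[:, 120:136] = 0.0  # hypothetical 16-pixel-wide absorber

amp = thin_mask_far_field(mask)
```

With `norm="ortho"`, the amplitude is unitary, so the total diffracted energy equals the energy transmitted by the mask (Parseval's theorem), and the zeroth order at the array center is the mean transmission scaled by the array side length.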
“…Recently, many attempts have been made to simulate the M3D effects using deep neural networks, such as convolutional neural networks (CNNs) or generative adversarial networks. They are classified into three types depending on the target: near-field amplitude on the mask, 16–19 far-field diffraction amplitude at the pupil, 20 and image intensity on the wafer. 21,22 In our model, 20 a CNN is used to predict the far-field diffraction amplitude from the input mask pattern.…”
Section: Introduction (mentioning)
confidence: 99%
“…Because the far-field amplitudes are described in momentum (wave-vector) space and the source position corresponds to the incident momentum in Koehler illumination, our model naturally parametrizes the source-position dependence of the amplitude. The third model 13,14 uses the image intensity on the wafer as the target of the DNN. This model is more straightforward than the other models because the image intensity is used directly in the subsequent resist simulation.…”
Section: Introduction (mentioning)
confidence: 99%
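The momentum-space relationship quoted above can be sketched concretely: under Koehler illumination each source point tilts the incident wave, which shifts the mask spectrum relative to the projection pupil, and the wafer-side image intensity is the incoherent sum over source points (Abbe imaging). A minimal scalar sketch, where the pupil radius, grid size, and source points are illustrative assumptions rather than values from the cited papers:

```python
import numpy as np

def abbe_image(mask, pupil_radius, source_points):
    """Incoherent (Abbe) sum over source points in Koehler illumination.
    Each source point corresponds to an incident momentum, realized here
    by shifting the pupil aperture across the fixed mask spectrum."""
    n = mask.shape[0]
    fx = np.fft.fftfreq(n)                       # spatial frequencies (cycles/pixel)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    spectrum = np.fft.fft2(mask)
    intensity = np.zeros((n, n))
    for sx, sy in source_points:
        # pupil centered on the incident momentum of this source point
        pupil = (FX - sx) ** 2 + (FY - sy) ** 2 <= pupil_radius ** 2
        field = np.fft.ifft2(spectrum * pupil)   # low-pass-filtered mask field
        intensity += np.abs(field) ** 2          # incoherent addition
    return intensity / len(source_points)

# Hypothetical 64x64 layout (one bright line) and a three-point source
mask = np.zeros((64, 64))
mask[:, 24:40] = 1.0
img = abbe_image(mask, pupil_radius=0.2,
                 source_points=[(0.0, 0.0), (0.05, 0.0), (-0.05, 0.0)])
```

The resulting `img` is exactly the quantity the third class of models targets: a non-negative intensity map that can feed a resist simulation directly, without a separate step to recombine amplitudes.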