2015
DOI: 10.1364/ao.54.005291

Robust hybrid source and mask optimization to lithography source blur and flare

Abstract: As a promising resolution enhancement technique, pixelated source and mask optimization (SMO) methods have been introduced to further improve lithography at the 45 nm node and beyond. Recent studies of the impact of scanner errors on SMO revealed that source blur and flare seriously degrade the lithographic performance of the optimal source and mask obtained from SMO. However, current SMO methods do not provide an effective way to compensate for the impact of …

Cited by 23 publications (3 citation statements) · References 34 publications (35 reference statements)
“…In this section, we develop a gradient-based SNPCO method to comprehensively optimize the source, NA, and process parameters for mitigating the impacts of mask absorber errors on lithographic imaging performance. The SGD algorithm is well suited to solving the statistical optimization problem described in Equations (10) and (11) [26,27]. Thus, we use the SGD algorithm to optimize the source.…”
Section: SNPCO Methods for DUV Lithography System
confidence: 99%
“…Table 1 illustrates the optimization flows of the SGD and MBGD algorithms, respectively. In our previous works [25,26], SGD computed the gradient from a single training sample β_{i,k} in each iteration, which made it fast. However, because of the wider sampling range of the defocus disturbances, the SGD algorithm could not guarantee that each iteration moved in the globally optimal direction.…”
Section: DRSMO Optimization Algorithm
confidence: 99%
“…We have previously used the SGD algorithm to solve multi-objective SMO [25,26], and it converged quickly. However, because of the wider sampling range of defocus in the DRSMO framework, it was hard for the SGD algorithm to find the globally optimal direction when each iteration was driven by only one sample gradient from the training set.…”
Section: Comparison of SGD and MBGD Algorithms for DRSMO
confidence: 99%
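The trade-off these excerpts describe — a single-sample SGD step is cheap but noisy, while a mini-batch (MBGD) step averages the gradient over several samples for a steadier descent direction — can be sketched on a toy least-squares objective. This is an illustrative stand-in, not the cited papers' lithography cost function; all names and parameters below are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the cited papers' implementation): compare a
# single-sample SGD step with a mini-batch (MBGD) step on a toy
# least-squares objective f(w) = mean_i (x_i . w - y_i)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
true_w = np.array([1.5, -0.5])
y = X @ true_w  # noiseless targets, so the optimum is exactly true_w

def grad(w, idx):
    """Gradient of the squared error averaged over the samples in idx."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

def train(batch_size, steps=500, lr=0.05):
    """Plain (mini-batch) stochastic gradient descent from w = 0."""
    w = np.zeros(2)
    for _ in range(steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        w -= lr * grad(w, idx)  # SGD when batch_size == 1, MBGD otherwise
    return w

w_sgd = train(batch_size=1)    # fast per step, but noisy direction
w_mbgd = train(batch_size=16)  # averaged gradient, steadier direction
```

With a wider spread among per-sample gradients (analogous to the wider defocus sampling range in the excerpts), the single-sample direction fluctuates more from step to step, which is the motivation the citing papers give for moving from SGD to MBGD.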