2023
DOI: 10.1117/1.jmm.22.3.031208
Training procedure for scanning electron microscope 3D surface reconstruction using unsupervised domain adaptation with simulated data

Abstract: Accurate metrology techniques for semiconductor devices are indispensable for controlling the manufacturing process. For instance, the dimensions of a transistor's current channel (fin) are an important indicator of the device's performance regarding switching voltages and parasitic capacitances. We expand upon traditional 2D analysis by utilizing computer vision techniques for full-surface reconstruction. We propose a data-driven approach that predicts the dimensions, height and width (CD) values, of fin-like s…
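The abstract describes unsupervised domain adaptation from labeled simulated SEM images to unlabeled real ones. The paper's exact method is not reproduced here, but a common realization of this idea is domain-adversarial training with a gradient reversal layer (DANN-style). The following is a minimal PyTorch sketch under that assumption; all module names, network sizes, and hyperparameters are illustrative.

```python
# Minimal DANN-style sketch for sim-to-real adaptation (illustrative,
# not the paper's actual architecture): a shared encoder is trained to
# regress fin dimensions on simulated data while a gradient-reversed
# domain classifier pushes features to be domain-invariant.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

features = nn.Sequential(  # shared encoder for SEM image patches
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
regressor = nn.Linear(32, 2)    # predicts (height, CD width)
domain_clf = nn.Linear(32, 1)   # simulated-vs-real discriminator

params = (list(features.parameters()) + list(regressor.parameters())
          + list(domain_clf.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

def train_step(sim_img, sim_dims, real_img, lamb=0.1):
    """sim_img carries ground-truth dimensions; real_img is unlabeled."""
    f_sim, f_real = features(sim_img), features(real_img)
    task_loss = mse(regressor(f_sim), sim_dims)   # supervised on sim only
    f_all = torch.cat([f_sim, f_real])
    d_logits = domain_clf(GradReverse.apply(f_all, lamb))
    d_labels = torch.cat([torch.zeros(len(f_sim), 1),   # 0 = simulated
                          torch.ones(len(f_real), 1)])  # 1 = real
    # the reversal layer makes the encoder *confuse* the discriminator
    loss = task_loss + bce(d_logits, d_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# usage with random stand-in data
sim = torch.randn(8, 1, 64, 64); dims = torch.rand(8, 2)
real = torch.randn(8, 1, 64, 64)
print(train_step(sim, dims, real))
```

The design point of the reversal layer is that a single backward pass optimizes two opposing objectives: the discriminator learns to separate domains while the encoder, receiving the negated gradient, learns features on which that separation fails.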

Cited by 3 publications (1 citation statement) · References 45 publications
“…[175] Presents a model that works upon generative adversarial networks for semi-supervised learning: can improve performance by learning from both labeled and unlabeled data.
[176] Introduces a new method for learning invariant features for domain adaptation: can improve performance by learning representations that are invariant to domain shift.
[177] Introduces a new method for domain adaptation that combines invariant representations and self-supervised learning: can improve performance by learning representations that are invariant to domain shift and using self-supervised learning to learn features that are transferable to new domains.
[178] Presents a model that works upon meta-learning for transferable features: can improve performance by learning features that are transferable to new domains.
[179] Presents a model that works upon multi-task learning and attention: can improve performance by using multi-task learning and attention to learn representations that are invariant to domain shift.
[180] Presents a model that works upon multi-task learning for few-shot image classification: can improve performance by learning multiple tasks with few examples.
[181] Presents a model that works upon patch-level self-supervised learning: can improve performance by using patch-level self-supervised learning to learn features that are invariant to domain shift.
[182] Presents a model that works upon self-supervised contrastive learning: can improve performance by learning representations that are invariant to domain shift.
[183] Presents a model that works upon self-supervised contrastive learning: can improve performance by learning representations that are invariant to domain shift.
[184] Presents a model that works upon self-supervised learning and synthetic data: can improve performance by using self-supervised learning to learn features that are invariant to domain shift and by generating synthetic data that are like the target domain.
[185] Presents a model that works upon synthetic data and domain-invariant feature aggregation…”
Section: Paper Contribution Advantages
Mentioning, confidence: 99%
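Several of the quoted entries ([181]–[183]) obtain domain-invariant features via self-supervised contrastive learning. A standard objective for that family of methods is the NT-Xent (normalized temperature-scaled cross-entropy) loss used in SimCLR-style training; the sketch below is illustrative and not taken from any of the cited papers, with the temperature and embedding size chosen arbitrarily.

```python
# Minimal NT-Xent contrastive loss: embeddings of two augmented views
# of the same image are pulled together, all other pairs pushed apart.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                 # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float('-inf'))         # exclude self-similarity
    # the positive for sample i is its other view at index (i + n) mod 2n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# usage with random stand-in embeddings
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```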