1985
DOI: 10.1117/12.946575
Digital Autofocus Using Scene Content

Cited by 3 publications (3 citation statements)
References 0 publications
“…and Multi-Head Attention (Vaswani et al, 2017). This model uses the swish activation function (Ramachandran et al, 2017) with GLU (Shazeer, 2020) for the MLP, also commonly referred to as SwiGLU. For normalization, we use RMSNorm (Zhang & Sennrich, 2019) since it is computationally more efficient than LayerNorm (Ba et al, 2016).…”
Section: Model Architecture
confidence: 99%
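The quoted passage names two components, the SwiGLU feed-forward block (swish activation gated with GLU) and RMSNorm. A minimal NumPy sketch of both, assuming illustrative dimensions and random weights rather than the citing model's actual code:

```python
import numpy as np

def swish(x, beta=1.0):
    # swish(x) = x * sigmoid(beta * x); beta = 1 is the common SiLU case
    return x / (1.0 + np.exp(-beta * x))

def swiglu_mlp(x, w_gate, w_up, w_down):
    # SwiGLU feed-forward: gate the "up" projection with a swish-activated
    # projection, then project back down to the model dimension
    return (swish(x @ w_gate) * (x @ w_up)) @ w_down

def rms_norm(x, gain, eps=1e-6):
    # RMSNorm rescales by the root-mean-square of the features only;
    # unlike LayerNorm there is no mean subtraction and no bias term
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return gain * x / rms

# Illustrative (assumed) sizes: 4 tokens, model width 8, hidden width 32
rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
x = rng.standard_normal((4, d_model))
y = swiglu_mlp(rms_norm(x, np.ones(d_model)),
               rng.standard_normal((d_model, d_ff)),
               rng.standard_normal((d_model, d_ff)),
               rng.standard_normal((d_ff, d_model)))
print(y.shape)  # (4, 8)
```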
“…The 20B code model is trained with learned absolute position embeddings. We use Multi-Query Attention (Shazeer, 2019) during training for efficient downstream inference.…”
Section: B
confidence: 99%
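The passage above mentions Multi-Query Attention, in which all query heads share a single key and value projection so the key/value cache is much smaller at decoding time. A minimal NumPy sketch under assumed toy dimensions (not the 20B model's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_query_attention(x, w_q, w_k, w_v, w_o, n_heads):
    # Per-head queries, but one shared key/value head for all query heads
    n_tokens, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ w_q).reshape(n_tokens, n_heads, d_head)   # (t, h, d)
    k = x @ w_k                                        # (t, d), shared
    v = x @ w_v                                        # (t, d), shared
    scores = np.einsum('thd,sd->hts', q, k) / np.sqrt(d_head)
    causal = np.triu(np.ones((n_tokens, n_tokens), dtype=bool), 1)
    scores = np.where(causal, -1e9, scores)            # mask future positions
    out = np.einsum('hts,sd->thd', softmax(scores), v) # (t, h, d)
    return out.reshape(n_tokens, d_model) @ w_o

# Illustrative (assumed) sizes: 5 tokens, model width 16, 4 query heads
rng = np.random.default_rng(0)
t, d_model, n_heads = 5, 16, 4
d_head = d_model // n_heads
x = rng.standard_normal((t, d_model))
y = multi_query_attention(x,
                          rng.standard_normal((d_model, d_model)),  # w_q
                          rng.standard_normal((d_model, d_head)),   # w_k
                          rng.standard_normal((d_model, d_head)),   # w_v
                          rng.standard_normal((d_model, d_model)),  # w_o
                          n_heads)
print(y.shape)  # (5, 16)
```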
“…(Shazzer and Harris 1985) have developed monolithic image processing chips to implement their autofocus technique used in FLIR (Forward Looking Infrared) imaging.…”
Section: Image Focusing and Autofocus
confidence: 99%
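The 1985 paper's exact scene-content focus metric is not reproduced in this citation statement, so the sketch below is only a generic gradient-based sharpness score plus a hypothetical lens-position sweep (capture_at is an assumed camera callback, not anything from the cited chips):

```python
import numpy as np

def gradient_focus_score(image):
    # Scene-content sharpness proxy: in-focus frames have stronger local
    # intensity gradients, so a higher score means a sharper image
    gx = np.zeros_like(image, dtype=float)
    gy = np.zeros_like(image, dtype=float)
    gx[:, 1:-1] = image[:, 2:] - image[:, :-2]   # central differences, x
    gy[1:-1, :] = image[2:, :] - image[:-2, :]   # central differences, y
    return float(np.mean(gx**2 + gy**2))

def autofocus(capture_at, lens_positions):
    # Sweep candidate lens positions and keep the one whose captured frame
    # maximizes the focus score; capture_at is a hypothetical callback that
    # returns a 2-D intensity array for a given lens position
    return max(lens_positions, key=lambda p: gradient_focus_score(capture_at(p)))

# Toy check: a crisp vertical edge scores higher than a gentle intensity ramp
sharp = np.zeros((64, 64)); sharp[:, 32:] = 255.0
soft = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
print(gradient_focus_score(sharp) > gradient_focus_score(soft))  # True
```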