2016 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2016.7477589

Combining multiple sources of knowledge in deep CNNs for action recognition

Cited by 144 publications (114 citation statements)
References 14 publications
“…In addition to content and network signals, we incorporate other linguistic cues into our networks. For this we rely on the "late fusion" approach that has been shown to be effective in vision tasks (Karpathy et al., 2014; Park et al., 2016). "Fusion" allows a network to learn a combined representation of multiple input streams.…”
Section: Data
confidence: 99%
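The late-fusion idea quoted above can be sketched as follows. The two streams, class count, scores, and equal averaging weights are illustrative assumptions for this sketch, not details taken from the cited papers:

```python
import numpy as np

def late_fusion(stream_scores, weights=None):
    """Combine per-stream class-score vectors into one prediction.

    Each stream is scored independently; fusion happens only at the
    end, here as a (weighted) average of the score vectors.
    """
    scores = np.stack(stream_scores)            # (n_streams, n_classes)
    if weights is None:
        weights = np.ones(len(scores)) / len(scores)
    return np.tensordot(weights, scores, axes=1)  # (n_classes,)

# Two hypothetical streams (e.g. appearance and motion) over 3 classes.
rgb_scores  = np.array([0.7, 0.2, 0.1])
flow_scores = np.array([0.5, 0.4, 0.1])

fused = late_fusion([rgb_scores, flow_scores])   # [0.6, 0.3, 0.1]
predicted_class = int(np.argmax(fused))          # class 0
```

Because each stream keeps its own representation until the final averaging step, streams can be trained or swapped independently, which is the practical appeal of late fusion over merging inputs up front.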
“…This adds geometric "shape" information to the RGB reflectance values. There are several ways of combining these inputs, such as those presented in (Park et al., 2016). A first method consists in merging the inputs and training the network with four channels.…”
Section: Results
confidence: 99%
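The four-channel early merge described in that excerpt can be sketched as follows; the image size and pixel values are illustrative placeholders, with depth appended to RGB before any network sees the data:

```python
import numpy as np

h, w = 8, 8  # illustrative image size

# Three reflectance channels plus one geometric "shape" channel.
rgb   = np.random.rand(h, w, 3).astype(np.float32)  # RGB reflectance
depth = np.random.rand(h, w, 1).astype(np.float32)  # depth / shape

# Merge into a single four-channel input tensor.
rgbd = np.concatenate([rgb, depth], axis=-1)        # shape (h, w, 4)
```

A network trained this way needs its first convolutional layer widened to accept four input channels, which is the main cost of this early-merge approach compared with fusing separate streams later.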
“…For image processing, the Scale-Invariant Feature Transform (SIFT) [11] is a well-known image feature matching algorithm. Another method, based on Deep Learning [12], can be used for object recognition and has many other applications, such as human action analysis [13]. Moreover, Deep Learning can also be applied to some objects it has not been trained on.…”
Section: Detection of Neighboring Objects and Interobject
confidence: 99%
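SIFT-style descriptor matching, as mentioned in the excerpt above, is commonly done with Lowe's ratio test: a match is kept only if the nearest descriptor is clearly closer than the second nearest. The descriptors below are small numeric stand-ins for illustration, not real SIFT output:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbor matching with Lowe's ratio test.

    For each descriptor in desc_a, find its two nearest neighbors in
    desc_b and keep the match only if the best distance is below
    `ratio` times the second-best distance.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# One query descriptor close to desc_b[0] and far from the others.
matches = ratio_test_matches(
    np.array([[0.1, 0.0]]),
    np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 20.0]]),
)
# matches == [(0, 0)]
```

The ratio test discards ambiguous matches where two candidates are nearly equidistant, which is what makes SIFT matching robust in practice.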