2015 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2015.460

Differential Recurrent Neural Networks for Action Recognition

Abstract: The long short-term memory (LSTM) neural network is capable of processing complex sequential information since it utilizes special gating schemes for learning representations from long input sequences. It has the potential to model any sequential time-series data, where the current hidden state has to be considered in the context of the past hidden states. This property makes LSTM an ideal choice to learn the complex dynamics of various actions. Unfortunately, the conventional LSTMs do not consider the impact …
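To make the "special gating schemes" mentioned in the abstract concrete, here is a minimal NumPy sketch of a single standard LSTM step. This is the textbook formulation, not code from the paper; all parameter names and shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One standard LSTM step (illustrative, not the paper's code).

    W (4H x D), U (4H x H) and b (4H,) hold the stacked parameters of
    the input (i), forget (f), output (o) and candidate (g) transforms.
    """
    z = W @ x_t + U @ h_prev + b                 # stacked pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o) # gates squashed into (0, 1)
    g = np.tanh(g)                               # candidate memory content
    c_t = f * c_prev + i * g                     # gates rewrite the cell memory
    h_t = o * np.tanh(c_t)                       # exposed hidden state
    return h_t, c_t
```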

Cited by 436 publications (239 citation statements)
References 37 publications
“…This can be achieved by developing a novel scheme that can adjust combining weights based on time-dependent reasoning and self-adjustment. It is also shown that our LSTM ensemble forecasting can effectively model highly nonlinear statistical dependencies, since the LSTM gating mechanisms enable quickly modifying the memory content of the cells and the internal dynamics in a cooperative way [20,21]. In addition, our complexity analysis demonstrates that our LSTM ensemble achieves a runtime competitive with an approach that uses only a single LSTM network.…”
Section: Discussion
confidence: 92%
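One plausible way to realize the "time-dependent combining weights" described in the statement above is to weight each ensemble member's forecast by the inverse of its recent error. The sketch below is our own assumption, not the scheme from [20,21]; the names `member_forecasts` and `recent_abs_errors` are hypothetical.

```python
import numpy as np

def combine_forecasts(member_forecasts, recent_abs_errors, eps=1e-8):
    """Combine forecasts from M LSTM members with time-dependent weights.

    member_forecasts  : (M,) array, one forecast per LSTM member
    recent_abs_errors : (M,) array, e.g. a moving average of |error|
                        recomputed at every step, so weights self-adjust
    """
    inv_err = 1.0 / (recent_abs_errors + eps)
    weights = inv_err / inv_err.sum()            # normalized combining weights
    return float(np.dot(weights, member_forecasts))
```

Because the error estimates are refreshed at each forecasting step, the weights drift toward the currently best-performing member, which is one reading of the "self-adjustment" described in the quote.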
“…As such, LSTM gates provide an effective mechanism for quickly modifying the memory content of the cells and the internal dynamics in a cooperative way [20,21]. In this sense, the LSTM may have a superior ability to learn nonlinear statistical dependencies of real-world time series data in comparison to conventional forecasting models.…”
Section: Mathematical Problems In Engineering
confidence: 99%
“…There is a large body of research that focuses on recognizing and tracking human motion. The latest developments in deep features and convolutional neural network architectures achieve impressive performance; however, these require large amounts of data [7][8][9][10]. These methods tackle the recognition of actions performed at the center of the camera plane, except for [11], which uses static cameras to analyze actions.…”
Section: Introduction
confidence: 99%
“…− : k is moving to the left-hand side of RL; 0 : k is moving along RL or not moving at all; + : k is moving to the right-hand side of RL. C4, side constraint: movement of l with respect to RL at time t1, defined analogously to C3…”
Section: Qualitative Trajectory Representation
confidence: 99%
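For readers unfamiliar with the side constraint quoted above, the left/along/right test it describes can be implemented with a 2-D cross product between the reference line RL (from k to l) and k's instantaneous velocity. This is only an illustrative reading of the constraint; the exact sign convention (which sign means "left") may differ in the cited formalism.

```python
def side_constraint(k_pos, l_pos, k_vel, tol=1e-9):
    """Classify movement of k relative to the reference line RL from k to l.

    Returns '-' (left-hand side of RL), '0' (along RL or not moving),
    or '+' (right-hand side of RL).  Sign convention is illustrative.
    """
    rl_x, rl_y = l_pos[0] - k_pos[0], l_pos[1] - k_pos[1]   # direction of RL
    cross = rl_x * k_vel[1] - rl_y * k_vel[0]               # 2-D cross product
    if abs(cross) <= tol:
        return '0'            # moving along RL, or not moving at all
    return '-' if cross > 0 else '+'
```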
al. [4] use densely sampled HOG3D features, which are directly classified by an extended LSTM network that considers the spatio-temporal dynamics of salient motion patterns. They use the Derivative of States ∂s_t/∂t, where s_t is the state of the memory cell at time t, to gate the information flow into and out of the memory cell.…”
Section: Related Work
confidence: 99%
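The derivative-of-states gating described in the statement above can be sketched roughly as follows: the gates are driven by a discrete approximation of ∂s/∂t (here simply s_{t-1} − s_{t-2}) instead of the hidden state alone. This is a simplified first-order reading of the idea; the published dRNN also uses higher orders of the Derivative of States and its exact parameterization may differ, so treat the code as an assumption-laden illustration with made-up parameter names.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def drnn_step(x_t, h_prev, s_prev, s_prev2, params):
    """One simplified differential-LSTM step (illustrative sketch).

    The input/forget/output gates are functions of x_t and the
    first-order Derivative of States ds = s_prev - s_prev2 (a discrete
    stand-in for ds/dt); the candidate content is computed as usual.
    `params` maps illustrative names to weight matrices / bias vectors.
    """
    ds = s_prev - s_prev2                                          # first-order DoS
    gates = sigmoid(params["Wg"] @ x_t + params["Dg"] @ ds + params["bg"])
    i, f, o = np.split(gates, 3)                                   # gates informed by DoS
    g = np.tanh(params["Wc"] @ x_t + params["Uc"] @ h_prev + params["bc"])
    s_t = f * s_prev + i * g                                       # memory cell update
    h_t = o * np.tanh(s_t)
    return h_t, s_t
```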