2019
DOI: 10.48550/arxiv.1901.10503
Preprint

Time-Space tradeoff in deep learning models for crop classification on satellite multi-spectral image time series

Cited by 1 publication (2 citation statements)
References 0 publications

“…Here, recurrent neural networks [45], such as Long Short-Term Memory (LSTM) [46] or Gated Recurrent Units (GRU) [47], were commonly used in encoder-decoder architectures [48] for the generative prediction of words. In Earth observation, the encoder model was utilized for change detection [41,42], land cover [20], and crop type identification [39,44,49,50]. To utilize both spatial and temporal features from the time series, combinations of convolutional layers with recurrent layers [43,42] and convolutional-recurrent networks [51] have been explored and comprehensively compared in [49].…”
Section: Related Work
confidence: 99%
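The recurrent encoder referenced in the statement above can be sketched in plain NumPy: an LSTM cell [46] is stepped over a per-pixel multi-spectral time series and its final hidden state serves as a fixed-length encoding. The shapes, parameter names (W, U, b), and the 24-date/13-band example are illustrative assumptions, not the configuration of any cited work.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(x, W, U, b):
    # x: (T, D) time series of D spectral bands over T acquisition dates.
    # W: (4H, D), U: (4H, H), b: (4H,) -- gate parameters stacked as i, f, o, g.
    H = U.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for x_t in x:
        z = W @ x_t + U @ h + b                    # all four gates in one product
        i, f, o = (sigmoid(z[k*H:(k+1)*H]) for k in range(3))
        g = np.tanh(z[3*H:4*H])
        c = f * c + i * g                          # update cell memory
        h = o * np.tanh(c)                         # emit hidden state
    return h                                       # fixed-length encoding of the series

rng = np.random.default_rng(0)
T, D, H = 24, 13, 8                                # hypothetical: 24 dates, 13 bands
x = rng.standard_normal((T, D))
W = rng.standard_normal((4*H, D)) * 0.1
U = rng.standard_normal((4*H, H)) * 0.1
b = np.zeros(4*H)
enc = lstm_encode(x, W, U, b)
print(enc.shape)  # (8,)
```

A classifier head on top of `enc` would then predict the crop type; a GRU [47] differs only in its gating but fills the same encoder role.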
“…In Earth observation, the encoder model was utilized for change detection [41,42], land cover [20], and crop type identification [39,44,49,50]. To utilize both spatial and temporal features from the time series, combinations of convolutional layers with recurrent layers [43,42] and convolutional-recurrent networks [51] have been explored and comprehensively compared in [49]. These recurrent neural network encoders can be augmented by soft-attention mechanisms, originally developed for machine translation [52], as tested in [44].…”
Section: Related Work
confidence: 99%
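The soft-attention augmentation mentioned in the statement above can be sketched as additive (Bahdanau-style) scoring [52] over the encoder's hidden states: each time step is scored against a query, the scores are softmax-normalized, and the weighted sum yields a context vector. All parameter names and sizes here are hypothetical, for illustration only.

```python
import numpy as np

def soft_attention(hs, q, Wa, Ua, va):
    # hs: (T, H) encoder hidden states; q: (H,) query (e.g. the final state).
    # Additive scoring: score_t = va . tanh(Wa h_t + Ua q).
    scores = np.tanh(hs @ Wa.T + q @ Ua.T) @ va    # (T,)
    a = np.exp(scores - scores.max())
    a = a / a.sum()                                # softmax weights over time steps
    return a @ hs, a                               # context vector, attention weights

rng = np.random.default_rng(1)
T, H, A = 24, 8, 16                                # hypothetical sizes
hs = rng.standard_normal((T, H))
q = rng.standard_normal(H)
Wa = rng.standard_normal((A, H))
Ua = rng.standard_normal((A, H))
va = rng.standard_normal(A)
ctx, a = soft_attention(hs, q, Wa, Ua, va)
print(ctx.shape, round(a.sum(), 6))  # (8,) 1.0
```

For crop classification, the attention weights additionally indicate which acquisition dates the model relied on, which is the interpretability angle tested in [44].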