2019
DOI: 10.1007/978-3-030-20887-5_21

Multi-level Sequence GAN for Group Activity Recognition

Abstract: We propose a novel semi-supervised Multi-Level Sequential Generative Adversarial Network (MLS-GAN) architecture for group activity recognition. In contrast to previous works, which utilise manually annotated individual human action predictions, we allow the model to learn its own internal representations to discover pertinent sub-activities that aid the final group activity recognition task. The generator is fed with person-level and scene-level features that are mapped temporally through LSTM networks. Acti…
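
To make the architectural description concrete, below is a minimal PyTorch sketch of a generator that maps person-level and scene-level features through separate LSTMs before fusing them for group activity classification, as the abstract describes. This is not the authors' code: the feature dimensions, the mean-pooling over people, and the classifier head are assumptions for illustration only.

```python
# Sketch only: person-level and scene-level feature streams are modelled
# temporally with LSTMs and fused for group activity prediction.
import torch
import torch.nn as nn

class MLSGANGenerator(nn.Module):
    def __init__(self, person_dim=512, scene_dim=512, hidden_dim=256, num_activities=8):
        super().__init__()
        self.person_lstm = nn.LSTM(person_dim, hidden_dim, batch_first=True)
        self.scene_lstm = nn.LSTM(scene_dim, hidden_dim, batch_first=True)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_activities)

    def forward(self, person_feats, scene_feats):
        # person_feats: (batch, num_people, time, person_dim)
        # scene_feats:  (batch, time, scene_dim)
        b, p, t, d = person_feats.shape
        person_seq = person_feats.view(b * p, t, d)
        _, (person_h, _) = self.person_lstm(person_seq)       # (1, b*p, hidden)
        person_h = person_h[-1].view(b, p, -1).mean(dim=1)    # pool over people (assumption)
        _, (scene_h, _) = self.scene_lstm(scene_feats)        # (1, b, hidden)
        fused = torch.relu(self.fuse(torch.cat([person_h, scene_h[-1]], dim=-1)))
        return self.classifier(fused)

# Usage with random tensors standing in for extracted features.
gen = MLSGANGenerator()
person = torch.randn(2, 6, 10, 512)   # batch of 2 scenes, 6 people, 10 frames
scene = torch.randn(2, 10, 512)
logits = gen(person, scene)           # (2, num_activities)
```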

Cited by 17 publications (14 citation statements)
References 26 publications
“…Sometimes, different group activities share the same local motion, which may cause misclassifications. To reduce the influence of confused motions, Kim et al. [73] proposed a discriminative group context feature (DGCF) that takes prominent sub-events into consideration. Two types of features, individual activity and sub-event features, are extracted to construct group activity representations.…”
Section: Hierarchical Temporal Modeling
confidence: 99%
“…We exploit the task-specific loss-function learning capability of the GAN framework to automatically learn a custom loss function [30]-[33] that facilitates these two tasks. The merit of this approach is that it allows us to learn a highly nonlinear loss, in contrast to a linear loss like cross entropy, to optimally capture the underlying semantics of the process.…”
Section: The Proposed Approach
confidence: 99%
“…This custom loss function learning capability of GANs is highly beneficial in the multi-task learning setting, as it allows us to learn a custom loss function that accounts for all the tasks at hand rather than simply adding together the loss functions for the individual tasks. For instance, in [33] the authors illustrate the utility of GANs for video-based action prediction while synthesising future frame representations, and the authors in [34] showed that this process is highly beneficial for mitigating the errors due to variation of view angles in gait recognition through view synthesis.…”
Section: The Proposed Approach
confidence: 99%
“…In our work we utilise a conditional GAN [12,13,36] for deep future representation generation. A limited number … The model receives RGB and optical flow streams as the visual and temporal representations of the given scene.…”
Section: Previous Work
confidence: 99%
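
A minimal sketch of the conditioning scheme mentioned here, where a generator is conditioned on both an RGB stream and an optical flow stream: the encoders, feature dimensions, and fusion by concatenation are illustrative assumptions rather than the citing paper's actual design.

```python
# Sketch: condition a generator on two input streams (RGB + optical flow)
# plus a noise vector, and produce a future scene representation.
import torch
import torch.nn as nn

class TwoStreamConditionalGenerator(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=256, noise_dim=64, out_dim=512):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.flow_enc = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim + noise_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, rgb, flow, noise):
        # Concatenate the two stream encodings with noise as the condition.
        cond = torch.cat([self.rgb_enc(rgb), self.flow_enc(flow), noise], dim=-1)
        return self.head(cond)

g = TwoStreamConditionalGenerator()
future_repr = g(torch.randn(4, 2048), torch.randn(4, 2048), torch.randn(4, 64))
```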