2005
DOI: 10.1109/tpami.2005.49
Automatic analysis of multimodal group actions in meetings

Abstract: This paper investigates the recognition of group actions in meetings. A framework is employed in which group actions result from the interactions of the individual participants. The group actions are modeled using different HMM-based approaches, where the observations are provided by a set of audiovisual features monitoring the actions of individuals. Experiments demonstrate the importance of taking interactions into account in modeling the group actions. It is also shown that the visual modality contains useful…
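The abstract describes decoding group actions from a sequence of audiovisual observations with an HMM. A minimal sketch of that idea is Viterbi decoding over a toy discrete HMM; the state names, observation symbols, and all probabilities below are invented for illustration and do not come from the paper:

```python
import math

# Hypothetical group-action states and quantized audiovisual observations;
# the paper's actual action lexicon and feature set differ.
states = ["discussion", "monologue", "presentation"]

# Toy model parameters (assumed for illustration only).
start = {"discussion": 0.5, "monologue": 0.3, "presentation": 0.2}
trans = {
    "discussion":   {"discussion": 0.8, "monologue": 0.15, "presentation": 0.05},
    "monologue":    {"discussion": 0.2, "monologue": 0.7,  "presentation": 0.1},
    "presentation": {"discussion": 0.1, "monologue": 0.2,  "presentation": 0.7},
}
emit = {
    "discussion":   {"many_speakers": 0.7, "one_speaker": 0.25, "slide_change": 0.05},
    "monologue":    {"many_speakers": 0.1, "one_speaker": 0.85, "slide_change": 0.05},
    "presentation": {"many_speakers": 0.1, "one_speaker": 0.4,  "slide_change": 0.5},
}

def viterbi(observations):
    """Most likely state sequence under the toy HMM (log domain)."""
    # Initialize with start and emission log-probabilities.
    V = [{s: math.log(start[s]) + math.log(emit[s][observations[0]])
          for s in states}]
    back = []
    for o in observations[1:]:
        scores, ptr = {}, {}
        for s in states:
            # Best predecessor for state s at this time step.
            best_prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            scores[s] = (V[-1][best_prev] + math.log(trans[best_prev][s])
                         + math.log(emit[s][o]))
            ptr[s] = best_prev
        V.append(scores)
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

seq = ["one_speaker", "one_speaker", "slide_change", "many_speakers"]
print(viterbi(seq))
```

The paper models group actions from multiple participants' features rather than a single observation stream, so this single-chain sketch only illustrates the decoding step, not the multimodal fusion.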


Cited by 289 publications (215 citation statements)
References 47 publications
“…A few studies also included audio features [25], [26], [91] and found significant advantage in doing so. Multimodal multisource event detection is likely to receive more attention in the near future due to many emerging applications and the availability of large multimodal collections.…”
Section: Discussion
confidence: 99%
“…In the literature, the modelling of human interactions has reached a deep maturity, both under a pure visual perspective [71,51], and also considering other modalities [60,4,46,15,45,14,10]. Anyway, like in the case of the definition of behavior, there is the lack of a common, formal definition of interaction.…”
Section: Surveillance and Monitoring
confidence: 99%
“…One of the goals of the AMI and M4 projects is to find these kinds of group events in order to detect meeting structures (see e.g. [19]). To detect these highest level components a combination of the first and second level of interpreted information is needed.…”
Section: Meeting Modelling: Overview
confidence: 99%
“…a presentation activity when observing a pattern of certain NCAs, CAs and physical state combined with certain argumentative structures in the data. The training can be done using machine learning techniques like Hidden Markov Models [19] or Dynamic Bayesian Networks, looking for statistical correlations.…”
Section: The Fourth Level
confidence: 99%