2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2014.44
Multi-source Multi-modal Activity Recognition in Aerial Video Surveillance

Abstract: Recognizing activities in wide aerial/overhead imagery remains a challenging problem due in part to low-resolution video and cluttered scenes with a large number of moving objects. In the context of this research, we deal with two unsynchronized data sources collected in real-world operating scenarios: full-motion videos (FMV) and analyst call-outs (ACO) in the form of chat messages (voice-to-text) made by a human watching the streamed FMV from an aerial platform. We present a multi-source multi-modal activity…

Cited by 22 publications (7 citation statements)
References 28 publications
“…To test the subquery evaluation performance of the execution engine and its related metadata structures, the evaluation scenario consisted of executing queries simultaneously for a fixed duration. The evaluation detected objects, classified their type (by size), and discerned activities (Hammoud, Sahin, Blasch, et al., 2014). The LVC-DMBS could also be applied to image collections, such as for plume detection (Ravela, 2013).…”
Section: Results
Mentioning confidence: 99%
“…The result was an effort in controlled natural language, which could then leverage the many developments in natural language processing (NLP) [35] for situation awareness [36]. Subsequent efforts included applications for the control of full-motion video platforms [37,38]. Recent efforts have enabled the next generation of analysts [39].…”
Section: Battle Management Language (BML)
Mentioning confidence: 99%
“…The technology should augment a human analyst in pursuing their work-domain objectives. As shown in Figure 3, information fusion includes low-level information fusion (LLIF) of Level 1 Object Assessment tracking and classification [20,21]. LLIF can draw on video, radar, or text data, among other sources, for intelligence services supporting explicit decision making.…”
Section: Activity Analysis
Mentioning confidence: 99%