2016
DOI: 10.1109/tmm.2015.2496372

Multi-Perspective Cost-Sensitive Context-Aware Multi-Instance Sparse Coding and Its Application to Sensitive Video Recognition

Abstract: With the development of video-sharing websites, P2P, micro-blogs, mobile WAP websites, and so on, sensitive videos can be accessed more easily. Effective sensitive video recognition is necessary for web content security. Among web sensitive videos, this paper focuses on violent and horror videos. Based on color emotion and color harmony theories, we extract visual emotional features from videos. A video is viewed as a bag, and each shot in the video is represented by a key frame, which is treated as an i…

Cited by 18 publications (13 citation statements)
References 46 publications (49 reference statements)

“…Readers are referred to the Discussion Section for more details on parameter settings. We compared the proposed TCbGA algorithm to several state-of-the-art feature selection algorithms, including DEMOFS [34], MOEA/D [35], MDisABC [36], W-QEISS [37], SB-ELM [38], HPSO-LS [39], MoDE [40], GASNCM [41], GCACO [42], GCNC [43], UFSACO [44], FFW-DGC [45], QIFS [46], FSFWISIW [47], BALO [48], MI-SC [49], VMBACO [50], HDBPSO [51], and bGWO [52]. Table 3 shows the mean and standard deviation of the classification accuracy obtained by applying our algorithm to each dataset 25 times, alongside the performance of the other algorithms as reported in the literature.…”
Section: Experiments and Results (mentioning)
confidence: 99%
“…SVM) to solve the MIL problem, is a very effective MIL algorithm. Examples include DD-SVM [18], Multi-Instance Learning via Embedded Instance Selection (MILES) [19], LSA-MIL [20], MI-J-SC [21], EC-SVM [22], MILDM [23], and miFV/miVLAD [24]. Unfortunately, few existing feature representation methods can effectively describe the semantics of images (bags), so it is difficult to adapt well-known SIL methods to solve MIL problems.…”
Section: B. MIL Related Algorithms (mentioning)
confidence: 99%
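The embedded-instance-selection idea mentioned above (e.g., MILES [19]) maps each bag to a fixed-length vector so that a standard single-instance classifier such as an SVM can be trained on bags. A minimal sketch of that embedding, assuming a Gaussian similarity and a given set of instance prototypes (the function name, `gamma`, and the prototype choice are illustrative, not the exact formulation of [19]):

```python
import numpy as np

def embed_bags(bags, prototypes, gamma=1.0):
    """Embed each bag (a set of instance vectors) into a fixed-length
    vector via its maximum Gaussian similarity to each prototype,
    in the spirit of MILES-style embedded instance selection.
    NOTE: illustrative sketch, not the exact formulation of MILES."""
    embedded = []
    for bag in bags:
        # pairwise squared distances: (n_instances, n_prototypes)
        d2 = ((bag[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
        sim = np.exp(-gamma * d2)
        # keep, per prototype, the similarity of the closest instance
        embedded.append(sim.max(axis=0))
    return np.vstack(embedded)

rng = np.random.default_rng(0)
bags = [rng.normal(size=(5, 3)), rng.normal(size=(8, 3))]  # two bags
prototypes = rng.normal(size=(4, 3))                       # four prototypes
X = embed_bags(bags, prototypes)
print(X.shape)  # one fixed-length row per bag: (2, 4)
```

The resulting matrix `X` can be fed directly to any single-instance learner, which is what makes this family of methods attractive when bag-level labels are all that is available.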
“…To represent audio content, pitch, zero-crossing rate (ZCR), Mel-frequency cepstral coefficients (MFCC), and energy are the most popular features [2,43,82,132,170]. In particular, the MFCC [14,49,74,147,176,177] and its ∆MFCC are frequently used to characterize emotions in video clips, while the derivatives and statistics (min, max, mean) of MFCC or ∆MFCC are also widely explored. As for pitch, [87] shows that the pitch of a sound is closely associated with certain emotions, such as anger with a higher pitch and sadness with a lower standard deviation of pitch.…”
Section: Content-related Features (mentioning)
confidence: 99%
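Two of the low-level audio features named above, ZCR and short-time energy, are simple enough to compute directly. A minimal numpy sketch on a synthetic 440 Hz tone (MFCC is omitted because it needs a filterbank/DCT pipeline from a DSP library; frame size and sample rate here are arbitrary choices, not values from the cited papers):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign differs."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return np.mean(frame ** 2)

# Frame a 1 s synthetic 440 Hz tone sampled at 8 kHz into 256-sample frames.
sr = 8000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)
frames = signal[: len(signal) // 256 * 256].reshape(-1, 256)

zcr = np.array([zero_crossing_rate(f) for f in frames])
energy = np.array([short_time_energy(f) for f in frames])
# A 440 Hz tone crosses zero 880 times/s -> ZCR near 880/8000 = 0.11;
# a unit-amplitude sine has mean squared amplitude near 0.5.
print(zcr.mean(), energy.mean())
```

Per-frame statistics like these (and their min/max/mean over a clip, as the statement describes) are what typically enter the emotion classifier, rather than the raw waveform.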
“…For example, the EmoBase10 feature set, which describes audio cues, is computed in [147]. Hu et al. [49] proposed combining audio and visual features to model the contextual structure of the key frames selected from a video. This produces a form of multi-instance sparse coding (MI-SC) for subsequent analysis.…”
Section: Content-related Features (mentioning)
confidence: 99%
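At the core of a sparse-coding representation like MI-SC, each key-frame feature vector is coded against a dictionary by solving an L1-regularized least-squares problem. A minimal sketch using plain ISTA (iterative soft-thresholding) in numpy; this illustrates only the generic sparse-coding step, not the paper's context-aware, cost-sensitive formulation, and all names and parameters here are assumptions:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.
    D: (d, k) dictionary with unit-norm atoms; x: (d,) feature vector.
    Illustrative sketch only; MI-SC [49] adds context/cost terms."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)             # normalize dictionary atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 10]         # signal built from two atoms
a = ista_sparse_code(D, x, lam=0.05)
print(np.argsort(-np.abs(a))[:2])          # indices of the largest coefficients
```

In a bag-of-shots setting, each key frame's feature vector would be coded this way, and the per-instance codes pooled into a bag-level representation for the final classifier.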