2018
DOI: 10.1016/j.procs.2018.10.454

Sentiment Extraction from Naturalistic Video

Cited by 7 publications (3 citation statements)
References 9 publications
“…In this calculation, we use the base 2 logarithmic formula, so that the statistical result is always between 0 and 1, so the similarity of words can be easily expressed. If the PMI value is greater than 0, it can indicate that the words have a correlation; if the PMI value is equal to 0, the words are independent of each other; if the PMI value is less than 0, the words have no correlation [12].…”
Section: Actual SO-PMI Algorithm Operation
confidence: 99%
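As an illustration of the PMI interpretation quoted above, here is a minimal sketch (not code from the cited paper) that computes base-2 pointwise mutual information from co-occurrence counts; all counts and names are invented for the example.

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Base-2 pointwise mutual information for a word pair.

    count_xy: number of windows in which both words co-occur
    count_x, count_y: individual occurrence counts of each word
    total: total number of windows observed in the corpus
    """
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical counts: a positive PMI suggests the two words are correlated,
# zero suggests independence, and a negative value suggests no correlation.
print(pmi(count_xy=40, count_x=100, count_y=200, total=10000))  # positive -> correlated
print(pmi(count_xy=2, count_x=100, count_y=200, total=10000))   # 0.0 -> independent under these counts
```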
“…If the same feature points can be detected in an image at different scales, then those feature points have a certain scale invariance. To find points with scale invariance, the key is to first construct the scale space [7,8]; this requires a well-chosen kernel function, namely the Gaussian kernel. We can therefore convolve the original image with a two-dimensional Gaussian kernel function to construct the scale space…”
Section: Scale Invariant Feature Points in Scale Space
confidence: 99%
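As a sketch of the scale-space construction described in the statement above (repeated Gaussian convolution at increasing scales; not the cited authors' code), the snippet below builds a small Gaussian scale space with SciPy. The input image, number of levels, and sigma schedule are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_scale_space(image, num_levels=5, sigma0=1.6, k=2 ** 0.5):
    """Convolve the image with 2-D Gaussian kernels of increasing sigma.

    Each level is L(x, y, sigma) = G(x, y, sigma) * I(x, y); feature points
    that persist across levels are candidates for scale-invariant key points.
    """
    return [gaussian_filter(image, sigma=sigma0 * k ** i) for i in range(num_levels)]

# Hypothetical grayscale image
image = np.random.default_rng(0).random((256, 256)).astype(np.float32)
scale_space = build_scale_space(image)
print(len(scale_space), scale_space[0].shape)  # 5 (256, 256)
```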
“…'s multimodal emotion recognition model extracts features from text and videos using a convolutional neural network architecture, incorporating all three modalities: visual, audio, and text. Radhakrishnan et al. (2018) proposed a new approach for sentiment analysis from audio clips that uses a hybrid of the Keyword Spotting System. The Maximum Entropy classifier was designed to integrate audio and text processing into a single system, and this model outperformed other conventional classifiers.…”
Section: Related Work
confidence: 99%
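The fusion approach attributed to Radhakrishnan et al. (2018) above combines audio and text processing through a Maximum Entropy classifier. A maximum-entropy model over feature vectors is equivalent to (multinomial) logistic regression, so a minimal sketch of that kind of fusion, with entirely hypothetical features and labels, could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-clip features: audio statistics and keyword-spotting scores
# concatenated into a single feature vector (all values invented for illustration).
rng = np.random.default_rng(0)
audio_feats = rng.normal(size=(200, 16))   # e.g. prosodic / spectral statistics
text_feats = rng.normal(size=(200, 32))    # e.g. keyword-spotting confidence scores
X = np.hstack([audio_feats, text_feats])
y = rng.integers(0, 2, size=200)           # sentiment labels (0 = negative, 1 = positive)

# A maximum-entropy classifier corresponds to (multinomial) logistic regression.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```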