2014 IEEE 6th International Conference on Awareness Science and Technology (iCAST)
DOI: 10.1109/icawst.2014.6981842
SVR-based outlier detection and its application to hotel ranking

Abstract: With the rapid advance in information technology, more and more information exchange platforms have appeared, on which people can freely exchange information. However, not all of this information is reliable. To make correct decisions, it is necessary to detect and remove unreliable information. The main purpose of this study is to improve the reliability of hotel ranking by detecting and deleting outlier online reviews. For this purpose, we design a support vector regression (SVR) based outlier detector using exi…
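The abstract cuts off mid-sentence, but the stated approach is an SVR-based outlier detector for online hotel reviews. As an illustration only (not the authors' implementation), the sketch below shows one common way such a detector can be set up with scikit-learn: fit an SVR to review features against the overall rating and flag reviews whose prediction residuals are unusually large. The feature set, kernel settings, and the residual threshold k are assumptions, not details taken from the paper.

```python
# Minimal sketch of SVR-based outlier detection for review ratings.
# Assumptions (not from the paper): reviews are described by numeric
# feature vectors, the target is the overall rating, and a review is
# flagged as an outlier when its absolute residual exceeds k standard
# deviations of the residual distribution.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR


def detect_outlier_reviews(X, y, k=2.5):
    """Flag reviews whose ratings deviate strongly from an SVR fit.

    X : (n_reviews, n_features) array of review features (hypothetical).
    y : (n_reviews,) array of overall ratings.
    k : residual threshold in standard deviations (assumed value).
    Returns a boolean mask where True marks a suspected outlier review.
    """
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, y)
    residuals = y - model.predict(X)
    threshold = k * residuals.std()
    return np.abs(residuals) > threshold


# Usage on toy data: 100 consistent reviews plus 5 implausible ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(105, 4))
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(scale=0.1, size=105)
y[:5] += 4.0  # inject outlier ratings
mask = detect_outlier_reviews(X, y)
print(f"Flagged {mask.sum()} of {len(y)} reviews as outliers")
```

Flagged reviews would then be removed before computing hotel rankings, which is the cleaning-then-ranking pipeline the abstract describes.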

Cited by 11 publications (2 citation statements)
References 10 publications
“…A plethora of ML techniques have been applied in this area, but the 34 per cent of the analyzed research works on this topic expose the use of SVM algorithms. In [130][131][132]134,135,138,141,142] and [147] SVMs are used to extract sentiment data from reviews, blogs and various platforms, and in [74,77,85,90,95,133,137,140] and [98] SVMs are used along with N-Gram kernels, various naive Bayes techniques, ANNs, association rules, maximum entropy classifiers, C4.5 decision trees and random forests, JRIP and SVR.…”
Section: Sentiment Analysis and Satisfaction Degree
confidence: 99%
“…Manual removal of error data is difficult and impossible to accomplish in MBD due to the massive volume. Common data cleaning methods can alleviate the dirty data problem to some extent by training support vector regression (SVR) classifiers [73], multiple linear regression models [74], autoencoder [75], Bayesian methods [76][77][78], unsupervised methods [79], or information-theoretic models [79].…”
Section: Raw Data
confidence: 99%