Proceedings of the 23rd International Conference on Enterprise Information Systems 2021
DOI: 10.5220/0010453502160223
Application of Classification and Word Embedding Techniques to Evaluate Tourists’ Hotel-revisit Intention

Cited by 5 publications (3 citation statements)
References 0 publications
“…The study done by Christodoulou et al [12] solved the problem of revisiting tourists from the point of view of big data analysis. The applied method used topic modeling, word embedding, XGBoost, and random forest classification algorithms.…”
Section: Literature Review
confidence: 99%
“…However, such a revisit-intention dataset is not available; hence, a sentiment-labelled dataset of hotel reviews was used as a proxy for revisit intention, given that satisfaction is a prerequisite of revisit intention. The use of such labelled data is motivated by our previous work on revisit intention [96] and by similar research that uses sentiment as a proxy for revisit [26]. The rationale is that tourists who write extremely positive reviews are likely to want to revisit a hotel, so their eWOM may include words pointing to that intention.…”
Section: Intention Filtering
confidence: 99%
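The proxy-labelling idea in the statement above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the field names, the 5-star threshold, and the example reviews are all assumptions made for demonstration.

```python
# Hedged sketch of sentiment-as-proxy labelling: since no revisit-intention
# labels exist, an extremely positive review is treated as implying revisit
# intention (label 1), and anything else as not (label 0).
reviews = [
    {"text": "Amazing stay, will definitely come back", "stars": 5},
    {"text": "Average room, noisy street", "stars": 3},
    {"text": "Terrible service, never again", "stars": 1},
]

def proxy_revisit_label(review, positive_threshold=5):
    # 1 = assumed revisit intention (extremely positive), else 0.
    # The threshold is an illustrative assumption, not from the paper.
    return 1 if review["stars"] >= positive_threshold else 0

labels = [proxy_revisit_label(r) for r in reviews]
```

In practice the positive class would then be mined for intention-bearing words ("come back", "return") rather than used directly as ground truth.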
“…Word embedding is a continuous vector representation of a word that encodes its meaning, such that words that are closer in the vector space are expected to be similar in meaning. Using word embeddings as additional features improves performance on many NLP tasks, including text classification [22][23][24][25][26][27][28][29][30]. Different machine learning algorithms can be trained to derive these vectors, such as Word2Vec [31], FastText [32], and GloVe [33].…”
Section: Literature Review
confidence: 99%
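The two properties described above — nearby vectors mean similar words, and word vectors can be pooled into classifier features — can be shown with a toy sketch. The 3-dimensional hand-made vectors below are assumptions for illustration only; real Word2Vec, FastText, or GloVe embeddings are learned from large corpora and have hundreds of dimensions.

```python
# Toy word-embedding sketch: cosine similarity for word relatedness,
# and mean-pooling of word vectors into a fixed-length document feature.
import math

EMBEDDINGS = {
    "hotel":  [0.9, 0.1, 0.0],
    "resort": [0.8, 0.2, 0.1],   # placed close to "hotel" on purpose
    "flight": [0.1, 0.9, 0.2],   # placed far from "hotel" on purpose
}

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def doc_vector(tokens):
    """Average the embeddings of known tokens into one feature vector,
    the simplest way to feed embeddings to a downstream classifier."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

sim_close = cosine(EMBEDDINGS["hotel"], EMBEDDINGS["resort"])
sim_far = cosine(EMBEDDINGS["hotel"], EMBEDDINGS["flight"])
features = doc_vector(["hotel", "flight"])
```

Here `sim_close` exceeds `sim_far`, mirroring the claim that semantically related words sit closer in the vector space; `features` is the kind of fixed-length input an XGBoost or random-forest classifier could consume.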