Proceedings of the 2020 Conference on Human Information Interaction and Retrieval (CHIIR 2020)
DOI: 10.1145/3343413.3377987
Subjective Search Intent Predictions using Customer Reviews

Cited by 7 publications (3 citation statements); References 11 publications.
“…Note that we pre-process all intents by removing stopwords and conducting lemmatization. The results show that our task allows to collect a diverse set of intent patterns; in comparison, related studies such as [8] only consider two intent patterns, activity (verb) and audience (noun). A summary of the statistics of our collected datasets is given in Table 2.…”
Section: Resulting Datasets
Mentioning confidence: 90%
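The pre-processing step quoted above (stopword removal plus lemmatization) can be sketched as follows. The NLTK-based pipeline, whitespace tokenization, and the example intent phrase are illustrative assumptions, not the cited paper's actual code.

```python
# Minimal sketch of stopword removal + lemmatization for intent phrases.
# NLTK, whitespace tokenization, and the sample phrase are assumptions.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

STOPWORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess_intent(intent: str) -> list[str]:
    """Lowercase, split on whitespace, drop stopwords, and lemmatize."""
    tokens = intent.lower().split()
    return [lemmatizer.lemmatize(t) for t in tokens
            if t.isalpha() and t not in STOPWORDS]

print(preprocess_intent("shoes for running a marathon"))
# e.g. ['shoe', 'running', 'marathon']
```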
“…We sample 2000 sessions for each domain with lengths from 2 to 10 considering that (1) most session lengths are in the range of [2, 10] (see Figure 3(a)); and that (2) longer sessions may contain noisy items and increase the difficulty of annotation. To account for the imbalance of the session length, we adopt the stratified sampling strategy: for each domain, we divide the sessions into 4 groups based on their lengths – [2, 3], [4, 5], [6, 7], [8, 9, 10] – and then sample 500 sessions from each group. As a result, we obtain 2000 sessions with balanced length distribution from each domain.…”
Section: Crowdsourced Annotation Task
Mentioning confidence: 99%
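The length-stratified sampling described in this excerpt could be sketched roughly as below; the session representation (a list of item IDs per session), the bucket boundaries, and the fixed seed are assumptions made for illustration.

```python
# Rough sketch of length-stratified session sampling: bucket sessions by
# length and draw 500 per bucket, per domain. Data layout is assumed.
import random

LENGTH_BUCKETS = [(2, 3), (4, 5), (6, 7), (8, 10)]  # inclusive length ranges
PER_BUCKET = 500

def stratified_sample(sessions, seed=0):
    """sessions: list of item-ID lists for one domain; returns ~2000 sessions
    with a balanced length distribution."""
    rng = random.Random(seed)
    sampled = []
    for lo, hi in LENGTH_BUCKETS:
        bucket = [s for s in sessions if lo <= len(s) <= hi]
        sampled.extend(rng.sample(bucket, min(PER_BUCKET, len(bucket))))
    return sampled
```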
“…Transactional queries are identified by CURL in [10]. Authors in [11] trained a classifier to label queries with the intents extracted from user reviews. In [12] query-specific features, such as bag-of-words, length, recognized named entity, noun phrase, question and so on, are exploited to build three multi-class classifiers.…”
Section: Related Work
Mentioning confidence: 99%
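As a rough illustration of the query-intent classifiers summarized in this excerpt, a bag-of-words multi-class setup could look like the following scikit-learn sketch; the toy training data and the choice of logistic regression are assumptions, not the cited papers' actual models or features.

```python
# Illustrative bag-of-words, multi-class query-intent classifier (scikit-learn).
# The tiny training set and logistic regression are placeholder assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = ["buy running shoes online", "what is a marathon", "nike official store"]
intents = ["transactional", "informational", "navigational"]

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(queries, intents)

print(clf.predict(["buy hiking boots"]))  # likely ['transactional']
```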