2017
DOI: 10.1007/978-3-319-57454-7_1
Convolutional Bi-directional LSTM for Detecting Inappropriate Query Suggestions in Web Search

Cited by 11 publications (9 citation statements)
References 14 publications
“…This framework combines a CNN and a double-layer LSTM for 3D shape segmentation, and can output the edge image of each defined view. Harish et al. [45] used a model combining a CNN and a BiLSTM to automatically identify inappropriate query suggestions; its performance exceeded that of multiple benchmark models trained on the same data set. The model used in this study builds on the above research and is further improved according to the research needs.…”
Section: LSTM, BiLSTM and C-BiLSTM
confidence: 99%
“…Working definition. For our discussion here, to incorporate aspects of problematic suggestions mentioned by prior work, e.g., (Diakopoulos 2013b; Cheung 2015; Yenala, Chinnakotla, and Goyal 2017; Elers 2014), we follow (Olteanu, Diaz, and Kazai 2020) and broadly consider problematic any suggestion that may be unexpectedly offensive, discriminatory, or biased, or that may promote deceit, misinformation, or content that is in some other way harmful (including adult, violent, or suicidal content). Problematic suggestions may reinforce stereotypes, or may nudge users towards harmful or questionable patterns of behaviour.…”
Section: Characterization
confidence: 99%
“…Understanding which suggestions should be construed as problematic and how to efficiently detect them also requires examining possible dimensions of problematic suggestions such as their content (e.g., what type of content or topics are more likely to be perceived as problematic?) (Olteanu, Diaz, and Kazai 2020; Miller and Record 2017; Yenala, Chinnakotla, and Goyal 2017), targets (e.g., who or what is more likely to be referenced in problematic queries?) (Olteanu, Diaz, and Kazai 2020; Olteanu, Castillo, Boy, et al. 2018; UN Women 2013), structure (e.g., are problematic queries likely to be written in a certain way?)…”
Section: Dimensions of Problematic Suggestions
confidence: 99%
“…The problem of detecting offensive utterances in conversations is wrought with challenges, such as handling natural-language ambiguity, rampant spelling mistakes and variations of abusive and offensive terms, and disambiguating against context and other entity names, such as pop songs that often contain abusive terms (Chen et al., 2012). For this task, we experimented with several approaches and found Ruuh's current neural Bi-directional LSTM based model (Yenala et al., 2017) to perform the best.…”
Section: Detecting Offensive Conversations
confidence: 99%