Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)
DOI: 10.18653/v1/n19-1136

Joint Multi-Label Attention Networks for Social Text Annotation

Abstract: We propose a novel attention network for document annotation with user-generated tags. The network is designed to mirror human reading and annotation behaviour: users typically digest the title first to form a rough idea of the topic, and then read the content of the document. Prior research shows that title metadata strongly influences social annotation. To better exploit this information, we design a framework that separates the title from the content of a document and applies a t…
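The two-branch design described in the abstract (title and content encoded separately, each with its own attention) can be sketched as follows. This is a minimal illustration only, assuming PyTorch, bidirectional GRU encoders, and simple per-branch attention; the class and parameter names are hypothetical and not the authors' exact JMAN architecture.

```python
import torch
import torch.nn as nn

class TwoBranchAttentionTagger(nn.Module):
    """Sketch: encode title and content separately, attend over each,
    then predict user tags as independent sigmoid outputs."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.title_gru = nn.GRU(embed_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
        self.content_gru = nn.GRU(embed_dim, hidden_dim,
                                  batch_first=True, bidirectional=True)
        self.title_attn = nn.Linear(2 * hidden_dim, 1)
        self.content_attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(4 * hidden_dim, num_tags)

    def _attend(self, states, attn_layer):
        # softmax over time steps, then a weighted sum of hidden states
        weights = torch.softmax(attn_layer(states), dim=1)
        return (weights * states).sum(dim=1)

    def forward(self, title_ids, content_ids):
        t_states, _ = self.title_gru(self.embed(title_ids))
        c_states, _ = self.content_gru(self.embed(content_ids))
        t_vec = self._attend(t_states, self.title_attn)
        c_vec = self._attend(c_states, self.content_attn)
        doc = torch.cat([t_vec, c_vec], dim=-1)
        # multi-label output: one independent probability per tag
        return torch.sigmoid(self.classifier(doc))
```

For multi-label training, the sigmoid outputs would be scored with binary cross-entropy against a multi-hot tag vector, one bit per candidate tag.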

Cited by 10 publications (4 citation statements)
References 23 publications (25 reference statements)
“…The JMAN model, as shown in Fig. 2, is an extension of our previous work [44]. Instead of feeding the whole text sequence X into the neural network as in HAN [12], [17], JMAN takes as inputs the title, x_t, and the content (in this work, the abstract of a document is treated as the content), x_a, and processes them separately, where…”
Section: B. Overall Design (mentioning)
confidence: 99%
“…Multi-Task Learning (MTL) is a learning paradigm in machine learning whose purpose is to exploit useful information contributed by multiple related tasks to improve the generalization performance of all of them [11]. MTL has shown a significant advantage over single-task learning because of its ability to facilitate knowledge sharing between tasks [31], e.g., in bioinformatics and health informatics [32], [33], web applications [34], [35] and remote sensing [36], [37], [38].…”
Section: Multi-Task Learning in Human Activity Recognition (mentioning)
confidence: 99%
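The hard-parameter-sharing form of MTL described in this statement (a shared representation feeding several task-specific heads, so every task contributes to the shared encoder) can be sketched minimally as below; the module and dimension names are hypothetical, not from any of the cited systems.

```python
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Sketch of hard parameter sharing: one shared encoder,
    one output head per related task."""

    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, d) for d in task_out_dims)

    def forward(self, x):
        shared = self.encoder(x)  # representation shared by all tasks
        return [head(shared) for head in self.heads]
```

Training would sum the per-task losses, so the shared encoder receives gradient signal from every task; this is the knowledge-sharing effect the quoted passage refers to.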
“…Recently, the attention mechanism has achieved great success in many NLP tasks [34], [35], [36], [37], [38], [39], [40], [41], [42]. In the task of (T)ABSA, the attention mechanism can help models effectively distinguish the sentiment polarities of different aspects in the same sentence [9], [10], [11], [43], [44], [45], [46], [47], [48], [49].…”
Section: Introduction (mentioning)
confidence: 99%
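As a rough illustration of how attention lets a model weight the same sentence differently per aspect, here is a minimal dot-product attention sketch; the shapes and names are hypothetical, and the cited (T)ABSA models use more elaborate scoring functions.

```python
import torch

def aspect_attention(hidden, aspect):
    # hidden: (T, d) token states; aspect: (d,) aspect embedding
    scores = torch.softmax(hidden @ aspect, dim=0)  # one weight per token
    return scores @ hidden                          # aspect-specific sentence vector
```

Running this with two different aspect vectors over the same hidden states yields two different sentence representations, which is how one sentence can receive a different polarity prediction per aspect.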