IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
DOI: 10.1109/ijcnn.2001.938396
Part-of-speech tagging with recurrent neural networks



Cited by 17 publications (5 citation statements)
References 13 publications
“…PoS tag may be very coarse (e.g: Ve "Verb") or very fine (e.g: VePiMaPlFsSj "Verb, Imperfect, Masculine, Plural, First Person, Subjunctive"), depending on the task or application [11]. Since the main aim of AMT system is to produce a tagged corpus, tags were developed with a good level of granularity, inflectional features were added to each tag, that satisfied what linguists and NLP developers need.…”
Section: Experimental Results and Analysis
confidence: 99%
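The coarse-versus-fine tag distinction in the statement above can be illustrated with a small sketch. This is a hypothetical decomposition, not code from the cited AMT system; the two-letter feature codes and their glosses are taken only from the example given in the quote (`VePiMaPlFsSj` = "Verb, Imperfect, Masculine, Plural, First Person, Subjunctive").

```python
# Hypothetical decomposition of a fine-grained PoS tag into its
# inflectional features. The code-to-gloss mapping below covers only
# the example quoted above and is assumed, not taken from the paper.
FEATURE_GLOSS = {
    "Ve": "Verb", "Pi": "Imperfect", "Ma": "Masculine",
    "Pl": "Plural", "Fs": "First Person", "Sj": "Subjunctive",
}

def decompose_tag(tag):
    """Split a concatenated tag into two-letter codes and gloss each."""
    codes = [tag[i:i + 2] for i in range(0, len(tag), 2)]
    return [FEATURE_GLOSS.get(code, code) for code in codes]

print(decompose_tag("Ve"))            # coarse tag -> ['Verb']
print(decompose_tag("VePiMaPlFsSj"))  # fine-grained tag with inflection
```

A tagset built this way lets one corpus serve both needs: collapsing each tag to its first code recovers the coarse tagging, while the full string keeps the inflectional detail the quote says linguists and NLP developers need.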
“…Pun, Eng or other. The use of the deep neural network for sequence labelling tasks like Part of Speech (POS) tagging and Named Entity Recognition (NER) has been explored by various researchers in recent years (Chen et al , 2010; Chiu and Nichols 2016a, 2016b; Dos Santos and Guimarães, 2015; Dos Santos and Zadrozny, 2014; Perez-Ortiz and Forcada, 2001; Sutskever et al , 2014; Tamburini, 2016). Zazo et al (2016) also employed the techniques in the speech recognition domain.…”
Section: Methods
confidence: 99%
“…Collobert et al [25] built a CNN neural network for multiple sequence labeling tasks, which gives state-of-the-art POS results. Recurrent neural network models have also been used for this task [49,50]. Huang et al [50] combines bidirectional LSTM with a CRF layer, their model is robust and has less dependence on word embedding.…”
Section: Related Work
confidence: 99%
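The recurrent approach the statements above survey can be sketched minimally. The following is an illustrative Elman-style forward pass for tagging, not the architecture of the cited paper or of Huang et al.'s BiLSTM-CRF; the vocabulary size, hidden size, tagset size, and random weights are all assumptions for the sake of a runnable example.

```python
import numpy as np

# Minimal sketch of an Elman-style recurrent tagger forward pass:
# one hidden state carried across the sentence, one tag distribution
# emitted per token. All dimensions and weights are illustrative.
rng = np.random.default_rng(0)
V, H, T = 6, 4, 3  # vocab size, hidden size, number of tags (assumed)

Wxh = rng.normal(scale=0.1, size=(H, V))  # input-to-hidden weights
Whh = rng.normal(scale=0.1, size=(H, H))  # hidden-to-hidden (recurrent)
Why = rng.normal(scale=0.1, size=(T, H))  # hidden-to-tag weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tag_sentence(word_ids):
    """Return one tag probability distribution per token."""
    h = np.zeros(H)
    dists = []
    for w in word_ids:
        x = np.zeros(V)
        x[w] = 1.0                       # one-hot word encoding
        h = np.tanh(Wxh @ x + Whh @ h)   # recurrent state update
        dists.append(softmax(Why @ h))   # per-token tag distribution
    return dists

dists = tag_sentence([0, 3, 5])  # a three-word toy sentence
```

Because the hidden state `h` depends on all preceding words, each token's tag distribution is conditioned on its left context; the bidirectional LSTM variants cited above extend this with right context, and the CRF layer of Huang et al. additionally scores tag transitions across the sequence.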