Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2014
DOI: 10.3115/v1/P14-1028

Max-Margin Tensor Neural Network for Chinese Word Segmentation

Abstract: Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability to alleviate the burden of manual feature engineering. In this paper, we propose a novel neural network model for Chinese word segmentation called Max-Margin Tensor Neural Network (MMTNN). By exploiting tag embeddings and tensor-based transformation, MMTNN has the ability to model complicated interactions between tags and context characters. Furthermore, a new tensor factorization approach i…
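The abstract describes a tensor-based transformation over tag and context-character embeddings, sped up with a low-rank tensor factorization. As an illustration only, below is a minimal NumPy sketch of such a factorized tensor layer; the dimensions, variable names, and exact formulation are assumptions and do not come from the paper.

```python
import numpy as np

# Illustrative sketch of a low-rank factorized tensor layer, in the spirit of
# a tensor-based transformation over concatenated embeddings. All sizes and
# names below are assumptions, not the paper's exact MMTNN formulation.

rng = np.random.default_rng(0)

d_in = 20    # size of the concatenated character + tag embedding vector (assumed)
d_out = 8    # number of hidden units (assumed)
rank = 4     # factorization rank, r << d_in (assumed)

# A full tensor layer would use a tensor V of shape (d_out, d_in, d_in).
# The factorized version stores two thin matrices per output unit instead,
# approximating V[i] ≈ P[i] @ Q[i], which cuts parameters and computation.
P = rng.normal(scale=0.1, size=(d_out, d_in, rank))
Q = rng.normal(scale=0.1, size=(d_out, rank, d_in))
W = rng.normal(scale=0.1, size=(d_out, d_in))   # ordinary linear term
b = np.zeros(d_out)

def tensor_layer(x):
    """Compute h_i = tanh(x^T P[i] Q[i] x + (W x + b)_i) for each unit i."""
    quad = np.array([x @ P[i] @ Q[i] @ x for i in range(d_out)])
    return np.tanh(quad + W @ x + b)

x = rng.normal(size=d_in)      # toy input: a concatenated embedding vector
print(tensor_layer(x).shape)   # -> (8,)
```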


Cited by 149 publications (170 citation statements: 6 supporting, 164 mentioning, 0 contrasting). References 13 publications.
“…Pretraining further improves the performance of all three models, which is consistent with the conclusion of previous work (Pei et al., 2014; Chen and Manning, 2014). Moreover, 1-order-phrase performs better than 1-order-atomic, which shows that phrase embeddings do improve the model.…”
Section: Experiments Results
Citation type: supporting
confidence: 79%
“…It is shown that similar features will have similar embeddings which capture the syntactic and semantic information behind features (Bengio et al., 2003; Collobert et al., 2011; Schwenk et al., 2012; Mikolov et al., 2013; Socher et al., 2013; Pei et al., 2014).…”
Section: Feature Embeddings
Citation type: mentioning
confidence: 99%
“…A comprehensive survey is out of the scope of this paper, but interested readers can refer to Pei et al. (Pei et al., 2014) for a recent literature review of the fields.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
“…Recently, neural network has gained increasing research attention, with highly competitive results being reported for numerous NLP tasks, including word segmentation (Zheng et al., 2013; Pei et al., 2014), POS-tagging (Ma et al., 2014; Plank et al., 2016), and parsing (Chen and Manning, 2014; Dyer et al., 2015; Weiss et al., 2015). On the other hand, the aforementioned methods on heterogeneous annotations are investigated mainly for discrete models.…”
Section: Introduction
Citation type: mentioning
confidence: 99%