Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing, 2018
DOI: 10.18653/v1/w18-2805
Predicting Japanese Word Order in Double Object Constructions

Abstract: This paper presents a statistical model to predict Japanese word order in double object constructions. We employed a Bayesian linear mixed model with manually annotated predicate-argument structure data. The findings from the refined corpus analysis confirmed the effect of the information status of an NP ('given-new ordering') in addition to the effect of 'long-before-short', a general tendency of Japanese word order.
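The paper's model is a Bayesian linear mixed model over annotated predicate-argument features. As a minimal illustrative analogue only (not the authors' implementation, which includes Bayesian random effects), the sketch below fits a plain logistic regression with NumPy on synthetic data, predicting whether the accusative NP precedes the dative NP from two hypothetical predictors: the length difference between the NPs (long-before-short) and whether the accusative NP is discourse-given (given-new ordering).

```python
import numpy as np

# Hypothetical toy data: each row describes one double-object clause.
# Features: length difference (acc-NP length minus dat-NP length, in chunks)
# and whether the accusative NP is discourse-given (1) or new (0).
# Label: 1 if the accusative NP precedes the dative NP in surface order.
rng = np.random.default_rng(0)
n = 500
length_diff = rng.normal(0.0, 2.0, n)            # long-before-short predictor
acc_given = rng.integers(0, 2, n).astype(float)  # given-new predictor
true_logits = 0.8 * length_diff + 1.2 * acc_given - 0.6
labels = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logits))).astype(float)

# Fit a plain logistic regression by gradient descent (no mixed effects;
# the paper's random effects for lexical items are omitted in this sketch).
X = np.column_stack([np.ones(n), length_diff, acc_given])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - labels)) / n

print(w)  # fitted [intercept, length_diff, acc_given]; both slopes positive
```

If both ordering tendencies hold in the data, the two slope coefficients come out positive, which is the kind of effect-strength evidence the paper reads off its fitted model.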

Cited by 3 publications (5 citation statements)
References 13 publications (4 reference statements)
“…We plan to further explore the capability of LMs on other linguistic phenomena related to word order, such as "given-new ordering" (Nakagawa, 2016; Asahara et al., 2018). Since LMs are language-agnostic, analyzing word order in another language with the LM-based method would also be an interesting direction to investigate.…”
Section: Discussion
confidence: 99%
“…The effect of "long-before-short," the tendency for a long constituent to precede a short one, has been reported in several studies (Asahara et al., 2018; Orita, 2017). We checked whether this effect can be captured with the LM-based method. Among the examples used in Section 5.2, we analyzed about 9.5k examples in which the position of the constituent with the largest number of chunks differed between its canonical case order and the order supported by LMs.…”
Section: Long-before-short Effect
confidence: 99%
“…Another common data-driven approach is to train an interpretable model (e.g., Bayesian linear mixed models) to predict the targeted linguistic phenomena and analyze the inner workings of the model (e.g., slope parameters) (Bresnan et al., 2007; Asahara et al., 2018). Through this approach, researchers can obtain richer statistics, such as the strength of each factor's effect on the targeted phenomena, but creating labeled data and designing features for supervised learning can be costly.…”
Section: On Typical Methods For Evaluating Word Order Hypotheses and ...
confidence: 99%
“…[Citation fragment; surrounding non-English text not recovered. Works cited: Yamashita and Kondo 2011; Orita 2017; Hoji 1985; Miyagawa 1997; Matsuoka 2003; Asahara, Nambu, and Sano 2018; Asahara et al. 2018; Maekawa et al. 2014 (BCCWJ); Givón 1976; Erteschik-Shir 1997, 2007; Sorensen, Hohenstein, and Vasishth 2016 (Bayesian linear mixed model)]…”
confidence: 99%