2018
DOI: 10.1007/s10506-018-9225-1
Recurrent neural network-based models for recognizing requisite and effectuation parts in legal texts

Cited by 39 publications (13 citation statements)
References 16 publications
“…For example, “Microsoft” and “Google” have similar semantics, both being tech companies. The words “car” and “journey” are not semantically similar, but these two words are related, and both are related to transportation [ 14 , 15 ].…”
Section: Introduction
confidence: 99%
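The distinction the excerpt draws between semantic *similarity* ("Microsoft"/"Google") and mere *relatedness* ("car"/"journey") is usually measured with cosine similarity over word embeddings. A minimal sketch, assuming hypothetical toy 3-dimensional vectors (not trained embeddings):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hand-made illustrative vectors; real embeddings would come from a
# trained model such as word2vec.
emb = {
    "microsoft": np.array([0.90, 0.80, 0.10]),
    "google":    np.array([0.85, 0.75, 0.15]),
    "car":       np.array([0.10, 0.20, 0.90]),
}

sim_companies = cosine(emb["microsoft"], emb["google"])  # high: similar words
sim_unrelated = cosine(emb["microsoft"], emb["car"])     # low: different domains
```

Under this setup the two tech companies score close to 1.0 while the cross-domain pair scores much lower, which is the behavior the cited work relies on.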
“…Therefore, it is difficult for a language model with averaging and interpolation capabilities to infer logical structures on its own through unsupervised training. To correctly annotate law sentences with many interlocking logical structures, we need to use multilayer annotation [49]. Table 5.2 shows the annotation of the above example.…”
Section: Methods
confidence: 99%
“…While no absolute winner was observed, the study highlights the benefit of using feature weights or network attention weights from these predictive models to identify salient phrases in motions or contentions and case facts. Nguyen et al. (2017, 2018) propose several approaches to train long short-term memory (LSTM) models and conditional random field (CRF) models for the problem of identifying two key portions of legal documents, i.e., requisite and effectuation segments, with evaluation on the Japanese Civil Code and Japanese National Pension Law datasets. In Chalkidis and Kampas (2019), a major contribution is the development of word2vec skip-gram embeddings trained on large legal corpora (mostly from European, UK, and US legislation).…”
Section: Related Work
confidence: 99%
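The requisite/effectuation task described above is a sequence-labeling problem: a trained LSTM-CRF predicts a tag per token marking the condition (requisite) and effect (effectuation) spans. A minimal sketch of the tagging scheme, assuming a hypothetical English paraphrase of a law sentence and hand-assigned BIO tags (a real model would predict them):

```python
# Hypothetical tokenized law sentence; B-R/I-R mark the requisite
# (condition) span, B-E/I-E the effectuation (effect) span, O is outside.
tokens = ["If", "the", "claim", "expires", ",",
          "the", "pension", "is", "forfeited", "."]
tags   = ["B-R", "I-R", "I-R", "I-R", "O",
          "B-E", "I-E", "I-E", "I-E", "O"]

def extract(span_label, tokens, tags):
    """Collect the tokens whose tag ends with the given span label."""
    return [t for t, g in zip(tokens, tags) if g.endswith(span_label)]

requisite = extract("R", tokens, tags)      # the condition part
effectuation = extract("E", tokens, tags)   # the effect part
```

In the cited approaches, the CRF layer on top of the LSTM enforces valid tag transitions (e.g. `I-R` may only follow `B-R` or `I-R`), which plain per-token classification does not guarantee.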