Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.280
Distinguish Confusing Law Articles for Legal Judgment Prediction

Abstract: Legal Judgment Prediction (LJP) is the task of automatically predicting a law case's judgment results given a text describing its facts, which has excellent prospects in judicial assistance systems and convenient services for the public. In practice, confusing charges are frequent, because law cases applicable to similar law articles are easily misjudged. To address this issue, the existing method relies heavily on domain experts, which hinders its application in different law systems. In this paper, we pr…

Cited by 88 publications (50 citation statements)
References 11 publications
“…Then we compare GCI and two integration methods with NN baselines, including LSTM, Bi-LSTM and Bi-LSTM+Att. Bi-LSTM+Att is a common backbone of legal judgement prediction models, while we do not add multitask learning (Luo et al, 2017) and expert knowledge (Xu et al, 2020) for simplicity. Since the prior knowledge learned from pre-trained models may result in unfair comparison, we do not choose the models such as BERT (Devlin et al, 2018) as baselines and backbones to eliminate the influence.…”
Section: Methods
mentioning confidence: 99%
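The statement above names Bi-LSTM+Att as a common LJP backbone: encoder hidden states are pooled into a single fact representation via attention. As a minimal illustrative sketch (function names and the dot-product scoring are assumptions, not the cited models' exact formulation), attention pooling can be written in plain Python:

```python
import math

def attention_pool(hidden_states, query):
    """Softmax-weighted pooling over encoder hidden states.

    hidden_states: list of token vectors (e.g. Bi-LSTM outputs).
    query: a vector used to score each token's relevance.
    Returns the attention-weighted sum of the hidden states.
    """
    # Dot-product relevance score for each token.
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of hidden states -> pooled fact representation.
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states)) for d in range(dim)]
```

With a zero query every token gets equal weight, so the pooled vector is the mean of the hidden states; a query aligned with one token shifts nearly all weight onto it.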
“…Zhong et al (2020) provide interpretable judgements by iteratively questioning and answering. Another line pays attention to confusing charges: manually design discriminative attributes, and Xu et al (2020) use attention mechanisms to highlight differences between similar charges. Using knowledge derived from causal graphs, GCI exhibits a different and interpretable discrimination process.…”
Section: Related Work
mentioning confidence: 99%
“…[7] represents the faction of using discriminative legal attributes for judgment prediction which emphasizes more on the judicial fairness during prediction. [26] stands for the research works based on distinguishing law articles for judgment prediction which has been proved to be effective especially for criminal cases. [2] leverages BERT to focus only on learning good representation of the pure input fact text for judgment prediction.…”
Section: Related Work 2.1 Legal Judgment Prediction
mentioning confidence: 99%
“…For instance, note that an acyclic dependency exists between the sub-tasks, while introduce a multi-perspective forward prediction and backward verification framework to utilize result dependencies between the sub-tasks. Xu et al (2020) use graph-based methods to group statutory articles into communities, and use distinguishable features from each community to attentively encode facts. Since these methods are geared toward utilizing the correlations between the related sub-tasks (and need training data pertaining to all these tasks), we do not consider them as baselines, since our main focus is only charge identification.…”
Section: Related Work
mentioning confidence: 99%
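The statement above describes Xu et al. (2020) grouping statutory articles into communities over a graph before attentively encoding facts. As a simplified sketch (the actual paper learns the grouping via graph distillation; connected components over a hand-supplied similarity graph is a stand-in, and all names here are illustrative), the grouping step might look like:

```python
from collections import deque

def article_communities(similarity_edges, articles):
    """Group law articles into communities, taken here as the connected
    components of an article-similarity graph.

    similarity_edges: list of (article, article) pairs judged similar.
    articles: list of all article identifiers.
    Returns a list of communities, each a sorted list of article ids.
    """
    # Build an undirected adjacency map over the articles.
    adj = {a: set() for a in articles}
    for a, b in similarity_edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, communities = set(), []
    for a in articles:
        if a in seen:
            continue
        # BFS collects every article reachable from `a`.
        comp, queue = [], deque([a])
        seen.add(a)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        communities.append(sorted(comp))
    return communities
```

Each resulting community collects articles that are easily confused with one another; a downstream model can then learn features that discriminate within each community rather than across all articles at once.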