Preprint (2021)
DOI: 10.20944/preprints202111.0378.v1
An Empirical Comparison of Portuguese and Multilingual BERT Models for Auto-Classification of NCM Codes in International Trade

Abstract: The classification of goods involved in international trade in Brazil is based on the Mercosur Common Nomenclature (NCM). The classification of these goods represents a real challenge due to the complexity involved in assigning the correct category codes, especially considering the legal and fiscal implications of misclassification. This work focuses on the training of a classifier based on Bidirectional Encoder Representations from Transformers (BERT) for the tax classification of goods with NCM codes. In par…
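As context for the classification task the abstract describes, the chapter of an NCM code is given by its first two digits, which is the target granularity discussed in the citing works below. The following is a minimal, hypothetical preprocessing sketch (the helper name and the sample descriptions/codes are illustrative, not from the paper) showing how chapter-level labels could be derived from full NCM codes before training a classifier:

```python
def ncm_chapter(ncm_code: str) -> str:
    """Return the 2-digit NCM chapter for a full 8-digit NCM code.

    Accepts codes with or without formatting dots (e.g. "8471.30.12").
    """
    digits = "".join(ch for ch in ncm_code if ch.isdigit())
    if len(digits) != 8:
        raise ValueError(f"expected an 8-digit NCM code, got {ncm_code!r}")
    return digits[:2]


# Illustrative (description, NCM code) pairs; the chapter labels derived
# here would serve as classification targets.
samples = [
    ("portable automatic data-processing machine", "8471.30.12"),
    ("natural honey", "0409.00.00"),
]
labels = [ncm_chapter(code) for _, code in samples]
print(labels)  # ['84', '04']
```

A BERT classifier as described in the abstract would then be fine-tuned on the goods descriptions against these chapter labels.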


Cited by 1 publication (2 citation statements)
References 10 publications
“…Recently, in [de Lima et al 2022] the authors made use of the BERT (Bidirectional Encoder Representations from Transformers) model to train a single classifier that classifies descriptions into their respective MCN chapter code. The authors divided their dataset into 96 chapters and focused the classification only inside a single chapter.…”
Section: Related Work
Confidence: 99%
“…The works of [de Abreu Batista et al 2018] and [de Lima et al 2022] presented a classification of the MCN considering only the first two digits, i.e., within a specific chapter only. [Luppes et al 2019] showed a classification considering only the first 4 digits.…”
Section: Related Work
Confidence: 99%