2017
DOI: 10.4000/ijcol.553
Bi-directional LSTM-CNNs-CRF for Italian Sequence Labeling and Multi-Task Learning

Abstract: In this paper, we propose a Deep Learning architecture for several Italian Natural Language Processing tasks based on a state-of-the-art model that exploits both word- and character-level representations through the combination of bidirectional LSTM, CNN and CRF. This architecture provided state-of-the-art performance in several sequence labeling tasks for the English language. We exploit the same approach for the Italian language and extend it to perform multi-task learning involving PoS-tagging and senti…

Cited by 2 publications (2 citation statements); references 19 publications.
“…In this section, we describe the comparison model methods used. The comparison models include RNN [28], LSTM network [29], bidirectional long short-term memory network (BiLSTM) [52], and Transformer [53] models on time series. There are also variational AE (VAE) [54], denoising AE (DAE) [55], NWP-based [56], and satellite-derived method (SDM) [57] models used by machine learning algorithms for photovoltaic power prediction.…”
Section: Methods
confidence: 99%
“…It allows the network to learn longer-lasting memories. BiLSTM [52] is a combination of a forward LSTM and a backward LSTM. It can model contextual information in both directions.…”
Section: Methods
confidence: 99%
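The statement above describes a BiLSTM as the combination of a forward and a backward LSTM whose per-step states together encode left and right context. A minimal, library-free sketch of that idea follows; the simple tanh update is a hypothetical stand-in for the full LSTM cell equations, and the weights `w_in`/`w_rec` are illustrative constants, not values from the paper.

```python
import math

def recurrent_pass(seq, w_in=0.5, w_rec=0.3):
    """Toy recurrence: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h, states = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

def bilstm_like(seq):
    """Pair forward and backward hidden states at each time step."""
    fwd = recurrent_pass(seq)
    bwd = recurrent_pass(seq[::-1])[::-1]  # backward pass, re-aligned to input order
    return list(zip(fwd, bwd))

states = bilstm_like([1.0, 2.0, 3.0])
```

Note how the backward component of the first time step already depends on later inputs, which is exactly the "contextual information" the citation refers to: each position sees both its past and its future.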