2018
DOI: 10.1007/978-981-13-2206-8_20
Hierarchical RNN for Few-Shot Information Extraction Learning

Abstract: Few-Shot Relation Extraction (FSRE), a subtask of Relation Extraction (RE) that utilizes limited training instances, appeals to more researchers in Natural Language Processing (NLP) due to its capability to extract textual information in extremely low-resource scenarios. The primary methodologies employed for FSRE have been fine-tuning or prompt tuning techniques based on Pre-trained Language Models (PLMs). Recently, the emergence of Large Language Models (LLMs) has prompted numerous researchers to explore FSR…

Cited by 5 publications (3 citation statements). References: 21 publications.
“…Nowadays, using DL, accident documents are processed to provide useful information for safety management under two main tasks: information extraction and text classification. Information extraction is the task of finding structured information from unstructured or semistructured text [164], which is essential for handling continuously growing data published on the online, especially in the Big Data era [165]. For example, Feng and Chen (2021) [38] adopted the BiLSTM-CRF model to automatically extract information from accident reports, so this model could help to raise workers' security awareness and prevent hazards and accidents.…”
Section: Accident Investigation and Analysis
confidence: 99%
“…Multi-modal extraction: The incorporation of visual information into IE was proposed by Aumann et al (2006), who attempted to learn a fitness function to calculate the visual similarity of a document to one in its training set to extract elements like headlines and authors. Other recent approaches that attempt to address the layout structure of documents are CharGrid (Katti et al, 2018), which represents a document as a two-dimensional grid of characters, RiSER, an extraction technique targeted at templated emails (Kocayusufoglu et al, 2019), and that by Liu et al (2018), which presents an RNN method for learning DOM-tree rules. However, none of these address the OpenIE setting, which requires understanding the relationship between different text fields on the page.…”
Section: Related Work
confidence: 99%