2022
DOI: 10.1016/j.compeleceng.2022.108445
SwitchNet: A modular neural network for adaptive relation extraction

Cited by 5 publications (4 citation statements). References 6 publications.
“…The model has the potential to be extended to other cascaded tasks, such as information extraction and downstream applications, in the future [38].…”
Section: Discussion (mentioning)
confidence: 99%
“…In this subsection, we introduce the PM modules in the framework, each named after a Beijing location where I have lived. To realize the controllability of human creativity, we design different PM information flows (Zhu et al 2022). As shown in Figure 1, the modules can be combined in a custom way.…”
Section: Functional Module Architecture (mentioning)
confidence: 99%
“…Human controllability can be divided into two dimensions: the controllability of human creativity and the controllability of responsibility for model outputs. Through the PM information flow (Zhu et al 2022) set by users, the controllability of human creativity can be realized, the value of human imagination can be maximized, and cultural innovation can be carried out. By generating URI-extensions for output resources, responsibility for output data can be controlled, enabling "data ownership" along the way.…”
Section: Introduction (mentioning)
confidence: 99%
“…This algorithm enables the model to dynamically adapt to new patterns in the data. The downside is that there is a degree of information loss [Zhu et al (2022)] in the discrete inference process.…”
Section: Training Algorithm (mentioning)
confidence: 99%