1992
DOI: 10.1117/12.140065
Hierarchical neural networks for the classification of undersea events

Cited by 13 publications (36 citation statements) | References 0 publications
“…By specifically modeling the foreground and background of cross-modal images, the fusion network is motivated to learn foreground and background features more directly. A Mixture-of-Experts (MoE) [27,29] can dynamically adjust its own structure/parameters while processing different samples, selecting the expert most suitable for the current sample. Inspired by this, we introduce MoE into the modeling of cross-modal foreground and background features, expecting a better fusion effect through specialized learning of cross-modal foreground and background information.…”
Section: Infrared Image
confidence: 99%
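The excerpt above relies on the defining MoE behavior: a gating network scores each sample and routes it to the expert it judges most suitable, so different samples exercise different parameters. The following is a minimal, hypothetical sketch of such top-1 routing, assuming PyTorch; the class name, layer sizes, and expert architecture are illustrative and are not taken from the cited papers.

```python
# Minimal top-1 Mixture-of-Experts sketch (illustrative, not the cited papers'
# implementation): a router scores the experts for each sample and only the
# highest-scoring expert processes that sample.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopOneMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor):
        # x: (batch, dim). Score every expert, keep only the best one per sample.
        gate_probs = F.softmax(self.router(x), dim=-1)   # (batch, num_experts)
        top_prob, top_idx = gate_probs.max(dim=-1)       # chosen expert per sample
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():                               # run expert e only on its samples
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(x[mask])
        return out, gate_probs, top_idx
```

Because only the selected expert runs for each sample, the layer's effective parameters adapt to the input, which is the property the quoted passage exploits for specialized foreground/background fusion.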
“…where the pixel loss L_pixel constrains the fused image to preserve the more significant pixel intensities originating from the target images, while the gradient loss L_grad forces the fused image to contain more texture details from the different modalities. L_load denotes the load loss, which encourages the experts to receive roughly equal numbers of training examples [29]. More details about the pixel, gradient, and load losses are provided in the appendix.…”
Section: Loss Function
confidence: 99%
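The load loss referred to above is typically an auxiliary balancing term added to the task losses. As a hedged illustration (the exact formulation and weighting used in the cited work are not reproduced here), the sketch below follows a common pattern from the MoE literature: penalize the product of each expert's dispatched fraction and its mean gate probability, which is minimized when both are uniform.

```python
# Hypothetical load-balancing ("load") loss in the spirit of the quoted excerpt:
# it pushes the router toward giving each expert roughly equal numbers of
# training examples. The weight `lambda_load` is an assumed hyperparameter.
import torch

def load_loss(gate_probs: torch.Tensor, top_idx: torch.Tensor, num_experts: int):
    # fraction of samples actually dispatched to each expert (hard counts)
    dispatch = torch.bincount(top_idx, minlength=num_experts).float() / top_idx.numel()
    # mean routing probability assigned to each expert (soft, differentiable)
    importance = gate_probs.mean(dim=0)
    # scaled dot product; equals 1 when both distributions are uniform
    return num_experts * torch.sum(dispatch * importance)

# Total objective as described in the excerpt (weights are illustrative):
# loss = loss_pixel + loss_grad + lambda_load * load_loss(gate_probs, top_idx, E)
```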
“…On the one hand, the dynamic pattern of MoE results in severe computation load-imbalance problems, in which a small number of experts may receive, process, and send the majority of the data. Several approaches have been proposed to make full use of the available experts, such as adding an auxiliary loss [26], controlling expert capacity [11; 6], and optimizing the assignment scheme for a balanced load [12; 28; 22]. On the other hand, global communication is another main obstacle to efficient MoE training.…”
Section: Introduction
confidence: 99%
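Of the remedies listed in the last excerpt, capacity control is the easiest to show compactly: each expert accepts at most a fixed number of tokens per batch, and the overflow is dropped (or passed through unchanged), which bounds the per-expert load. The helper below is a simplified, hypothetical sketch of that idea, not the implementation of any of the cited systems.

```python
# Capacity-based routing sketch: keep at most `capacity` tokens per expert so
# that no single expert is flooded with the majority of the work.
import torch

def capacity_mask(top_idx: torch.Tensor, num_experts: int, capacity: int) -> torch.Tensor:
    """Boolean mask of tokens that fit within their chosen expert's capacity."""
    keep = torch.zeros_like(top_idx, dtype=torch.bool)
    for e in range(num_experts):
        positions = (top_idx == e).nonzero(as_tuple=True)[0]
        keep[positions[:capacity]] = True   # first `capacity` arrivals are kept
    return keep

# Example: 8 tokens routed to 2 experts with capacity 3. Expert 0 is chosen by
# five tokens, so the 4th and 5th of them (indices 3 and 6) overflow and are dropped.
top_idx = torch.tensor([0, 0, 0, 0, 1, 1, 0, 1])
print(capacity_mask(top_idx, num_experts=2, capacity=3))
# tensor([ True,  True,  True, False,  True,  True, False,  True])
```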