Multimodal medical image segmentation has long been a critical problem in medical image analysis. Traditional deep learning methods rely on fully convolutional networks (CNNs) to encode the given images, which leads to a lack of long-range dependencies and poor generalization performance. Recently, a series of Transformer-based methods has emerged in image processing, bringing strong generalization and performance across various tasks. On the other hand, traditional CNNs have their own advantages, such as rapid convergence and strong local representations. We therefore analyze a hybrid multimodal segmentation approach that combines Transformers and CNNs and propose a novel architecture, the HybridCTrm network. We evaluate HybridCTrm on two benchmark datasets and compare it with HyperDenseNet, a fully CNN-based network. The results show that HybridCTrm outperforms HyperDenseNet on most evaluation metrics. Furthermore, we analyze how the depth of the Transformer affects performance. Finally, we visualize the results and examine how our hybrid method improves the segmentations.
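To make the hybrid idea concrete, here is a minimal, hypothetical sketch (not the authors' HybridCTrm implementation) of the two complementary operations such an encoder combines: a local convolutional branch and a global self-attention branch, written in plain NumPy for illustration.

```python
import numpy as np

def local_conv(x, kernel):
    """CNN-style branch: 1-D convolution with edge padding (local receptive field)."""
    k = len(kernel)
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(len(x))])

def global_attention(X):
    """Transformer-style branch: scaled dot-product self-attention over all positions."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ X

def hybrid_features(X, kernel):
    """Concatenate local (per-channel conv) and global (attention) features."""
    local = np.stack([local_conv(X[:, c], kernel) for c in range(X.shape[1])], axis=1)
    return np.concatenate([local, global_attention(X)], axis=1)
```

The convolution captures local texture with a fixed receptive field, while the attention branch lets every position aggregate information from the whole input; concatenating the two is one simple way a hybrid encoder can expose both kinds of features to the decoder.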
Emotion-cause pair extraction (ECPE), an emerging task in sentiment analysis, aims to extract pairs of emotions and their corresponding causes from documents. It is more challenging than emotion cause extraction (ECE) because it provides no emotion signals, which have been shown to play an important role in the ECE task. Existing work follows a two-stage pipeline that identifies emotions and causes in the first step and pairs them in the second. However, error propagation across the steps and pair combination without contextual information limit its effectiveness. We therefore propose a Dual-Questioning Attention Network to alleviate these limitations. Specifically, we question candidate emotions and candidate causes against the context independently through attention networks to obtain contextually and semantically grounded answers. We also explore how weighted loss functions help control error propagation between the steps. Empirical results show that our method outperforms the baselines on multiple evaluation metrics. The source code is available at https://github.com/QixuanSun/DQAN.
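As an illustration of the dual-questioning idea (a hypothetical sketch, not the released DQAN code), each candidate emotion and candidate cause can be posed as an attention query against the context, and the two resulting "answers" compared to score the pair:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def question_context(query, context):
    """Attend over context token vectors using a candidate clause vector as the query."""
    d = query.shape[-1]
    weights = softmax(context @ query / np.sqrt(d))
    return weights @ context  # context-aware "answer" vector

def pair_score(emotion, cause, context):
    """Question the context from both sides and score the (emotion, cause) pair."""
    answer_e = question_context(emotion, context)
    answer_c = question_context(cause, context)
    return float(answer_e @ answer_c)
```

Because each candidate is questioned against the full context independently, the pair score reflects contextual evidence rather than a context-free combination of the two stages' outputs.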