In natural language processing, it is common for entities to be nested inside other entities. Most existing work on named entity recognition (NER) deals only with flat entities and ignores nested ones. We propose a boundary-aware neural model for nested NER that leverages entity boundaries to predict entity categorical labels. Our model locates entities precisely by detecting their boundaries with sequence labeling models. Based on the detected boundaries, it uses the boundary-relevant regions to predict entity categorical labels, which reduces computation cost and alleviates the error propagation problem of layered sequence labeling models. We introduce multitask learning to capture the dependencies between entity boundaries and their categorical labels, which helps improve entity identification. Experiments on nested NER datasets demonstrate that our model outperforms other state-of-the-art methods.
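As a hedged illustration of the boundary-aware idea, the minimal PyTorch sketch below pairs a shared BiLSTM encoder with a per-token boundary tagger and a classifier over boundary-delimited spans. All class names, dimensions, and the toy data are our own assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a shared encoder feeds (1) a boundary tagger and
# (2) a classifier over candidate entity spans (boundary-relevant regions).
import torch
import torch.nn as nn

class BoundaryAwareNER(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True,
                               batch_first=True)
        # Boundary tagger: per-token B (begin), E (end), O tags.
        self.boundary_head = nn.Linear(2 * hidden, 3)
        # Region classifier: entity category from span start/end features.
        self.label_head = nn.Linear(4 * hidden, n_labels)

    def forward(self, tokens, spans):
        h, _ = self.encoder(self.embed(tokens))   # (B, T, 2H)
        boundary_logits = self.boundary_head(h)   # per-token boundary tags
        # Represent each candidate span by its start and end hidden states.
        batch_idx = torch.arange(h.size(0)).unsqueeze(1)
        starts = h[batch_idx, spans[..., 0]]
        ends = h[batch_idx, spans[..., 1]]
        label_logits = self.label_head(torch.cat([starts, ends], dim=-1))
        return boundary_logits, label_logits

model = BoundaryAwareNER(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 12))                 # toy batch
spans = torch.tensor([[[1, 3], [5, 8]], [[0, 2], [4, 6]]])
b_logits, l_logits = model(tokens, spans)
# A multitask objective would sum a boundary tagging loss and a span
# label loss, coupling the two heads as described in the abstract.
```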
Visual question answering aims to answer natural language questions about a given image. Existing graph-based methods focus only on the relations between objects in an image and neglect the syntactic dependency relations between words in a question. To capture both kinds of relations simultaneously, we propose a novel dual channel graph convolutional network (DC-GCN) that better combines visual and textual strengths. The DC-GCN model consists of three parts: an I-GCN module that captures the relations between objects in an image, a Q-GCN module that captures the syntactic dependency relations between words in a question, and an attention alignment module that aligns image representations and question representations. Experimental results show that our model achieves performance comparable with state-of-the-art approaches.
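To make the dual-channel structure concrete, here is a minimal sketch of one GCN layer applied to two hypothetical graphs (an object graph for the I-GCN channel and a dependency graph for the Q-GCN channel), followed by a simple attention alignment. Feature sizes, the identity adjacencies, and the scaled dot-product alignment are illustrative assumptions, not the paper's exact design.

```python
# Minimal single GCN layer, sketching the shared operation behind the
# two channels; adjacency matrices here are identity placeholders.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: row-normalized adjacency (object-object relations or
        # word-word dependency edges); x: node features.
        return torch.relu(self.linear(adj @ x))

i_gcn = GCNLayer(2048, 512)   # e.g., region features for image objects
q_gcn = GCNLayer(300, 512)    # e.g., word embeddings for question tokens
obj = i_gcn(torch.randn(36, 2048), torch.eye(36))
wrd = q_gcn(torch.randn(14, 300), torch.eye(14))
# Attention alignment: each object attends over question words.
attn = torch.softmax(obj @ wrd.T / 512 ** 0.5, dim=-1)
aligned = attn @ wrd          # question-aligned object representations
```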
Purpose
To evaluate the efficacy of a deep-learning model for segmenting the lung and thorax regions in pediatric chest X-rays (CXRs), and to validate whether the diagnosis of bacterial or viral pneumonia can be improved after lung segmentation.
Materials and methods
A clinical pediatric CXR set of 1351 patients was collected to develop a deep-learning model for pulmonary-thoracic segmentation. Model performance was evaluated with the Jaccard similarity coefficient (JSC) and the Dice coefficient (DC). Two adult CXR sets were used to assess the model's generalizability. Based on the pulmonary-thoracic ratio, Pearson's correlation coefficient was computed and a Bland-Altman plot was generated to assess the correlation and agreement between manual and automatic segmentations. Receiver operating characteristic curves and areas under the curve (AUCs) were used to compare the pneumonia classification performance based on lung-extracted images with that based on the original images.
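For reference, the two overlap metrics reduce to simple set operations on binary masks. The sketch below shows one plausible NumPy formulation; `pred` and `truth` are hypothetical boolean segmentation masks, not data from the study.

```python
# Overlap metrics on binary masks: JSC = |A∩B| / |A∪B|,
# DC = 2|A∩B| / (|A| + |B|).
import numpy as np

def jaccard(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

# Toy masks, purely illustrative.
pred = np.zeros((256, 256), dtype=bool); pred[50:200, 60:210] = True
truth = np.zeros((256, 256), dtype=bool); truth[55:205, 60:200] = True
print(jaccard(pred, truth), dice(pred, truth))
```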
Results
The model achieved JSCs of 0.910 and 0.950 and DCs of 0.948 and 0.974 for lung and thorax segmentation, respectively. For the pulmonary-thoracic ratio, Pearson's r was 0.96 (P < .0001). In the Bland-Altman plot, the mean difference between manual and automatic measurements was 0.0025, with 95% limits of agreement of (−0.0451, 0.0501). On the two adult CXR test sets, the JSCs were 0.903 and 0.888 and the DCs were 0.948 and 0.937, respectively. After lung segmentation, the AUC of a classifier identifying bacterial or viral pneumonia increased from 0.815 to 0.879.
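As a sketch of how such agreement statistics are conventionally derived from paired pulmonary-thoracic ratios, the snippet below computes Pearson's r and the Bland-Altman limits (mean difference ± 1.96 × SD) on synthetic data; the values are illustrative and do not reproduce the study's results.

```python
# Agreement statistics for paired manual vs. automatic measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual = rng.uniform(0.3, 0.5, size=100)       # hypothetical ratios
auto = manual + rng.normal(0, 0.02, size=100)  # simulated model output

r, p = stats.pearsonr(manual, auto)            # correlation
diff = manual - auto
mean_diff = diff.mean()
sd = diff.std(ddof=1)                          # sample standard deviation
limits = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
print(r, p, mean_diff, limits)
```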
Conclusion
We built a pediatric CXR dataset and developed a deep-learning model for accurate pulmonary-thoracic segmentation. Lung segmentation can notably improve the diagnosis of bacterial or viral pneumonia.