In recent years, the robotics community has extensively examined methods for the place recognition task within the scope of simultaneous localization and mapping applications. This article proposes an appearance-based loop closure detection pipeline named "Fast and Incremental Loop Closure Detection" (FILD++). First, the system is fed with consecutive images and, by passing them twice through a single convolutional neural network, extracts global and local deep features. Subsequently, a hierarchical navigable small-world (HNSW) graph incrementally builds a visual database representing the robot's traversed path, based on the computed global features. Finally, a query image, captured at each time step, is used to retrieve similar locations along the traversed route. An image-to-image pairing follows, which exploits local features to evaluate the spatial information. Thus, in contrast to our previous work (FILD), the proposed method uses a single network for both global and local feature extraction, while an exhaustive search over the generated deep local features is adopted for the verification process, avoiding the use of hash codes. Exhaustive experiments on eleven publicly available datasets exhibit the system's high performance (achieving the highest recall score on eight of them) and low execution times (22.05 ms on average on New College, the largest one with 52,480 images) compared with other state-of-the-art approaches.
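A minimal sketch of the incremental retrieval step follows, assuming the hnswlib library for the HNSW graph and a pre-computed global descriptor per frame; the descriptor dimension, index parameters, and candidate count are illustrative assumptions, not the authors' exact configuration, and the subsequent local-feature spatial verification is only indicated by a comment.

```python
import numpy as np
import hnswlib

DIM = 2048  # assumed global-descriptor size (depends on the chosen CNN)

# Incrementally built visual database over the traversed path.
index = hnswlib.Index(space='l2', dim=DIM)
index.init_index(max_elements=60000, ef_construction=200, M=16)
index.set_ef(64)  # query-time accuracy/speed trade-off

def process_frame(frame_id: int, global_desc: np.ndarray, k: int = 5):
    """Query the database built so far for loop-closure candidates, then insert the new frame."""
    candidates = []
    if index.get_current_count() > 0:
        labels, dists = index.knn_query(global_desc[np.newaxis, :],
                                        k=min(k, index.get_current_count()))
        # Retrieved candidates; spatial verification via exhaustive matching of
        # deep local features would follow here before accepting a loop closure.
        candidates = list(zip(labels[0].tolist(), dists[0].tolist()))
    index.add_items(global_desc[np.newaxis, :], np.array([frame_id]))
    return candidates
```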
Background. Lymph node metastasis (LNM) is the most common and important route of metastasis in non-small-cell lung cancer (NSCLC) and is also the most important factor affecting lung cancer stage and prognosis. It is therefore important to analyze the relationship between the expression of vascular endothelial growth factor (VEGF) and Ki67 and LNM in NSCLC. Methods. We searched PubMed, EMBASE, and the Cochrane Library and conducted meta-analyses using the R "meta" package. Relative risk (RR) with a 95% confidence interval (95% CI) was the main indicator. Results. In total, 18 studies were considered eligible, with 4521 patients, including 1518 LNM-positive patients and 3033 LNM-negative patients. The incidence of LNM in Ki67-negative patients was lower than that in Ki67-positive patients (RR = 0.66, 95% CI: 0.44, 0.98). The incidence of LNM in VEGF-A-negative patients was lower than that in VEGF-A-positive patients (RR = 0.64, 95% CI: 0.49, 0.83). The incidence of LNM in VEGF-C-negative patients was lower than that in VEGF-C-positive patients (RR = 0.68, 95% CI: 0.53, 0.88). The incidence of LNM did not differ significantly between VEGF-D-negative and VEGF-D-positive patients (RR = 0.84, 95% CI: 0.61, 1.14). Conclusion. High expression of Ki67, VEGF-A, and VEGF-C significantly increases the risk of lymph node metastasis in NSCLC, whereas VEGF-D expression shows no correlation with lymph node metastasis. The expression levels of Ki67, VEGF-A, and VEGF-C show good potential for predicting lymph node metastasis.
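As a minimal sketch of the effect measure reported above, the snippet below computes a single study's relative risk of LNM for marker-negative versus marker-positive patients from 2x2 counts, with a 95% CI on the log scale; the counts are made up for illustration, and the pooling across studies (as performed with the R "meta" package) is not shown.

```python
import math

def relative_risk(events_neg: int, n_neg: int, events_pos: int, n_pos: int, z: float = 1.96):
    """RR of the event in the marker-negative group relative to the marker-positive group."""
    rr = (events_neg / n_neg) / (events_pos / n_pos)
    # Standard error of log(RR) for a 2x2 table.
    se_log_rr = math.sqrt(1/events_neg - 1/n_neg + 1/events_pos - 1/n_pos)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lo, hi)

# Hypothetical counts: 30/120 LNM in marker-negative vs. 60/150 in marker-positive patients.
rr, ci = relative_risk(30, 120, 60, 150)
print(f"RR = {rr:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```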
Medical images are widely used in clinical practice for diagnosis. Automatically generating interpretable medical reports can reduce radiologists' burden and facilitate timely care. However, most existing approaches to automatic report generation require sufficient labeled data for training. In addition, the learned models can only generate reports for the training classes and lack the ability to adapt to previously unseen novel diseases. To this end, we propose a lesion-guided explainable few weak-shot medical report generation framework that learns correlations between seen and novel classes through visual and semantic feature alignment, aiming to generate medical reports for diseases not observed in training. It integrates a lesion-centric feature extractor and a Transformer-based report generation module. Concretely, the lesion-centric feature extractor detects the abnormal regions and learns correlations between seen and novel classes with multi-view (visual and lexical) embeddings. Then, the features of the detected regions and the corresponding embeddings are concatenated as multi-view input to the report generation module for explainable report generation, including text descriptions and the corresponding abnormal regions detected in the images. We conduct experiments on FFA-IR, a dataset that provides explainable annotations, showing that our framework outperforms existing methods on report generation for novel diseases.
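The sketch below illustrates the multi-view fusion idea in PyTorch: visual features of detected abnormal regions are concatenated with lexical class embeddings and fed to a Transformer decoder that generates the report text. The dimensions, module sizes, and class names are illustrative assumptions, not the authors' exact architecture, and the lesion detector itself is omitted.

```python
import torch
import torch.nn as nn

class MultiViewReportGenerator(nn.Module):
    # Assumed dimensions: 1024-D region features, 300-D lexical embeddings.
    def __init__(self, vis_dim=1024, sem_dim=300, d_model=512, vocab_size=5000):
        super().__init__()
        self.fuse = nn.Linear(vis_dim + sem_dim, d_model)      # concat -> shared space
        self.token_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, region_feats, class_embeds, report_tokens):
        # region_feats:  (B, R, vis_dim) features of detected abnormal regions
        # class_embeds:  (B, R, sem_dim) lexical embeddings of seen/novel classes
        # report_tokens: (B, T) token ids of the report generated so far
        memory = self.fuse(torch.cat([region_feats, class_embeds], dim=-1))  # (B, R, d_model)
        tgt = self.token_embed(report_tokens)                                # (B, T, d_model)
        hidden = self.decoder(tgt=tgt, memory=memory)
        return self.out(hidden)                                              # (B, T, vocab_size)

# Usage with random tensors: batch of 2, 4 detected regions, 16 report tokens.
model = MultiViewReportGenerator()
logits = model(torch.randn(2, 4, 1024), torch.randn(2, 4, 300),
               torch.randint(0, 5000, (2, 16)))
```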