BACKGROUND: In 1999, a World Health Organization (WHO) committee published histologic criteria for distinct thymoma entities (labeled Type A, AB, B1, B2, and B3 thymomas) and for the heterogeneous group of thymic carcinomas, collectively called Type C thymomas. Whether WHO-defined histologic thymoma subtypes are of independent prognostic relevance has yet to be proven.
METHODS: Two hundred thymomas from the Shanghai Chest Hospital with a mean follow-up time of 15 years (range, 1–246 months) were studied for the relevance of WHO histologic subtype and other factors (stage, therapy, and myasthenia gravis [MG]) to survival.
RESULTS: In order of frequency, 68 patients (34.0%) had Type AB, 39 (19.5%) had Type B2, 36 (18.0%) had Type C, 27 (13.5%) had Type B3, 17 (8.5%) had Type B1, and 8 (4.0%) had Type A thymoma. Five cases (2.5%) were rare thymomas not mentioned in the WHO classification. Survival data showed significant differences among the histologic subtypes (log rank test: P < 0.001). Among patients with Type A and AB thymomas, none died of tumor; of the Type B1 thymoma patients, only one (5.9%) died, at 22 months. Type B2, B3, and C thymomas had a significantly worse prognosis, with 5-year survival rates of 75.0%, 70.0%, and 48.0%, respectively. Ninety-six patients (48.0%) were in Masaoka Stage I, 26 (13.0%) were in Stage II, 65 (32.5%) were in Stage III, and 13 (6.5%) were in Stage IV. Stage was highly significant in predicting survival (log rank test: P < 0.001). The association between histologic subtype and invasive behavior (stage) was statistically significant (P < 0.001). However, histology was an independent predictive factor of survival in Stage I and II thymomas: Type B2, B3, and C thymomas had a worse prognosis than Type A, AB, and B1 thymomas (log rank test: P < 0.003). Thirty patients (15.0%) presented with MG. MG was significantly more frequent in Type B2 and B3 than in Type A, AB, and B1 thymomas (P < 0.01).
On multivariate analysis, MG had no adverse effect on survival (P = 0.17). Radiation or chemotherapy improved patients' survival at 5 and 10 years in Type B2, B3, and C thymomas (log rank test: P < 0.003).
CONCLUSIONS: Tumor stage is the most important determinant of survival in thymoma patients, but the WHO histologic subtype is an independent prognostic factor in Stage I and II thymomas, among which WHO Type A, AB, and B1 thymomas form a low-risk group. Patients with high-risk thymomas might profit from novel adjuvant radiochemotherapy regimens. Cancer 2002;95:420–9. © 2002 American Cancer Society. DOI 10.1002/cncr.10665
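The 5- and 10-year survival rates reported above come from time-to-event analysis of censored follow-up data. A minimal Kaplan-Meier estimator, sketched below with hypothetical follow-up times (not the study's data), shows how such survival probabilities are computed:

```python
# Minimal Kaplan-Meier survival estimator (illustrative sketch only;
# the example cohort is hypothetical, not taken from the study).
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death, 0 = censored.
    Returns a list of (time, survival_probability) step points."""
    # At tied times, process deaths before censored observations,
    # matching the standard Kaplan-Meier convention.
    observations = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    n_at_risk = len(observations)
    survival = 1.0
    curve = []
    for t, event in observations:
        if event:  # each observed death shrinks the survival estimate
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1  # deaths and censored cases both leave the risk set
    return curve

# Hypothetical cohort: deaths at 6, 12, and 24 months; censoring at 12 and 60.
curve = kaplan_meier([6, 12, 12, 24, 60], [1, 1, 0, 1, 0])
```

Comparing such curves between histologic subtypes (as the log rank tests above do) asks whether the step-down patterns differ more than chance would allow.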
Video captioning is the task of automatically generating a textual description of the actions in a video. Although previous work (e.g., sequence-to-sequence models) has shown promising results in abstracting a coarse description of a short video, it is still very challenging to caption a video containing multiple fine-grained actions with a detailed description. This paper aims to address the challenge by proposing a novel hierarchical reinforcement learning framework for video captioning, where a high-level Manager module learns to design sub-goals and a low-level Worker module recognizes the primitive actions to fulfill the sub-goal. With this compositional framework to reinforce video captioning at different levels, our approach significantly outperforms all the baseline methods on a newly introduced large-scale dataset for fine-grained video captioning. Furthermore, our non-ensemble model has already achieved state-of-the-art results on the widely-used MSR-VTT dataset.
[Figure captions: "A person sits on a bed and puts a laptop into a bag. The person stands up, puts the bag on one shoulder, and walks out of the room." / #1: "A woman offers her dog some food." / #2: "A woman is eating and sharing food with her dog." / #3: "A woman is sharing a snack with a dog."]
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates a slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems.
Existing question answering datasets focus on homogeneous information, based either on text alone or on KB/Table information alone. However, as human knowledge is distributed over heterogeneous forms, using homogeneous information alone might lead to severe coverage problems. To fill this gap, we present HybridQA, a new large-scale question-answering dataset that requires reasoning over heterogeneous information. Each question is aligned with a Wikipedia table and multiple free-form corpora linked to the entities in the table. The questions are designed to aggregate both tabular and textual information, i.e., the lack of either form renders the question unanswerable. We test three different models: 1) a table-only model, 2) a text-only model, and 3) a hybrid model that combines heterogeneous information to find the answer. The experimental results show that the EM scores obtained by the two baselines are below 20%, while the hybrid model can achieve an EM over 40%. This gap suggests the necessity of aggregating heterogeneous information in HybridQA. However, the hybrid model's score is still far behind human performance. Hence, HybridQA can serve as a challenging benchmark for studying question answering with heterogeneous information.
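The exact-match (EM) figures quoted above are typically computed by normalizing answer strings before comparison. A minimal EM scorer along those lines might look like the following (the normalization details here are an assumption for illustration, not HybridQA's official evaluation script):

```python
import string

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(predictions, references):
    """Percentage of predictions that exactly match their reference
    answer after normalization."""
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

# One of two predictions matches its reference -> EM of 50.0
score = exact_match(["The Eiffel Tower!", "1889"],
                    ["the eiffel tower", "1888"])  # -> 50.0
```

A baseline scoring below 20% EM under such a metric gets fewer than one in five questions exactly right, which is why the ~40% hybrid result still leaves a large gap to human performance.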
Neural end-to-end approaches to natural language generation (NLG) from structured data or knowledge are data-hungry, making their adoption in real-world applications difficult when data are limited. In this work, we propose the new task of few-shot natural language generation. Motivated by how humans summarize tabular data, we propose a simple yet effective approach and show that it not only demonstrates strong performance but also generalizes well across domains. The model architecture is designed around two aspects: content selection from the input data and language modeling to compose coherent sentences, the latter of which can be acquired from prior knowledge. With just 200 training examples, across multiple domains, we show that our approach achieves very reasonable performance and outperforms the strongest baseline by an average of over 8.0 BLEU points. Our code and data can be found at https:
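The 8.0-point gap above is measured in BLEU, which at its core averages clipped (modified) n-gram precisions with a brevity penalty. A toy version of the modified n-gram precision component, the heart of the metric, is sketched below (an illustration, not a full BLEU implementation):

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n):
    """Fraction of candidate n-grams found in the reference, where each
    reference n-gram may be credited at most as often as it occurs."""
    cand = candidate.split()
    ref = reference.split()
    cand_counts = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_counts = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clip each candidate n-gram's count by its count in the reference.
    overlap = sum(min(count, ref_counts[gram]) for gram, count in cand_counts.items())
    total = sum(cand_counts.values())
    return overlap / total if total else 0.0
```

The count clipping (the `min` above) is what stops a degenerate candidate such as "the the the" from earning full unigram credit against any reference containing "the".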
Neural natural language generation (NLG) models have recently shown remarkable progress in fluency and coherence. However, existing studies on neural NLG are primarily focused on surface-level realizations, with limited emphasis on logical inference, an important aspect of human thinking and language. In this paper, we suggest a new NLG task where a model is tasked with generating natural language statements that can be logically entailed by the facts in an open-domain semi-structured table. To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), featuring a wide range of logical/symbolic inferences, as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t. logical inference. The new task poses challenges to existing monotonic generation frameworks due to the mismatch between sequence order and logical order. In our experiments, we comprehensively survey different generation architectures (LSTM, Transformer, pre-trained LM) trained with different algorithms (RL, adversarial training, coarse-to-fine) on the dataset and make the following observations: 1) pre-trained LMs can significantly boost both the fluency and logical fidelity metrics; 2) RL and adversarial training trade fluency for fidelity; 3) coarse-to-fine generation can partially alleviate the fidelity issue while maintaining high language fluency.
Background: Emerging evidence indicates that dysregulated long non-coding RNAs (lncRNAs) are implicated in cancer tumorigenesis and progression. The lncRNA ANRIL has been shown to promote the progression of gastric cancer. However, the role of lncRNA ANRIL in human non-small cell lung cancer (NSCLC) remains unclear.
Methods: Expression of lncRNA ANRIL was analyzed in 87 NSCLC tissues and three lung cancer cell lines by quantitative real-time PCR (qRT-PCR). The correlation of lncRNA ANRIL with clinicopathological features and prognosis was analyzed. lncRNA ANRIL was suppressed by siRNA treatment in order to explore its role in tumor progression.
Results: The expression level of lncRNA ANRIL was higher in NSCLC tissues and lung cancer cells than in adjacent non-tumor tissues and normal human bronchial epithelial cells. Higher expression of lncRNA ANRIL in NSCLC tissues was associated with higher TNM stage and advanced lymph node metastasis. Patients with high lncRNA ANRIL expression had poorer overall survival than the low-expression group. Univariate and multivariate analyses suggested that high expression of lncRNA ANRIL was an independent poor prognostic indicator for NSCLC patients. Moreover, knockdown of lncRNA ANRIL inhibited lung cancer cell proliferation, migration, and invasion in vitro.
Conclusions: Our results suggest that lncRNA ANRIL is a potential biomarker for NSCLC prognosis, and that the dysregulation of lncRNA ANRIL may play an important role in NSCLC progression.
Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1707061287149690.
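Relative expression from qRT-PCR, as used above to compare tumor and non-tumor tissue, is commonly quantified with the 2^(−ΔΔCt) method. The sketch below uses hypothetical Ct values (the abstract reports no raw Ct data) and assumes a single reference gene:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target transcript in a sample relative to a
    control, using the 2^(-delta-delta-Ct) method."""
    delta_sample = ct_target_sample - ct_ref_sample      # normalize to reference gene
    delta_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_sample - delta_control
    return 2 ** (-delta_delta_ct)

# Hypothetical values: the target amplifies 3 cycles earlier (relative to
# the reference gene) in tumor than in normal tissue -> 8-fold up-regulation.
fold_change = relative_expression(22.0, 18.0, 25.0, 18.0)  # -> 8.0
```

Lower Ct means earlier amplification and thus more starting transcript, which is why a negative ΔΔCt corresponds to up-regulation in the sample.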