Purpose
In the absence of virus nucleic acid real-time reverse transcriptase-polymerase chain reaction (RT-PCR) testing and experienced radiologists, clinical diagnosis is challenging for viral pneumonias whose clinical symptoms and CT signs are similar to those of coronavirus disease 2019 (COVID-19). We developed an end-to-end automatic differentiation method based on CT images to identify COVID-19 pneumonia patients in real time.
Methods
From January 18 to February 23, 2020, we conducted a retrospective study and enrolled 201 patients (118 males and 83 females; average age, 42 years) from two hospitals in China who underwent chest CT and RT-PCR tests; 98 of these patients tested positive for COVID-19. CT images of patients from one hospital were divided into training, validation and test datasets at an 80%:10%:10% ratio. An end-to-end representation learning method based on the large-scale bidirectional generative adversarial network (BigBiGAN) architecture was designed to extract semantic features from the CT images, and the resulting semantic feature matrix was used to construct a linear classifier. Patients from the other hospital served as an external validation set. Differentiation performance was evaluated using receiver operating characteristic (ROC) curves.
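The 80%:10%:10% division described above should be performed at the patient level so that images from one patient never appear in more than one dataset. A minimal sketch of such a split (the function name, seed, and ratios are illustrative assumptions, not taken from the study):

```python
import random

def split_patients(patient_ids, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle patient IDs and split them into train/validation/test sets.

    Splitting by patient ID (rather than by individual CT image) prevents
    images of the same patient from leaking across the three datasets.
    """
    ids = list(patient_ids)
    rng = random.Random(seed)   # fixed seed for a reproducible split
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# With 100 patients this yields 80 / 10 / 10 patients per dataset.
train, val, test = split_patients(range(100))
```

The per-image 120-dimensional semantic features would then be extracted by the trained BigBiGAN encoder for each dataset separately before fitting the linear classifier.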
Results
Based on the 120-dimensional semantic features extracted by BigBiGAN from each image, the linear classifier achieved areas under the curve (AUCs) of 0.979, 0.968 and 0.972 in the training, validation and test datasets, respectively, with an average sensitivity of 92% and specificity of 91%. The AUC for external validation was 0.850, with a sensitivity of 80% and specificity of 75%. Publicly available architecture and computing resources were used throughout the study to ensure reproducibility.
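The three metrics reported above can be computed directly from the classifier's outputs. A self-contained sketch (not the study's code): sensitivity and specificity follow from the confusion-matrix counts, and AUC equals the Mann-Whitney probability that a randomly chosen positive case scores higher than a randomly chosen negative case.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive outscores a random negative (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1])` returns 0.75, since three of the four positive-negative score pairs are correctly ordered.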
Conclusion
This study provides an efficient recognition method for COVID-19 pneumonia, using an end-to-end design that enables targeted and effective isolation for the containment of this communicable disease.