Frontotemporal dementia (FTD) and Alzheimer’s disease (AD) have overlapping symptoms, and accurate differential diagnosis is important for targeted intervention and treatment. Previous studies suggest that deep learning (DL) techniques have the potential to solve the differential diagnosis problem of FTD, AD and normal controls (NCs), but their performance remains unclear. In addition, existing DL-assisted diagnostic studies still rely on hypothesis-based, expert-level preprocessing, which on the one hand imposes high demands on clinicians and on the data themselves, and on the other hand prevents the classification results from being traced back to the original image data, so the results cannot be interpreted intuitively. In the current study, a large cohort of 3D T1-weighted structural magnetic resonance imaging (MRI) volumes (n = 4,099) was collected from two publicly available databases, the ADNI and the NIFD. We trained a DL-based network directly on raw T1 images to classify FTD, AD and the corresponding NCs, and evaluated its convergence speed, differential diagnosis ability, robustness and generalizability under nine scenarios. The proposed network yielded an accuracy of 91.83% on the most common T1-weighted sequence [magnetization-prepared rapid acquisition with gradient echo (MPRAGE)]. The knowledge the DL network learned through multiple classification tasks can also be reused to solve subproblems, and this knowledge generalizes beyond any single dataset. Furthermore, we applied a gradient visualization algorithm based on guided backpropagation to compute a contribution map, which shows intuitively why the DL-based network makes each decision. The regions contributing most to FTD classification were widespread in the right frontal white matter, while the left temporal, bilateral inferior frontal and parahippocampal regions contributed to the classification of AD.
Our results demonstrate that DL-based networks can solve the differential diagnosis of these diseases without any hypothesis-based preprocessing. Moreover, they may mine patterns that differ from those used by human clinicians, which may provide new insight into the understanding of FTD and AD.
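The guided-backpropagation rule underlying the contribution map can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it shows only the core modification guided backpropagation makes to the standard ReLU backward pass (zeroing gradients where either the forward input or the incoming gradient is negative), on a hypothetical toy input. The function names and values are invented for illustration.

```python
def relu_forward(x):
    """ReLU forward pass; returns activations and a mask of positive inputs."""
    mask = [v > 0 for v in x]
    out = [v if m else 0.0 for v, m in zip(x, mask)]
    return out, mask

def guided_relu_backward(grad_out, mask):
    """Guided backprop: a gradient passes only where BOTH the forward input
    was positive (mask) AND the incoming gradient is positive."""
    return [g if (m and g > 0) else 0.0 for g, m in zip(grad_out, mask)]

# Toy example: contributions for a five-voxel "image".
x = [0.5, -1.0, 2.0, 0.1, -0.3]
_, mask = relu_forward(x)
grad = [1.2, 0.7, -0.4, 0.9, 0.6]           # hypothetical upstream gradients
contrib = guided_relu_backward(grad, mask)  # -> [1.2, 0.0, 0.0, 0.9, 0.0]
```

Applied at every ReLU of a trained network, this rule yields an input-sized map in which only voxels that positively support the decision receive nonzero values, which is what makes the resulting contribution map readable as "why" the network chose a class.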
To explore the adoption of artificial intelligence (AI) technology in teacher teaching evaluation, a machine learning approach is proposed to construct a teaching evaluation model that suits the current educational model and can help colleges and universities address existing problems in teaching. First, the problems in the current teaching evaluation system are identified and a novel teaching evaluation model is designed. Then, the theories and techniques required to build the model are introduced. Finally, experiments are carried out to find an appropriate machine learning algorithm and to optimize the resulting weighted naive Bayes (WNB) algorithm, which is compared with the traditional naive Bayes (NB) algorithm and the back-propagation (BP) algorithm. The results reveal that, compared with the NB algorithm, WNB achieves an average classification accuracy of 0.817 versus NB’s 0.751; compared with the BP algorithm, WNB achieves 0.800 versus BP’s 0.680. The WNB algorithm therefore proves effective in the teaching evaluation model.
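The difference between plain and weighted naive Bayes can be made concrete with a short sketch. This is a generic illustration, not the paper's model: it assumes discrete attributes, Laplace smoothing, and per-attribute weights applied as exponents on the conditional likelihoods (equivalently, multipliers in log space). The attribute names and toy records are hypothetical.

```python
import math
from collections import defaultdict

def train_nb(samples, labels):
    """Count class priors and per-class attribute-value frequencies."""
    priors = defaultdict(int)
    cond = defaultdict(lambda: defaultdict(int))
    for x, y in zip(samples, labels):
        priors[y] += 1
        for i, v in enumerate(x):
            cond[y][(i, v)] += 1
    return priors, cond

def predict_wnb(x, priors, cond, weights):
    """Weighted NB decision: argmax_c log P(c) + sum_i w_i * log P(x_i | c).
    With all weights equal to 1 this reduces to plain naive Bayes."""
    n = sum(priors.values())
    best, best_score = None, float("-inf")
    for c, count in priors.items():
        score = math.log(count / n)
        for i, v in enumerate(x):
            p = (cond[c][(i, v)] + 1) / (count + 2)  # Laplace smoothing
            score += weights[i] * math.log(p)
        if score > best_score:
            best, best_score = c, score
    return best

# Hypothetical teaching-evaluation records: (attendance, clarity) -> rating.
samples = [("high", "clear"), ("high", "clear"),
           ("low", "unclear"), ("low", "clear")]
labels = ["good", "good", "poor", "poor"]
priors, cond = train_nb(samples, labels)
weights = [1.5, 0.8]  # the attribute weights are what distinguish WNB from NB
prediction = predict_wnb(("high", "clear"), priors, cond, weights)
```

The weights let more informative evaluation criteria dominate the decision, relaxing naive Bayes' assumption that all attributes matter equally.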
Legitimate teacher authority is fundamental to effective teaching, but it is often a thorny issue that teachers must grapple with in cross-cultural teaching contexts. By interviewing 18 pre-service Chinese language teachers about their understanding of legitimate teacher authority throughout a teaching practicum at international schools in Hong Kong, this study revealed that the teachers changed their perception of the essentiality and nature of the pedagogical and interpersonal components of legitimate teacher authority. They developed a more nuanced and balanced understanding of legitimate teacher authority over time. However, their ability to reach this balance was constrained by their cultural knowledge and their skills in achieving positive interpersonal dynamics when implementing student-centred pedagogies.
Background: Detecting discomfort in infants is important for their well-being and development. In this paper, we present an automatic, continuous video-based system for monitoring and detecting discomfort in infants. Methods: The proposed system employs a novel and efficient 3D convolutional neural network (CNN) that achieves an end-to-end solution without the conventional face detection and tracking steps. We thoroughly investigate the video characteristics (e.g., intensity images and motion images) and CNN architectures (e.g., 2D and 3D) for infant discomfort detection. The improvements realized by the 3D-CNN stem from capturing both the motion and the facial-expression information of the infants. Results: The performance of the system is assessed on videos recorded from 24 hospitalized infants by visualizing receiver operating characteristic (ROC) curves and measuring the area under the ROC curve (AUC). Additional performance metrics (labeling accuracy) are also calculated. Experimental results show that the proposed system achieves an AUC of 0.99, with an overall labeling accuracy of 0.98. Conclusions: These results confirm the robustness of using the 3D-CNN for infant discomfort monitoring while capturing both motion and facial expressions simultaneously.
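Why a 3D kernel captures motion as well as appearance can be seen from the convolution itself. The sketch below is a naive, pure-Python 3D "valid" convolution (technically cross-correlation, as in most CNN frameworks) over a (time, height, width) volume; the tiny two-frame "video" and the temporal-difference kernel are invented to illustrate the idea, not taken from the paper.

```python
def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution over a (time, height, width) volume.
    A single kernel spans frames, so it responds to spatiotemporal
    patterns (motion), not just spatial texture within one frame."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):
        plane = []
        for j in range(H - h + 1):
            row = []
            for k in range(W - w + 1):
                s = sum(volume[i + a][j + b][k + c] * kernel[a][b][c]
                        for a in range(t) for b in range(h) for c in range(w))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# Toy two-frame "video": a bright pixel moves one step to the right.
video = [
    [[0, 1, 0],
     [0, 0, 0]],
    [[0, 0, 1],
     [0, 0, 0]],
]
# Temporal-difference kernel: responds only where intensity changes over time.
kernel = [[[-1]], [[1]]]
response = conv3d_valid(video, kernel)  # nonzero exactly where motion occurred
```

A 2D-CNN applied frame by frame would see each frame independently and could not produce this response; stacking frames into the kernel's third dimension is what lets the 3D-CNN use motion cues alongside facial appearance.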
Video-based motion analysis gave rise to contactless respiration-rate monitoring that measures subtle respiratory movement of the chest or abdomen. In this paper, we revisit this technology via a large video benchmark that includes six categories of practical challenges. We analyze two video properties (i.e., pixel intensity variation and pixel movement) that are essential for respiratory motion analysis, and various signal extraction approaches, from conventional methods to recent Convolutional Neural Network (CNN)-based methods. We find that pixel movement quantifies respiratory motion better than pixel intensity variation under various conditions. We also conclude that a simple conventional approach (e.g., Zero-phase Component Analysis) can outperform a CNN that learns the extraction of the respiration signal from training data, which raises the more general question of whether CNNs improve video-based physiological signal measurement.
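The contrast between the two video properties can be illustrated with a synthetic example. The sketch below is a simplified illustration, not the benchmark's pipeline: "intensity variation" is reduced to the mean ROI brightness per frame, and "pixel movement" to the vertical brightness centroid, with a hypothetical bright row standing in for the moving chest edge.

```python
def intensity_signal(frames):
    """Pixel-intensity variation: mean ROI brightness per frame."""
    return [sum(sum(row) for row in f) / (len(f) * len(f[0])) for f in frames]

def movement_signal(frames):
    """Pixel movement: vertical brightness centroid per frame, which
    tracks a bright edge as it shifts up and down."""
    sig = []
    for f in frames:
        total = sum(sum(row) for row in f)
        centroid = sum(i * sum(row) for i, row in enumerate(f)) / total
        sig.append(centroid)
    return sig

# Synthetic "chest edge": one bright row whose position oscillates like breathing.
positions = [0, 1, 2, 1, 0, 1, 2, 1]
frames = []
for p in positions:
    f = [[0, 0, 0] for _ in range(3)]
    f[p] = [1, 1, 1]
    frames.append(f)

flat = intensity_signal(frames)   # constant: the motion is invisible here
breathing = movement_signal(frames)  # oscillates with the simulated breathing
```

Because the moving edge changes position without changing total brightness, the intensity-based signal is flat while the movement-based signal recovers the oscillation, mirroring the paper's finding that pixel movement is the more reliable carrier of respiratory information.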