As Information and Communication Technology (ICT) rapidly evolves, educational paradigms have been changing. The ultimate goal of ICT-aided education is to provide customized training that improves the effectiveness of learning anytime and anywhere. In online learning environments that leverage the Internet, mobile devices, peer-to-peer (P2P) networks, and cloud technology, all the information produced in learning activities is converted into digital data and stored in a Computer Supported Collaborative Learning (CSCL) system. The data in a CSCL system contains various information about learners, including their learning objectives, learning preferences, competences, and achievements. Thus, by analyzing the activity information of learners in an online CSCL system, meaningful and useful information can be extracted and provided to learners, teachers, and administrators as feedback. In this paper, we propose a learner activity model that represents the learner's activity information stored in a CSCL system. In the proposed model, we classify the learning activities in a CSCL system into three categories: vivacity, learning, and relationship; we then define quotients to represent each of them. In addition, we developed a CSCL system, termed COLLA, applied the proposed learner activity model to it, and analyzed the results.
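The three-category structure above could be represented as a simple profile object per learner. The sketch below is purely illustrative: the field names, the 0-to-1 scale, and the unweighted mean are assumptions for exposition, not the quotient formulas used by the paper or by COLLA.

```python
from dataclasses import dataclass


@dataclass
class LearnerActivityProfile:
    """Hypothetical per-learner profile mirroring the three activity
    categories (vivacity, learning, relationship). Scales and the
    summary formula are illustrative assumptions, not the paper's."""
    vivacity_quotient: float      # e.g. frequency of participation, 0-1
    learning_quotient: float      # e.g. progress toward objectives, 0-1
    relationship_quotient: float  # e.g. peer-interaction intensity, 0-1

    def overall(self) -> float:
        """Unweighted mean as one possible feedback summary score."""
        return (self.vivacity_quotient
                + self.learning_quotient
                + self.relationship_quotient) / 3.0
```

Such a profile could feed the feedback views for learners, teachers, and administrators mentioned above, with each role weighting the quotients differently.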
Video scene segmentation is an important research topic in computer vision because it enables efficient storage, indexing, and retrieval of videos. Such scene segmentation cannot be achieved by merely calculating the similarity of the low-level features present in a video; high-level features must also be considered for better performance. Although much research has been conducted on video scene segmentation, most studies have failed to segment a video into scenes semantically. Thus, in this study, we propose a Deep-learning Semantic-based Scene-segmentation model (called DeepSSS) that uses image captioning to segment a video into scenes semantically. First, DeepSSS performs shot boundary detection by comparing colour histograms and then extracts keyframes using a maximum-entropy criterion. Second, for semantic analysis, it applies deep-learning-based image captioning to generate a semantic text description of each keyframe. Finally, by comparing and analysing the generated texts, it assembles the keyframes into scenes grouped under a semantic narrative. In this way, DeepSSS considers both low- and high-level features of videos to achieve a more meaningful scene segmentation. By training DeepSSS for caption generation on data sets from MS COCO and evaluating its semantic scene-segmentation results on data sets from TRECVid 2016, we demonstrate quantitatively that DeepSSS outperforms existing scene-segmentation methods based on shot boundary detection and keyframes. Moreover, we conducted experiments comparing scenes segmented by humans with those segmented by DeepSSS. The results verified that DeepSSS's segmentation resembled that of humans, a new kind of result enabled by semantic analysis and unattainable using low-level features of videos alone.
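The first step of the pipeline, shot boundary detection by colour-histogram comparison, can be sketched as follows. This is a minimal illustration of the general technique, not DeepSSS's actual implementation: the bin count, the L1 distance, and the threshold value are assumptions chosen for the example.

```python
import numpy as np


def frame_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Per-channel colour histogram of an H x W x C uint8 frame,
    concatenated across channels and normalised to sum to 1."""
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(frame.shape[-1])
    ]).astype(float)
    return hist / hist.sum()


def detect_shot_boundaries(frames, threshold: float = 0.5):
    """Flag a shot boundary wherever consecutive frames' histograms
    differ strongly (L1 distance above the threshold). Returns the
    indices of the first frame of each new shot."""
    boundaries = []
    prev = frame_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = frame_histogram(frames[i])
        if np.abs(cur - prev).sum() > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries
```

For example, a synthetic clip of five dark frames followed by five bright frames yields a single detected boundary at frame index 5; within each shot the histogram distance stays small, while the abrupt colour change across the cut produces a large distance. Keyframes would then be drawn from each detected shot (in DeepSSS, via a maximum-entropy criterion) before captioning.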