In this study, we developed a digital game-based learning (DGBL) system, called the ToES, to foster students' creativity. Fifty-one fifth-grade students from two classes in a public school in Taipei, Taiwan, were recruited and consented to participate. Both classes consisted of students with mixed abilities studying a foundation unit entitled "Electrical Science" in a natural science course. One class was chosen as the experimental group (EG) and the other as the control group (CG). The goal of this study was to examine how different instructional strategies (i.e., traditional instruction and instruction using digital games) affected the students' creativity and their performance in manual skills. The analytical results indicated that the students' creativity and manual-skill performance displayed positive growth when they were involved in acquiring knowledge and resolving tasks in a DGBL environment, which fostered their creativity and facilitated the generation of flow experiences. Moreover, there were three interesting findings related to the use of DGBL: (1) the ToES was an effective learning tool for cultivating the students' creativity; (2) creativity had a positive effect on the students' performance of manual skills; and (3) the ToES accelerated the improvement of practical behaviors regarding manual skills.
Highlights
- We design a digital game for creativity, called the ToES.
- We demonstrate differences in creativity and manual skills between a traditional classroom and a digital game-based environment.
- Students achieved better learning performance in the DGBL environment.
- DGBL facilitates the generation of flow experience.
The initial boundary value problem for an integro-differential equation with nonlinear damping and source terms in a bounded domain is considered. By modifying the method in a work by Autuori et al. in 2010, we establish the nonexistence of global solutions when the initial energy is controlled by a critical value. This improves earlier results in the literature.
This paper describes progress towards a general framework for incorporating multimodal cues into a trainable system for automatically annotating user-defined semantic concepts in broadcast video. Models of arbitrary concepts are constructed by building classifiers in a score space defined by a pre-deployed set of multimodal models. Results show that annotation for user-defined concepts both inside and outside the pre-deployed set is competitive with our best video-only models on the TREC Video 2002 corpus. An interesting side result shows that speech-only models give performance comparable to our best video-only models for detecting visual concepts such as "outdoors", "face", and "cityscape".
In this paper, a general existence theorem on the generalized variational inequality problem GVI(T, C, φ) is derived by using our new versions of Nikaidô's coincidence theorem, for the case where the region C is noncompact and nonconvex, being merely a nearly convex set. Equipped with a kind of V^0-Karamardian condition, this general existence theorem contains some existing ones as special cases. Based on a Saigal condition, we also modify the main theorem to obtain another existence theorem on GVI(T, C, φ), which generalizes a result of Fang and Peterson.
A special relational structure, called a genealogical tree, is introduced; its social interpretation and geometric realizations are discussed. The numbers C_{n,k} of all abstract genealogical trees with exactly n+1 nodes and k leaves are found by means of enumeration of code words. For each n, the C_{n,k} form a partition of the n-th Catalan number C_n; that is, C_{n,1} + C_{n,2} + · · · + C_{n,n} = C_n.
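The stated partition of the Catalan numbers can be checked numerically. The closed form used below is an assumption on our part — the abstract gives no formula for C_{n,k} — but the classical count of plane trees with n+1 nodes and k leaves is the Narayana number N(n,k) = (1/n)·binom(n,k)·binom(n,k−1), whose row sums are exactly the Catalan numbers; this is a minimal sketch under that assumption.

```python
from math import comb

def narayana(n, k):
    # Narayana number N(n, k) = (1/n) * C(n, k) * C(n, k-1);
    # assumed here to equal C_{n,k}, the number of genealogical
    # trees with n+1 nodes and k leaves (not stated in the abstract).
    return comb(n, k) * comb(n, k - 1) // n

def catalan(n):
    # n-th Catalan number C_n = C(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

# Verify the partition identity C_{n,1} + ... + C_{n,n} = C_n
for n in range(1, 10):
    row = [narayana(n, k) for k in range(1, n + 1)]
    assert sum(row) == catalan(n)
```

For example, n = 3 gives the row [1, 3, 1], summing to the Catalan number C_3 = 5.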
Modeling visual concepts using supervised or unsupervised machine learning approaches is becoming increasingly important for video semantic indexing, retrieval, and filtering applications. Naturally, videos include multimodal data such as audio, speech, visual content, and text, which are combined to infer the overall semantic concepts. However, most research in the literature has been conducted within only a single domain. In this paper we propose an unsupervised technique that builds context-independent keyword lists for desired visual concept modeling using WordNet. Furthermore, we propose an Extended Speech-based Visual Concept (ESVC) model to reorder and extend the above keyword lists by supervised learning based on multimodal annotation. Experimental results show that the context-independent models achieve performance comparable to conventional supervised learning algorithms, and the ESVC model achieves about 53% and 28.4% improvement on two testing subsets of the TRECVID 2003 corpus over a state-of-the-art speech-based video concept detection algorithm.