Multimedia has often been suggested as a tool for foreign language teaching and learning. In foreign language education, exciting new multimedia applications have appeared in recent years, especially for young learners, but many of these do not seem to produce the desired effect on language development. This article examines the theories of dual-coding (DCT) and multimedia learning (CTML) as the theoretical basis for developing more effective digital tools that use films and subtitling. Bilingual dual-coding is also presented as a means of indirect access from one language to another, and the different types of subtitling are explored with regard to their effectiveness, especially for short-term and long-term vocabulary recall and development. Finally, the article looks into some new alternative audiovisual tools, tailored towards vocabulary learning, that actively engage learners with films and subtitling.

The dual-coding theory (DCT) is a general theory of cognition that has been directly applied to literacy and language learning. The theory was proposed by Allan Paivio in 1971 and explains the powerful effects of mental imagery on the mind and memory. Paivio originally accounted for verbal and nonverbal influences on memory, but researchers soon began applying the theory in other areas of cognition [8][9][10]. According to this theory, a person can learn new material using verbal associations or visual imagery, but the combination of both is more effective [11]. Dual-coding theory states that the brain represents information using both verbal and visual codes [12], and that this information is processed along two distinct channels in the human mind, each creating its own representations of the information it processes. The two coding systems are the verbal system and the nonverbal/visual system.
These two coding systems interact, and this interaction results in better recall [10,13]. The verbal system stores linguistic units (such as text, sound, or even motor experience such as sign language) in sequential units called "logogens." The nonverbal/visual system processes visual units (such as symbols, pictures, or videos) and stores them in units called "imagens." The terms "logogen" and "imagen" refer, respectively, to representational units of verbal and nonverbal information that activate already existing mental words and images and can function unconsciously to improve cognitive performance [14]. According to Paivio [9] and Clark and Paivio [15], three different levels of processing take place within or between the verbal and nonverbal/visual systems: representational, referential, and associative processing. The two systems are linked together through referential connections (Figure 1).
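The architecture described above — two separate stores of logogens and imagens joined by referential connections, with dual-coded items recalled better than single-coded ones — can be made concrete with a toy sketch. This is purely illustrative: the class names, the additive recall bonus, and its 0.5 weight are hypothetical choices for the example, not part of Paivio's theory or the article.

```python
# Toy sketch of Paivio's dual-coding architecture. All names and the
# recall-strength weights are illustrative assumptions, not from the article.
from dataclasses import dataclass, field

@dataclass
class DualCodingMemory:
    verbal: set = field(default_factory=set)       # logogens (word-like units)
    visual: set = field(default_factory=set)       # imagens (image-like units)
    referential: set = field(default_factory=set)  # (logogen, imagen) links

    def encode_verbal(self, word):                 # representational processing
        self.verbal.add(word)

    def encode_visual(self, label):
        self.visual.add(label)

    def link(self, word, label):                   # referential processing
        if word in self.verbal and label in self.visual:
            self.referential.add((word, label))

    def recall_strength(self, word):
        """Dual-coded items get an additive advantage over verbal-only ones."""
        strength = 1.0 if word in self.verbal else 0.0
        strength += sum(0.5 for (w, _) in self.referential if w == word)
        return strength

mem = DualCodingMemory()
mem.encode_verbal("dog")
mem.encode_visual("picture-of-dog")
mem.link("dog", "picture-of-dog")     # referential connection between systems
mem.encode_verbal("justice")          # abstract word: verbal code only
print(mem.recall_strength("dog"))      # 1.5 — coded in both systems
print(mem.recall_strength("justice"))  # 1.0 — single-coded
```

The additive `recall_strength` mirrors the theory's core prediction: material encoded in both systems and linked referentially (like a subtitled word paired with on-screen imagery) is recalled better than material encoded verbally alone.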