Adaptive Educational Hypermedia Systems (AEHS) play a crucial role in supporting adaptive learning and substantially outperform learner-controlled systems. AEHS page indexing and hyperspace rely mostly on navigation supports, which provide learners with a user-friendly interactive learning environment. These features give the systems a unique ability to adapt to learners' preferences. However, obtaining timely and accurate information for the adaptive decision-making process remains a challenge, because the system's understanding of each individual learner is dynamic: learners' learning styles change spontaneously, which makes it hard for system developers to integrate learning objects with learning styles in real time. Previous studies have applied multi-level navigation supports to solve this problem, but that approach undermines learning motivation by imposing time and workload overhead on learners. To address this challenge, this study proposes a bioinformatics-based adaptive navigation support triggered by alternations in learners' motivational states in real time. An eye-tracking sensor and adaptive time-locked Learning Objects (LOs) were used; learners' pupil-size dilation, reading time, and reaction time served as inputs for the adaptation process and its evaluation. The results show that the proposed approach improved the AEHS adaptive process and increased learners' performance by up to 78%.
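The adaptation mechanism the abstract describes — selecting a time-locked Learning Object from pupil-size dilation and reading time — can be sketched as a simple threshold rule. All thresholds, baselines, and function names below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of pupil-dilation-triggered LO selection.
# Baseline pupil size, expected reading time, and the load
# thresholds are invented for illustration only.

def estimate_load(pupil_dilation_mm, reading_time_s,
                  baseline_mm=3.5, expected_time_s=60.0):
    """Combine normalized pupil dilation and reading-time overrun
    into a rough cognitive-load score (0 = no measured load)."""
    dilation_ratio = max(pupil_dilation_mm - baseline_mm, 0.0) / baseline_mm
    time_ratio = max(reading_time_s - expected_time_s, 0.0) / expected_time_s
    return dilation_ratio + time_ratio

def select_learning_object(load, low=0.2, high=0.6):
    """Map the load score to a time-locked Learning Object variant."""
    if load < low:
        return "advanced-LO"      # learner is comfortable: richer content
    elif load < high:
        return "standard-LO"      # keep the current navigation support
    return "simplified-LO"        # high load: reduce navigation depth

print(select_learning_object(estimate_load(4.2, 95.0)))  # prints "simplified-LO"
```

In a real deployment the inputs would stream from the eye-tracking sensor rather than being passed as literals.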
An optimal learning environment depends heavily on aptitude-treatment interaction. Although advances in Information and Communication Technologies (ICT) support knowledge-sharing platforms, including e-learning and multimedia, the design of learning content on such platforms relies on aptitude-treatment interaction to sustain a learner-centric environment. Achieving an optimal learning environment on e-learning platforms is still a challenge, however, because learners vary in their skills and abilities with respect to both the learning process and the learning content, which makes it difficult to monitor learners' cognitive states and the quality of learning content. To overcome these difficulties, this study proposes a learner-centric approach based on metacognitive experiences: the e-learning Prior Knowledge Assessment System (ePKAS). ePKAS supports the detection and evaluation of learners' prior-knowledge profiles, which in turn enables monitoring of learners' cognitive states and their adaptation into e-learning platforms based on visual contact. The study investigated students' reactions to multimedia content in light of their past experiences. The results show that students respond more attentively and accurately (93%) to learning content that is closely related to their past experiences. The scope of this study is limited to visual contact, in order to support the inclusion of people with hearing impairment in e-learning platforms.
Real-time communication between deaf and hearing people is still a barrier that isolates deaf people from the hearing world. Over ninety percent of deaf children are born to hearing parents, yet most of them can only learn to communicate in sign language at school. One reason is that hearing parents have neither enough time nor support to learn sign language in order to communicate with and support their children. Not surprisingly, deaf children struggle in oral-only education. Since many hearing pupils do not even know that sign language exists, they cannot communicate directly with the deaf without a sign-language interpreter. Therefore, to enable face-to-face conversation between deaf and hearing people, it is important not only to sustain real-time conversation between the deaf and their hearing counterparts but also to equip the hearing with the basics of sign language. However, speech-to-sign conversion remains a challenge due to dialect and sign-language variation, speech-utterance variability, and the lack of a written form for sign language. This paper proposes a solution named Face-to-Face Conversation between Deaf and Hearing people (FFCDH) to address these issues. FFCDH supports real-time conversation and also allows the hearing to learn signs with the same meaning the deaf understand. Moreover, FFCDH records the speech of the hearing and converts it into signs for the deaf. It also lets deaf users adjust the volume of their speech by displaying their voice volume. The performance of the system in supporting the deaf was evaluated on a real test-bed. The obtained results show that English and Japanese daily-conversation phrases can be recognized with over 90 percent accuracy on average, and the average coherence of simple content is over 94 percent. However, when the speech includes long and complex phrases, the average accuracy and coherence are slightly lower, because the system cannot comprehend long, complex context at a larger scope.
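The core mapping step of a speech-to-sign pipeline like the one FFCDH describes can be sketched as a dictionary lookup from recognized words to sign clips, with fingerspelling as a fallback. The dictionary contents and clip paths below are placeholder assumptions; a real system would feed the transcript from a speech recognizer:

```python
# Illustrative word-to-sign mapping stage of a speech-to-sign pipeline.
# SIGN_DICTIONARY entries are invented examples, not FFCDH data.

SIGN_DICTIONARY = {
    "hello": "sign_clips/hello.mp4",
    "thank": "sign_clips/thank.mp4",
    "you": "sign_clips/you.mp4",
}

def speech_to_signs(transcript: str) -> list[str]:
    """Map each recognized word to a sign-language video clip;
    words missing from the dictionary fall back to fingerspelling."""
    clips = []
    for word in transcript.lower().split():
        clips.append(SIGN_DICTIONARY.get(word, f"fingerspell:{word}"))
    return clips

print(speech_to_signs("Hello thank you"))
# prints ['sign_clips/hello.mp4', 'sign_clips/thank.mp4', 'sign_clips/you.mp4']
```

This word-by-word design also hints at why long, complex phrases degrade accuracy: the lookup carries no sentence-level context, so grammar that differs between the spoken and signed languages is lost.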
As intelligent systems' demand for human–automation interaction increases, the need to adapt to learners' cognitive traits in adaptive educational hypermedia systems (AEHS) has grown dramatically. AEHS utilize learners' cognitive processes to attain effective human–automation interaction in their adaptive processes. However, obtaining accurate cognitive traits for the AEHS adaptation process has been a challenge, because it is difficult to determine to what extent such traits can be captured by system functionalities. Hence, this study explored the correlation among learners' pupil-size dilation, reading time, and endogenous blinking rate when using AEHS, so as to enable cognitive-load estimation in support of the AEHS adaptive process. Using an eye-tracking sensor, the study found correlations among learners' pupil-size dilation, reading time, and endogenous blinking rate. The results show that endogenous blinking rate, pupil size, and reading time are not only reliable AEHS parameters for cognitive-load measurement but can also support human–automation interaction at large.
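The correlation analysis the abstract reports — Pearson correlation among pupil-size dilation, reading time, and blink rate — can be sketched with a standard-library implementation. The sample measurements below are invented for illustration and are not the study's data:

```python
# Pearson's r among pupil dilation, reading time, and blink rate.
# The five "learner" samples are fabricated illustrative values.

import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pupil = [3.1, 3.4, 3.9, 4.2, 4.6]   # pupil diameter, mm
reading = [42, 48, 60, 71, 80]      # reading time, seconds
blinks = [18, 15, 12, 9, 7]         # endogenous blinks per minute

print(round(pearson_r(pupil, reading), 3))  # strong positive correlation
print(round(pearson_r(pupil, blinks), 3))   # strong negative correlation
```

A pattern like this (dilation rising with reading time while blink rate falls) is the kind of joint signal that would let the three measures serve together as a cognitive-load estimate.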
Software is becoming a key ingredient of most modern systems and devices that support important business processes in our society. It is an essential component of many embedded applications that control sensitive systems such as air-traffic control, rockets, automated banking (ATM), and security (radar) systems. Failure of these systems can result in severe damage [7]. It is therefore clear that software testing technologies are essential for software testers. Although several software testing methodologies and techniques exist to support software quality, finding relevant parameters for their applicability conditions remains an open question. This paper attempts to shed light on decision criteria by describing various software testing techniques and their distinctions. The findings show that these methodologies and techniques do not directly influence software quality; rather, quality depends on the choices made in selecting the relevant methodology or technique for the application.
Information security plays a great role in protecting organizational assets. Software and information-system developers take information security into serious consideration, particularly during system and software development. Several modelling languages can be used to architect the security features of information systems with respect to the Information System Security Risk Management (ISSRM) domain model. Malicious Activity Diagrams, an extension of the Unified Modeling Language (UML), have been widely used by developers to model the security features of various information systems [2]. However, Malicious Activity Diagrams cannot cover all the features of ISSRM [11]. Due to these limitations, this study proposes new additional features that will enable Malicious Activity Diagrams to cover the remaining security concepts of ISSRM (such as the security constraint of static information / security criterion; see Figure 6).