This paper presents a blended learning approach and a study evaluating instruction in a software engineering-related course unit that is part of an undergraduate engineering degree program in computing. In the past, the course unit had a lecture-based format. In view of student underachievement and the high course unit dropout rate, a distance-learning system was deployed, where students were allowed to choose between a distance-learning approach driven by a moderate constructivist instructional model and a blended-learning approach. The results of this experience are presented, with the aim of showing the effectiveness of the teaching/learning system deployed compared to the lecture-based system previously in place. The grades earned by students under the new system, following the distance-learning and blended-learning courses, are compared statistically with the grades attained in earlier years under the traditional face-to-face classroom (lecture-based) model.
Agent-oriented software is an established research field. For this reason, it is important to develop comprehensive measures of excellence to evaluate this software. No set of measures defining the overall quality of an agent has been developed to date. Some attempts at evaluating agent quality have addressed certain agent features, such as the development process. We believe that agent quality can be determined as a function of well-defined characteristics. Evaluated using appropriate measures, these characteristics will assure an agent's reliability and correct functionality. This paper deals with an important agent feature, namely, autonomy. Autonomy is considered to be the agent's ability to operate independently, without the need for human guidance or the intervention of external elements. The article proposes a set of measures for evaluating the autonomy of an agent and presents a case study analysing the behaviour of these measures.
Web accessibility for people with disabilities is a highly visible area of research in the field of ICT accessibility, including many policy activities across many countries. The commonly accepted guidelines for web accessibility (WCAG 1.0) were published in 1999 and have been extensively used by designers, evaluators and legislators. W3C-WAI published a new version of these guidelines (WCAG 2.0) in December 2008. One of the main goals of WCAG 2.0 was testability; that is, each success criterion should be either machine testable or reliably human testable. In this paper we present an educational experiment performed during an intensive web accessibility course. The goal of the experiment was to assess the testability of the 25 level-A success criteria of WCAG 2.0 by beginners. To do this, the students had to manually evaluate the accessibility of the same web page. The result was that only eight success criteria could be considered reliably human testable when the evaluators were beginners. We also compare our experiment with a similar study published recently. Our work is not a conclusive experiment, but it does suggest some parts of WCAG 2.0 to which special attention should be paid when training accessibility evaluators.
Expert systems are built from knowledge traditionally elicited from the human expert, and it is precisely this knowledge elicitation that is the bottleneck in expert system construction. On the other hand, a data mining system, which extracts knowledge automatically, needs expert guidance on the successive decisions to be made in each of the system phases. In this context, expert knowledge and data mining discovered knowledge can cooperate, maximizing their individual capabilities: data mining discovered knowledge can be used as a complementary source of knowledge for the expert system, whereas expert knowledge can be used to guide the data mining process. This article summarizes different examples of systems where there is cooperation between expert knowledge and data mining discovered knowledge and reports our experience of such cooperation gathered from a medical diagnosis project we developed, called Intelligent Interpretation of Isokinetics Data. From that experience, a series of lessons were learned throughout project development. Some of these lessons are generally applicable and others pertain exclusively to certain project types.