A variant of nearest-neighbor (NN) pattern classification and supervised learning by learning vector quantization (LVQ) is described. The decision surface mapping method (DSM) is a fast supervised learning algorithm and is a member of the LVQ family of algorithms. A relatively small number of prototypes are selected from a training set of correctly classified samples. The training set is then used to adapt these prototypes to map the decision surface separating the classes. This algorithm is compared with NN pattern classification, learning vector quantization, and a two-layer perceptron trained by error backpropagation. When the class boundaries are sharply defined (i.e., no classification error in the training set), the DSM algorithm outperforms these methods with respect to error rates, learning rates, and the number of prototypes required to describe class boundaries.
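To make the prototype-adaptation idea concrete, below is a minimal Python sketch of a DSM-style update step, assuming a Euclidean nearest-neighbour rule, a fixed learning rate `alpha`, and at least one prototype per class; the function and constants are illustrative, not the paper's exact procedure.

```python
import numpy as np

def dsm_update(prototypes, labels, x, y, alpha=0.1):
    """One DSM-style step on a training sample (x, y).

    Adaptation happens only on misclassification: the offending
    wrong-class prototype is pushed away from x, and the nearest
    prototype of the correct class is pulled toward x.
    (Sketch under assumed conventions, not the published algorithm verbatim.)
    """
    d = np.linalg.norm(prototypes - x, axis=1)   # distances to all prototypes
    nearest = int(np.argmin(d))
    if labels[nearest] == y:
        return prototypes                        # correctly classified: no change
    # Push the misclassifying prototype away from the sample.
    prototypes[nearest] -= alpha * (x - prototypes[nearest])
    # Pull the nearest correct-class prototype toward the sample.
    same = np.where(labels == y)[0]              # assumes class y has a prototype
    target = same[np.argmin(d[same])]
    prototypes[target] += alpha * (x - prototypes[target])
    return prototypes
```

Iterating this step over the training set (typically with a decaying `alpha`) is what lets a small prototype set trace the decision surface itself rather than the class-conditional densities.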
<p class="abstract"><span lang="EN-US">In recent years, universities have been under increased pressure to adopt e-learning practices for teaching and learning. In particular, the emphasis has been on learning management systems (LMSs) and associated collaboration tools to provide opportunities for sharing knowledge, building a community of learners, and supporting higher order learning and critical thinking through conversation and collaboration. Due to the greater level of</span><span lang="EN-GB"> data continuity, reliability, and privacy that LMSs can provide compared to the available free applications, LMSs are still the central platform for many universities to deliver e-learning. Therefore, it is vital to investigate the LMS structure requisites that affect user engagement. This paper focuses on the important LMS design factors that influence user engagement with e-learning tools within LMSs. Results were extracted from 74 interviews about Blackboard with students and lecturers within a major Australian university. </span>A user-friendly structure, avoidance of too many tools and links, support for privacy and anonymous posting, and more customisable student-centred tools were identified as LMS design factors that affect user engagement<span lang="EN-US">.</span></p>
This paper gives an overview of the INEX 2008 Ad Hoc Track. The main goals of the Ad Hoc Track were twofold. The first goal was to investigate the value of the internal document structure (as provided by the XML markup) for retrieving relevant information. This is a continuation of INEX 2007; for this reason, the retrieval results are liberalized to arbitrary passages, and measures were chosen to fairly compare systems retrieving elements, ranges of elements, and arbitrary passages. The second goal was to compare focused retrieval to article retrieval more directly than in earlier years. For this reason, standard document retrieval rankings were derived from all runs and evaluated with standard measures. In addition, a set of queries targeting Wikipedia was derived from a proxy log, and the runs were also evaluated against the clicked Wikipedia pages. The INEX 2008 Ad Hoc Track featured three tasks. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was needed. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) for each article was needed. We discuss the results for the three tasks and examine the relative effectiveness of element and passage retrieval. This is examined in the context of content-only (CO, or keyword) search as well as content-and-structure (CAS, or structured) search. Finally, we look at the ability of focused retrieval techniques to rank articles, using standard document retrieval techniques, both against the judged topics and against queries and clicks from a proxy log.
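As one concrete reading of the Focused Task's non-overlap requirement, here is a hypothetical Python sketch that enforces it on a ranked run; modelling passages as `(doc_id, start, end)` offsets is an assumption for illustration, not the INEX submission format.

```python
def remove_overlaps(ranked):
    """Keep a passage only if it does not overlap any higher-ranked
    passage already kept from the same article (rank order preserved)."""
    kept = []
    for doc, start, end in ranked:
        no_clash = all(d != doc or end <= s or start >= e
                       for d, s, e in kept)
        if no_clash:
            kept.append((doc, start, end))
    return kept

# Hypothetical run: the second passage overlaps the top-ranked one in "a1".
run = [("a1", 0, 100), ("a1", 50, 150), ("a2", 10, 40), ("a1", 120, 200)]
print(remove_overlaps(run))
# [('a1', 0, 100), ('a2', 10, 40), ('a1', 120, 200)]
```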
A Test Collection for Evaluating Retrieval of Studies for Inclusion in Systematic Reviews (Harrisen Scells). This paper introduces a test collection for evaluating the effectiveness of different methods used to retrieve research studies for inclusion in systematic reviews. Systematic reviews appraise and synthesise studies that meet specific inclusion criteria. Systematic reviews intended for a biomedical science audience use Boolean queries with many, often complex, search clauses to retrieve studies; these are then manually screened to determine eligibility for inclusion in the review. This process is expensive and time consuming. The development of systems that improve retrieval effectiveness will have an immediate impact by reducing the complexity and resources required for this process. Our test collection consists of approximately 26 million research studies extracted from the freely available MEDLINE database, 94 review (query) topics extracted from Cochrane systematic reviews, and corresponding relevance assessments. Tasks for which the collection can be used for information retrieval system evaluation are described, and the use of the collection to evaluate common baselines within one such task is demonstrated. The test collection is available at https://github.com/ielab/SIGIR2017-PICO-Collection.
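Because the queries here are Boolean, a run is naturally a set rather than a ranking, so set-based measures fit the evaluation described above. The following sketch is a hedged illustration; the PMID-style identifiers are assumptions, not the collection's actual file layout.

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall for one review (query) topic."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: one relevant study found, one missed entirely.
p, r = precision_recall(retrieved=["PMID1", "PMID2", "PMID3"],
                        relevant=["PMID2", "PMID4"])
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.33 recall=0.50
```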
The PICO process is a technique used in evidence-based practice to frame and answer clinical questions. It involves structuring the question around four types of clinical information: Population, Intervention, Control or comparison, and Outcome. The PICO framework is used extensively in the compilation of systematic reviews as the means of framing research questions. However, when a search strategy (comprising a large Boolean query) is formulated to retrieve studies for inclusion in the review, PICO is often ignored. This paper evaluates how PICO annotations can be applied and integrated into retrieval to improve the screening of studies for inclusion in systematic reviews. The task is to increase precision while maintaining the high level of recall essential to ensure systematic reviews are representative and unbiased. Our results show that restricting the search strategies to match studies using PICO annotations improves precision; however, recall is slightly reduced compared to the non-PICO baseline. This can lead to both time and cost savings when compiling systematic reviews.
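The restriction step the paper describes can be pictured as a post-retrieval filter over the Boolean result set. The sketch below is hypothetical: the annotation structure (a set of PICO element tags per study) and the coverage rule are assumptions for illustration, not the paper's implementation.

```python
def pico_filter(retrieved, pico_annotations, required=frozenset({"P", "I", "O"})):
    """Keep only retrieved studies whose PICO annotations cover all
    required elements; requiring more elements raises precision at
    some cost in recall, matching the trade-off described above."""
    return [sid for sid in retrieved
            if required <= pico_annotations.get(sid, set())]

# Hypothetical annotations: s2 lacks Intervention/Outcome tags and is dropped.
annotations = {"s1": {"P", "I", "C", "O"}, "s2": {"P"}, "s3": {"P", "I", "O"}}
print(pico_filter(["s1", "s2", "s3"], annotations))  # ['s1', 's3']
```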