This article reviews the research and evidence about multi-touch tables to provide an analysis of their key design features and capabilities and how these might relate to their use in educational settings to support collaborative learning. A typology of design features is proposed that synthesises the hardware and physical characteristics of the tables, so that these factors, and the analysis built on them, remain relevant over time, particularly in relation to the range of ways in which the tables may be used collaboratively in classrooms. The variability of features relating to software is also analysed and key pedagogic issues are identified. The aim underpinning this review is to relate the design of these technical features to key pedagogic issues concerning the use of digital technologies in classrooms, so as to provide a more robust basis for integrating the tables in ways that support or improve learning.
The potential of tabletops to enable simultaneous interaction and face-to-face collaboration can provide novel learning opportunities. Despite significant research on collaborative learning around tabletops, little attention has been paid to the integration of multi-touch surfaces into classroom layouts or to how this technology can be employed to facilitate teacher-learner dialogue and teacher-led activities across those surfaces. While most existing techniques focus on collaboration between learners, this work aims to gain a better understanding of the practical challenges that need to be considered when integrating multi-touch surfaces into classrooms. It presents a multi-touch interaction technique, called TablePortal, which enables teachers to manage and monitor collaborative learning on students' tables. Early observations of using the proposed technique within a novel classroom consisting of networked multi-touch surfaces are discussed. The aim was to explore the extent to which our design choices facilitate teacher-learner dialogue and assist the management of classroom activity.
We present the experiences from building a web-scale user modeling platform for optimizing display advertising targeting at Yahoo!. The platform described in this paper allows for per-campaign maximization of conversions representing purchase activities or transactions. Conversions directly translate to advertisers' revenue, and thus provide the most relevant metric of return on advertising investment. We focus on two major challenges: how to efficiently process the histories of billions of users on a daily basis, and how to build per-campaign conversion models given the extremely low conversion rates (compared to click rates in a traditional setting). We first present mechanisms for building web-scale user profiles in a daily incremental fashion. Second, we show how to reduce latency through in-memory processing of billions of user records. Finally, we discuss how to scale the number of campaigns and models handled by introducing an efficient labeling scheme that shares negative training examples across multiple campaigns.
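The abstract gives no implementation details, but the negative-sharing idea can be sketched. The following is a minimal, hypothetical illustration (all function and variable names are assumptions, not the paper's API) of per-campaign conversion models trained against a single shared pool of negative examples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_campaign_datasets(user_features, converters_by_campaign, shared_negative_ids):
    """Assemble per-campaign training sets that reuse a single shared negative pool.

    user_features         : dict user_id -> np.ndarray feature vector
    converters_by_campaign: dict campaign_id -> set of user_ids with a conversion
    shared_negative_ids   : user_ids sampled once and reused as negatives for every campaign
    """
    neg_X = np.stack([user_features[u] for u in shared_negative_ids])
    datasets = {}
    for campaign, converters in converters_by_campaign.items():
        pos_ids = [u for u in converters if u in user_features]
        if not pos_ids:
            continue
        pos_X = np.stack([user_features[u] for u in pos_ids])
        X = np.vstack([pos_X, neg_X])
        y = np.concatenate([np.ones(len(pos_ids)), np.zeros(len(shared_negative_ids))])
        datasets[campaign] = (X, y)
    return datasets

def train_per_campaign_models(datasets):
    """Fit one conversion model per campaign; negatives are shared, positives are not."""
    return {c: LogisticRegression(max_iter=1000).fit(X, y) for c, (X, y) in datasets.items()}
```

Because the shared negatives are sampled and featurized once, adding a campaign only requires collecting its (rare) positive conversions, which is one plausible reading of how the labeling scheme keeps the per-campaign cost low.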
In order to characterize and improve software architecture visualization practice, the paper derives and constructs a qualitative framework, with seven key areas and 31 features, for the assessment of software architecture visualization tools. The framework is derived by the application of the Goal Question Metric paradigm to information obtained from a literature survey and addresses a number of stakeholder issues. The evaluation is performed from multiple stakeholder perspectives and in various architectural contexts. Stakeholders can apply the framework to determine if a particular software architecture visualization tool is appropriate to a given task. The framework is applied in the evaluation of a collection of six software architecture visualization tools. The framework may also be used as a design template for a comprehensive software architecture visualization tool.
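The abstract does not enumerate the seven key areas or the 31 features, so the sketch below uses placeholder names purely to illustrate how a stakeholder might record per-feature ratings and roll them up by key area when applying a framework of this kind:

```python
def score_tool(tool_ratings, framework):
    """Roll per-feature ratings (0 = absent, 1 = partial, 2 = full) up into per-area averages."""
    scores = {}
    for area, features in framework.items():
        rated = [tool_ratings.get(f, 0) for f in features]
        scores[area] = sum(rated) / len(rated) if rated else 0.0
    return scores

# Placeholder areas and features for illustration only -- the paper's actual
# seven key areas and 31 features are not listed in the abstract above.
EXAMPLE_FRAMEWORK = {
    "stakeholder support": ["multiple perspectives", "task guidance"],
    "visual representation": ["layout clarity", "scalability"],
}
EXAMPLE_RATINGS = {"multiple perspectives": 2, "layout clarity": 1}

print(score_tool(EXAMPLE_RATINGS, EXAMPLE_FRAMEWORK))
# -> {'stakeholder support': 1.0, 'visual representation': 0.5}
```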
The current "state-of-the-art" in phonetic speaker recognition uses relative frequencies of phone n-grams as features for training speaker models and for scoring test-target pairs. Typically, these relative frequencies are computed from a simple 1-best phone decoding of the input speech. In this paper, we present results on the Switchboard-2 corpus, where we compare 1-best phone decodings versus lattice phone decodings for the purposes of performing phonetic speaker recognition. The phone decodings are used to compute relative frequencies of phone bigrams, which are then used as inputs for two standard phonetic speaker recognition systems: a system based on log-likelihood ratios (LLRs) [1,2], and a system based on support vector machines (SVMs) [3]. In each experiment, the lattice phone decodings achieve relative reductions in equal-error rate (EER) of between 31% and 66% below the EERs of the 1-best phone decodings. Our best phonetic system achieves an EER of 2.0% on 8-conversation training and 1.4% when combined with a GMM-based system.