Predicting promising academic papers is useful to a variety of parties, including researchers, universities, scientific councils, and policymakers. Researchers may use such predictions to narrow down their reading lists and focus on what will become important, and policymakers may use them to identify rising fields for a more strategic distribution of resources. This paper proposes a novel technique to predict a paper's future impact (i.e., its number of citations) using temporal and topological features derived from citation networks. We use a behavioral modeling approach in which papers are clustered by the temporal change in the citations they receive, and new papers are evaluated accordingly. Within each cluster, we then model impact prediction as a regression problem whose objective is to predict the number of citations a paper will receive in the near or far future, given its early citation performance. Empirical evaluations on data from several well-known citation databases show that the proposed framework performs significantly better than state-of-the-art approaches.
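The two-stage pipeline the abstract describes (cluster papers by citation trajectory, then regress within each cluster) can be sketched as follows. This is a hypothetical illustration on synthetic data: the clustering method, regression model, feature set, and all numbers below are placeholder assumptions, not the paper's actual algorithms.

```python
import numpy as np

# Hypothetical sketch: (1) cluster papers by early citation trajectories,
# (2) fit a per-cluster regressor mapping early citations to a later count.
# All data and model choices are synthetic placeholders.

rng = np.random.default_rng(0)

# Synthetic trajectories: citations received in years 1-3 for 100 papers,
# drawn from a "slow-growth" and a "fast-growth" behavioral group.
slow = rng.poisson([1, 2, 3], size=(50, 3)).astype(float)
fast = rng.poisson([10, 20, 30], size=(50, 3)).astype(float)
X = np.vstack([slow, fast])
# Target: citation count at year 5 (synthetic, correlated with early growth).
y = 2.0 * X.sum(axis=1) + rng.normal(0.0, 1.0, size=len(X))

# Stage 1: a tiny k-means over trajectories (k = 2, deterministic init).
centers = X[[0, 50]].copy()
for _ in range(20):
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])

# Stage 2: per-cluster least-squares regression (with an intercept column).
models = {}
for j in range(2):
    A = np.hstack([X[labels == j], np.ones((np.sum(labels == j), 1))])
    models[j], *_ = np.linalg.lstsq(A, y[labels == j], rcond=None)

# A new paper is assigned to the nearest cluster, whose regressor predicts it.
new = np.array([12.0, 22.0, 28.0])
c = int(np.argmin(((new - centers) ** 2).sum(-1)))
pred = np.append(new, 1.0) @ models[c]
```

The design point this illustrates: fitting one regressor per behavioral cluster lets papers with very different citation dynamics get different prediction functions, rather than forcing one global model.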
Gliding a finger on a touchscreen to reach a target, that is, touch exploration, is a common selection method for blind screen-reader users. This paper investigates their gliding behavior and presents a model of their motor performance. We discovered that the gliding trajectories of blind people mix two strategies: 1) ballistic movements with iterative corrections relying on non-visual feedback, and 2) multiple sub-movements separated by stops and concatenated until the target is reached. Based on this finding, we propose the mixture pointing model, which relates movement time to target distance and width. The model outperforms extant models, improving R² from 0.65 (Fitts' law) to 0.76, and is superior under cross-validation and information criteria. The model advances the understanding of gliding-based target selection and serves as a tool for designing interface layouts for screen-reader-based touch exploration. CCS CONCEPTS: • Human-centered computing → Pointing; Accessibility theory, concepts and paradigms.
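For reference, the Fitts' law baseline that the abstract's R² = 0.65 figure refers to (in its common Shannon formulation) predicts movement time MT from target distance D and width W, with empirically fitted constants a and b:

```latex
% Fitts' law (Shannon formulation), the R^2 = 0.65 baseline above:
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

The mixture pointing model takes the same inputs (D and W) but, per the abstract, accounts for the two observed gliding strategies; its exact functional form is given in the paper itself and is not reproduced here.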
People with low vision who use screen magnifiers to interact with computing devices find it very challenging to interact with dynamically changing digital content such as videos, since they do not have the luxury of time to manually pan the magnifier lens to different regions of interest (ROIs) or zoom into these ROIs before the content changes across frames. In this paper, we present SViM, a first-of-its-kind screen-magnifier interface for such users that leverages advances in computer vision, particularly video saliency models, to identify salient ROIs in videos. SViM's interface allows users to zoom in and out of any point of interest and switch between ROIs via mouse clicks, and it provides assistive panning with the added flexibility of letting the user explore regions of the video beyond the ROIs identified by SViM. Subjective and objective evaluations in a user study with 13 low-vision screen-magnifier users revealed that participants had a better overall user experience with SViM than with extant screen magnifiers, indicative of its promise and potential for making videos accessible to low-vision screen-magnifier users. CCS CONCEPTS: • Human-centered computing → Human computer interaction (HCI); Accessibility systems and tools; User studies.
With Microsoft's launch of the Kinect in 2010 and the release of the Kinect SDK in 2011, numerous applications and research projects exploring new ways of human-computer interaction have been enabled. Gesture recognition is a technology often used in human-computer interaction applications. Dynamic time warping (DTW) is a template-matching algorithm and one of the techniques used in gesture recognition. To recognize a gesture, DTW warps a time sequence of joint positions onto reference time sequences and produces a similarity value. However, not all body joints are equally important in computing the similarity of two sequences. We propose a weighted DTW method that weights joints by optimizing a discriminant ratio. Finally, we demonstrate the recognition performance of our weighted DTW relative to conventional DTW and the state of the art.
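A joint-weighted DTW of the kind the abstract describes can be sketched as below. This is an illustrative implementation, not the authors' exact formulation: the per-joint weights `w`, which the paper learns by optimizing a discriminant ratio, are assumed to be given here, and the discriminant-ratio optimization itself is omitted.

```python
import numpy as np

def weighted_dtw(seq_a, seq_b, w):
    """Weighted DTW distance between two joint-position sequences.

    seq_a: (n, J, 3) array of n frames with J 3-D joint positions.
    seq_b: (m, J, 3) reference sequence.
    w:     (J,) per-joint weights (assumed given; the paper learns these).
    """
    n, m = len(seq_a), len(seq_b)
    # cost[i, j] = minimal accumulated cost aligning first i frames of seq_a
    # with first j frames of seq_b; standard DTW recurrence.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Weighted per-frame cost: sum over joints of w_k * ||a_k - b_k||.
            d = np.sum(w * np.linalg.norm(seq_a[i - 1] - seq_b[j - 1], axis=-1))
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy check with a single joint: identical sequences align at zero cost,
# while a shifted copy yields a strictly positive distance.
a = np.zeros((5, 1, 3))
w = np.ones(1)
dist_same = weighted_dtw(a, a.copy(), w)
dist_diff = weighted_dtw(a, a + 1.0, w)
```

In a recognizer, a candidate gesture would be compared against each reference template with `weighted_dtw` and assigned the label of the nearest one; up-weighting the joints that discriminate between gesture classes is what the paper's discriminant-ratio optimization aims for.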
PDF forms are ubiquitous. Businesses large and small, government agencies, health and educational institutions, and many others have embraced PDF forms, which people use to provide information to these entities. But people who are blind frequently find it very difficult to fill out PDF forms with screen readers, the standard assistive software they use to interact with computer applications. Firstly, many of these forms are not even accessible, as they are non-interactive and hence not editable on a computer. Secondly, even when they are interactive, it is not always easy to associate the correct labels with the form fields, either because the labels are not meaningful or because the screen reader's sequential reading order misses the visual cues that associate the correct labels with the fields. In this paper, we present a solution to the accessibility problem of PDF forms. We leverage the fact that many people with visual impairments are familiar with web browsing and proficient at filling out web forms. We therefore create a web-form layer over the PDF form via a high-fidelity transformation process that attempts to preserve all spatial relationships among the PDF elements, including form fields, their labels, and the textual content. Blind people interact only with the web form, and the filled-out web form fields are transparently transferred to the corresponding fields in the PDF form. An optimization algorithm automatically adjusts the length and width of the PDF fields to accommodate field data of arbitrary size. This ensures that the filled-out PDF document has no truncated form-field values and, additionally, that it is readable. A user study with fourteen users with visual impairments revealed that they were able to populate more form fields than with the status quo, and their self-reported user experience with the proposed interface was superior to the status quo.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations, citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.