Clinical trials are a fundamental tool for evaluating the efficacy and safety of new drugs, medical devices, and other health system interventions. The traditional clinical trials system acts as a quality funnel for the development and implementation of new drugs, devices, and health system interventions. The concept of a "digital clinical trial" involves leveraging digital technology to improve participant access, engagement, trial-related measurements, and/or interventions and to enable concealed randomized intervention allocation; it has the potential to transform clinical trials and to lower their cost. In April 2019, the US National Institutes of Health (NIH) and the National Science Foundation (NSF) held a workshop that brought together experts in clinical trials, digital technology, and digital analytics to discuss strategies for implementing digital technologies in clinical trials while considering potential challenges. This position paper builds on that workshop to describe the current state of the art for digital clinical trials, including (1) defining and outlining the composition and elements of digital trials; (2) describing recruitment and retention using digital technology; (3) outlining data collection elements, including mobile health, wearable technologies, application programming interfaces (APIs), digital transmission of data, and consideration of regulatory oversight and guidance for data security, privacy, and remotely provided informed consent; (4) elucidating digital analytics and data science approaches that leverage artificial intelligence and machine learning algorithms; and (5) setting future priorities and strategies that should be addressed to successfully harness digital methods and the myriad benefits of such technologies for clinical research.
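To make the API-based data collection element concrete, here is a minimal sketch of pulling wearable measurements for a trial participant over HTTPS. The endpoint URL, token, and response fields are purely hypothetical illustrations, not any real trial platform's API:

```python
# Hedged sketch: fetching a participant's wearable step counts from a
# hypothetical REST endpoint. URL, token handling, and the JSON schema
# are assumptions for illustration only.
import requests

API_URL = "https://wearables.example.org/v1/participants/{pid}/steps"  # hypothetical

def fetch_daily_steps(participant_id: str, token: str) -> list[dict]:
    """Retrieve a participant's daily step counts over HTTPS."""
    resp = requests.get(
        API_URL.format(pid=participant_id),
        headers={"Authorization": f"Bearer {token}"},  # consent-gated access token
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["daily_steps"]  # assumed response schema
```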
A fundamental aspect of rating-based recommender systems is the observation process, the process by which users choose the items they rate. Nearly all research on collaborative filtering and recommender systems is founded on the assumption that missing ratings are missing at random. The statistical theory of missing data shows that incorrect assumptions about missing data can lead to biased parameter estimation and prediction. In a recent study, we presented strong evidence of violations of the missing-at-random condition in a real recommender system. In this paper we present the first study of the effect of non-random missing data on collaborative ranking and extend our previous results on the impact of non-random missing data on collaborative prediction.
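A minimal simulation sketch (not the paper's model) of why the missing-at-random assumption matters: when the probability of observing a rating depends on the rating's value, the naive mean of observed ratings is biased. The rating distribution and observation probabilities below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# True 1-5 star ratings for every user-item pair (assumed uniform here).
true_ratings = rng.integers(1, 6, size=100_000)

# MNAR observation: higher ratings are more likely to be reported,
# mimicking users who mostly rate items they liked.
p_observe = np.array([0.0, 0.02, 0.05, 0.10, 0.25, 0.40])  # index = rating value
observed = rng.random(true_ratings.size) < p_observe[true_ratings]

print("true mean rating:      ", true_ratings.mean())            # ~3.0
print("observed mean (biased):", true_ratings[observed].mean())  # ~4.2
```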
Mobile phones have evolved from communication devices into indispensable accessories with access to real-time content. The increasing reliance on dynamic content comes at the cost of increased latency to pull the content from the Internet before the user can start using it. While prior work has explored parts of this problem, it ignores the bandwidth costs of prefetching, incurs significant training overhead, needs several sensors to be turned on, and does not consider the practical systems issues that arise from the limited background processing capability supported by mobile operating systems. In this paper, we make app prefetching practical on mobile phones. Our contributions are twofold. First, we design an app prediction algorithm, APPM, that requires no prior training, adapts to usage dynamics, predicts not only which app will be used next but also when it will be used, and provides high accuracy without requiring additional sensor context. Second, we perform parallel prefetching on screen unlock, a mechanism that leverages the benefits of prediction while operating within the constraints of mobile operating systems. Our experiments, conducted on long-term traces, live deployments on the Android Play Market, and user studies, show that we outperform prior approaches to predicting app usage while also providing practical ways to prefetch application content on mobile phones.
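As a simplified stand-in for the kind of online, training-free next-app prediction the abstract describes (not the actual APPM algorithm), an order-1 Markov model updated after every launch needs no offline training and adapts as usage drifts:

```python
from collections import defaultdict, Counter

class NextAppPredictor:
    """Toy online next-app predictor; a simplified stand-in, not APPM."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev app -> next-app counts
        self.prev = None

    def record_launch(self, app: str) -> None:
        """Update transition counts online; adapts to usage dynamics."""
        if self.prev is not None:
            self.transitions[self.prev][app] += 1
        self.prev = app

    def predict_next(self, k: int = 3) -> list[str]:
        """Top-k candidate apps to prefetch at the next screen unlock."""
        if self.prev is None or not self.transitions[self.prev]:
            return []
        return [app for app, _ in self.transitions[self.prev].most_common(k)]

p = NextAppPredictor()
for app in ["mail", "news", "mail", "news", "maps", "mail", "news"]:
    p.record_launch(app)
print(p.predict_next())  # ['mail', 'maps'] -- candidates observed after 'news'
```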
We present a method for joint analysis and synthesis of geometrically diverse 3D shape families. Our method first learns part-based templates such that an optimal set of fuzzy point and part correspondences is computed between the shapes of an input collection based on a probabilistic deformation model. In contrast to previous template-based approaches, the geometry and deformation parameters of our part-based templates are learned from scratch. Based on the estimated shape correspondences, our method also learns a probabilistic generative model that hierarchically captures the statistical relationships of corresponding surface point positions and parts, as well as their existence in the input shapes. A deep learning procedure is used to capture these hierarchical relationships. The resulting generative model is used to produce control point arrangements that drive shape synthesis by combining and deforming parts from the input collection. The generative model also yields compact shape descriptors that are used to perform fine-grained classification. Finally, it can also be coupled with the probabilistic deformation model to further improve shape correspondence. We provide qualitative and quantitative evaluations of our method for shape correspondence, segmentation, fine-grained classification, and synthesis. Our experiments demonstrate correspondence and segmentation results superior to those of previous state-of-the-art approaches.
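A toy sketch of what "fuzzy" point correspondence can look like, assuming (as a simplification of the paper's probabilistic deformation model) isotropic Gaussian noise between template control points and shape points, as in one E-step of an EM fit:

```python
import numpy as np

def fuzzy_correspondence(template_pts, shape_pts, sigma=0.1):
    """Return a (T, S) matrix of soft correspondence probabilities."""
    # Squared distances between every template point and every shape point.
    d2 = ((template_pts[:, None, :] - shape_pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))          # Gaussian affinity (assumed)
    return w / w.sum(axis=1, keepdims=True)     # normalize over shape points

rng = np.random.default_rng(1)
template = rng.random((4, 3))                        # 4 template control points in 3D
shape = template + rng.normal(0, 0.05, (4, 3))       # a slightly deformed shape
print(fuzzy_correspondence(template, shape).round(2))
```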