Background: The evolving concepts of pervasive computing, ubiquitous computing, and ambient intelligence are increasingly influencing health care and medicine. Summarizing published research, this literature review provides an overview of recent developments and implementations of pervasive computing systems in health care. It also highlights some of the experiences reported from deployment processes. Methods: There is no clear definition of pervasive computing in the current literature, so specific inclusion criteria for selecting articles about relevant systems were developed. Searches were conducted in four scientific databases, alongside manual journal searches, for the period 2002 to 2006. The included articles present prototypes, case studies and pilot studies, clinical trials, and systems already in routine use. Results: The searches identified 69 articles describing 67 different systems. In a quantitative analysis, these systems were categorized by project status, health care setting, user group, improvement aim, and system features (i.e., component types, data gathering, data transmission, and system functions). The focus is on the types of systems implemented, their frequency of occurrence, and their characteristics. Qualitative analyses were performed on deployment issues, such as organizational and personnel issues, privacy and security issues, and financial issues. This paper thus provides comprehensive access to the literature of this emerging field by addressing application settings, system features, and deployment experiences. Conclusion: Both an overview and an analysis of the literature on a broad and heterogeneous range of systems are provided. Most systems are described in their prototype stage.
Deployment issues, such as implications for organization and personnel, privacy concerns, or financing, are rarely mentioned, though addressing them is regarded as decisive for transferring promising systems into regular operation. Further research is needed on the deployment of pervasive computing systems, including clinical studies, economic and social analyses, and user studies.
We conducted laboratory experiments to analyze the accuracy of three structured approaches (nominal groups, Delphi, and prediction markets) compared to traditional face-to-face meetings (FTF). We recruited 227 participants (11 groups per method) who had to solve a quantitative judgment task that did not involve distributed knowledge. The task consisted of ten factual questions requiring percentage estimates. While, overall, we found no statistically significant differences in accuracy between the four methods, the results differed somewhat at the level of individual questions. Delphi was as accurate as FTF for eight questions and outperformed FTF for two. By comparison, prediction markets did not outperform FTF for any of the ten questions and were inferior for three. The relative performance of nominal groups and FTF was mixed, and the differences were small. We also compared the results of the three structured approaches to prior individual estimates and to staticized groups. All three structured approaches were more accurate than participants' prior individual estimates, and Delphi was also more accurate than staticized groups. Nominal groups and prediction markets provided little additional value over a simple average of forecasts. In addition, we examined participants' perceptions of the group and the group process. Participants rated personal communication more favorably than computer-mediated interaction. Group interaction in FTF and nominal groups was perceived as highly cooperative and effective. Prediction markets were rated least favorably: their participants were least satisfied with the group process and perceived their method as the most difficult.
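The "staticized group" baseline used above is simply the unweighted mean of individuals' independent estimates, with no interaction between participants. A minimal sketch of that aggregation, using made-up numbers (the estimates and true value below are illustrative, not data from the study):

```python
def staticized_group(estimates):
    """A staticized group forecast: the plain mean of individual estimates."""
    return sum(estimates) / len(estimates)

def absolute_error(forecast, truth):
    """Accuracy measured as absolute deviation from the true value."""
    return abs(forecast - truth)

# Hypothetical prior estimates (in percent) from five individuals
# for one factual question, plus a hypothetical true value.
prior_estimates = [30.0, 45.0, 50.0, 55.0, 70.0]
true_value = 48.0

group_forecast = staticized_group(prior_estimates)
print(group_forecast)                              # 50.0
print(absolute_error(group_forecast, true_value))  # 2.0
```

A structured method such as Delphi "adds value" in this comparison only if its final group estimate deviates less from the true value than this simple average does.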
We conducted an online experiment to study people's perception of automated, computer-written news. Using a 2 × 2 × 2 design, we varied the article topic (sports, finance; within-subjects) and both the articles' actual and declared source (human-written, computer-written; between-subjects). Nine hundred eighty-six subjects rated two articles on credibility, readability, and journalistic expertise. Varying the declared source had small but consistent effects: subjects always rated articles declared as human-written more favorably, regardless of the actual source. Varying the actual source had larger effects: subjects rated computer-written articles as more credible and higher in journalistic expertise, but less readable. Subjects' perceptions did not differ across topics. The results provide conservative estimates of the favorability of computer-written news, which will likely increase further over time, and they endorse prior calls for establishing an ethics of computer-written news.
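The mixed 2 × 2 × 2 design described above can be sketched as follows: topic is a within-subjects factor (every subject sees both topics), while actual and declared source are between-subjects factors (each subject is assigned to one of four groups). This is an illustrative reconstruction of the factor structure only, not the study's actual assignment code:

```python
from itertools import product

# Within-subjects factor: every subject rates one article per topic.
topics = ["sports", "finance"]

# Between-subjects factors: each subject is assigned one combination.
actual_sources = ["human-written", "computer-written"]
declared_sources = ["human-written", "computer-written"]

# The four between-subjects groups of the design.
groups = list(product(actual_sources, declared_sources))
print(len(groups))  # 4

# Each subject in a group contributes two ratings (one per topic),
# on credibility, readability, and journalistic expertise.
for actual, declared in groups:
    for topic in topics:
        pass  # collect the three rating scales for this condition
```

Because declared source is manipulated independently of actual source, the design can separate labeling effects (what subjects are told) from content effects (what the article actually is).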