Research Methods and Procedures: Participants, 2862 eligible overweight and obese (BMI = 27 to 40 kg/m²) members from four regions of Kaiser Permanente's integrated health care delivery system, were randomized to receive either a tailored expert system or information-only Web-based weight management materials. Weight change and program satisfaction were assessed by self-report through an Internet-based survey at 3- and 6-month follow-up periods. Results: Significantly greater weight loss at follow-up was found among participants assigned to the tailored expert system than among those assigned to the information-only condition. Subjects in the tailored expert system lost a mean of 3 ± 0.3% of their baseline weight, whereas subjects in the information-only condition lost a mean of 1.2 ± 0.4% (p < 0.0004). Participants were also more likely to report that the tailored expert system was personally relevant, helpful, and easy to understand. Notably, 36% of enrollees were African-American, with enrollment rates higher than the general proportion of African Americans in any of the study regions. Discussion: The results of this large, randomized controlled trial show the potential benefit of the Web-based tailored expert system for weight management compared with a Web-based information-only weight management program.
Relaxing the assumption that relations are always in First-Normal-Form (1NF) necessitates a reexamination of the fundamentals of relational database theory. In this paper we take a first step towards unifying the various theories of ¬1NF databases. We start by determining an appropriate model to couch our formalisms in. We then define an extended relational calculus as the theoretical basis for our ¬1NF database query language. We define a minimal extended relational algebra and prove its equivalence to the ¬1NF relational calculus. We define a class of ¬1NF relations with certain “good” properties and extend our algebra operators to work within this domain. We prove certain desirable equivalences that hold only if we restrict our language to this domain.
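To make the ¬1NF idea concrete, here is a minimal sketch (not code from the paper) of a nested relation and the `nest`/`unnest` operators commonly found in ¬1NF algebras; the attribute names and the exact operator signatures are assumptions for illustration:

```python
# A non-1NF ("nested") relation: rows whose attributes may themselves
# hold sets of inner tuples, here modeled as lists of dicts.

def unnest(relation, attr):
    """Flatten a set-valued attribute: one output row per inner tuple."""
    out = []
    for row in relation:
        for inner in row[attr]:
            flat = {k: v for k, v in row.items() if k != attr}
            flat.update(inner)
            out.append(flat)
    return out

def nest(relation, group_keys, nested_attr):
    """Inverse of unnest: group rows sharing group_keys into a set-valued attribute."""
    groups = {}
    for row in relation:
        key = tuple(row[k] for k in group_keys)
        inner = {k: v for k, v in row.items() if k not in group_keys}
        groups.setdefault(key, []).append(inner)
    return [dict(zip(group_keys, key), **{nested_attr: inner})
            for key, inner in groups.items()]

# A ¬1NF relation: each department row holds a set of employee tuples.
depts = [
    {"dept": "sales", "emps": [{"name": "ann"}, {"name": "bob"}]},
    {"dept": "eng",   "emps": [{"name": "eve"}]},
]

flat = unnest(depts, "emps")            # 1NF view: three (dept, name) rows
renested = nest(flat, ["dept"], "emps")  # recovers the nested form
```

One of the "good" properties such theories care about is exactly the round-trip shown here: for suitably restricted relations, `nest` after `unnest` recovers the original nested relation.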
Computers looking through a camera at people is a potentially powerful technique to facilitate human-computer interaction. The computer can interpret the user's movements, gestures, and glances. Fundamental visual algorithms include tracking, shape recognition, and motion analysis. For interactive graphics applications, these algorithms need to be robust, fast, and run on inexpensive hardware. Fortunately, the interactive applications also make the vision problems easier: they constrain the possible visual interpretations and provide helpful visual feedback to the user. Thus, some fast and simple vision algorithms can fit well with interactive graphics applications. We describe several vision algorithms for interactive graphics, and present various vision-controlled graphics applications which we have built which use them: vision-based computer games, a hand signal recognition system, and a television set controlled by hand gestures. Some of these applications can employ a special artificial retina chip for image detection or pre-processing.
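A hedged sketch of the kind of fast, simple vision routine the abstract has in mind (not the authors' code): frame differencing for motion detection, cheap enough for interactive graphics on modest hardware. The threshold value and synthetic frames below are assumptions for illustration:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=30):
    """Return a boolean mask of pixels that changed by more than threshold."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Two synthetic 8-bit grayscale frames: a bright region (a "hand") appears.
prev_frame = np.zeros((8, 8), dtype=np.uint8)
frame = prev_frame.copy()
frame[2:5, 2:5] = 200            # 3x3 block of changed pixels

mask = motion_mask(prev_frame, frame)
print(mask.sum())                # 9 changed pixels
```

Real applications would add the constraints the abstract mentions (e.g., only looking for motion in a known screen region), which is precisely what lets such simple algorithms work interactively.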
Abstract: When multiple threads or processes run on a multicore CPU they compete for shared resources, such as caches and memory controllers, and can suffer performance degradation as high as 200%. We design and evaluate a new machine learning model that estimates this degradation online, on previously unseen workloads, and without perturbing the execution. Our motivation is to help data center and HPC cluster operators effectively use workload consolidation. Data center consolidation is about placing many applications on the same server to maximize hardware utilization. In HPC clusters, processes of the same distributed application run on the same machine. Consolidation improves hardware utilization, but may sacrifice performance as processes compete for resources. Our model helps determine when consolidation is overly harmful to performance. Our work is the first to apply machine learning to this problem domain, and we report on our experience reaping the advantages of machine learning while navigating around its limitations. We demonstrate how the model can be used to improve performance fidelity and save energy for HPC workloads.
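The general shape of this approach can be sketched as follows; the features (cache miss rate, memory bandwidth utilization), the toy training data, and the linear model are all assumptions for illustration and are not the paper's actual model:

```python
import numpy as np

# Training data: one row per observed workload,
# columns = [cache miss rate, memory bandwidth utilization].
X = np.array([[0.05, 0.2],
              [0.20, 0.5],
              [0.40, 0.8],
              [0.10, 0.3]])
# Observed % slowdown of each workload when co-located with others.
y = np.array([5.0, 40.0, 120.0, 15.0])

# Fit ordinary least squares with an intercept term.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_degradation(miss_rate, mem_bw):
    """Estimate % slowdown for a previously unseen workload from its counters."""
    return float(np.dot([miss_rate, mem_bw, 1.0], coef))

# An operator could consolidate only when the predicted slowdown is acceptable.
estimate = predict_degradation(0.30, 0.65)
```

The key property the abstract emphasizes is that the inputs (hardware counter readings) can be collected online without perturbing the running workload, so the model can inform consolidation decisions as they are made.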
Despite the fact that computer memory costs have decreased dramatically over the past few years, data storage still remains, and will probably always remain, an important cost factor for many large scale database applications. Compressing data in a database system is attractive for two reasons: data storage reduction and performance improvement. Storage reduction is a direct and obvious benefit, while performance improves because smaller amounts of physical data need to be moved for any particular operation on the database.
We address several aspects of reversible data compression and compression techniques:
general concepts of data compression;
a number of compression techniques;
a comparison of the effects of compression on common data types;
advantages and disadvantages of compressing data; and
future research needs.
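The storage-reduction and reversibility points above can be illustrated in a few lines; this uses the general-purpose DEFLATE compressor from Python's standard library as a stand-in for the database-specific techniques the survey covers:

```python
import zlib

# Repetitive, structured data of the kind databases store compresses well.
record = b"customer_id=00042;region=NORTH;status=ACTIVE;" * 100
packed = zlib.compress(record)

# Reversible (lossless) compression: decompression restores the exact bytes.
assert zlib.decompress(packed) == record

print(len(record), len(packed))  # compressed form is much smaller
```

The performance argument follows directly: an operation that must scan these records moves `len(packed)` bytes of physical data instead of `len(record)`, at the cost of CPU time to decompress.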