Objective To assess the accuracy of portion-size estimates and participant preferences using various presentations of digital images.
Design Two observational feeding studies were conducted. In both, each participant selected and consumed foods for breakfast and lunch, buffet style, serving themselves portions of nine foods representing five forms (eg, amorphous, pieces). Serving containers were weighed unobtrusively before and after selection, as was plate waste. The next day, participants used a computer software program to select photographs representing portion sizes of foods consumed the previous day. Preference information was also collected. In Study 1 (n=29), participants were presented with four different types of images (aerial photographs, angled photographs, images of mounds, and household measures) and two types of screen presentations (simultaneous images vs an empty plate that filled with images of food portions when clicked). In Study 2 (n=20), images were presented in two ways that varied by size (large vs small) and number (4 vs 8).
Subjects/setting Convenience sample of volunteers of varying backgrounds in an office setting.
Statistical analyses performed Repeated-measures analysis of variance of absolute differences between actual and reported portion sizes by presentation method.
Results Accuracy results were largely not statistically significant, indicating that no one image type was most accurate. Accuracy results indicated that the use of eight vs four images was more accurate. Strong participant preferences supported presenting simultaneous vs sequential images.
Conclusions These findings support the use of aerial photographs in the automated self-administered 24-hour recall. For some food forms, images of mounds or household measures are as accurate as images of food and, therefore, are a cost-effective alternative to photographs of foods.
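The analysis described above compares, within each participant, the absolute error |actual − reported| across presentation conditions. A minimal sketch of a one-way repeated-measures ANOVA for that design follows; the data and the error values are entirely hypothetical, and the four condition labels simply mirror the image types named in the abstract.

```python
# Hypothetical illustration: one-way repeated-measures ANOVA comparing
# absolute portion-size errors (grams) across four image types.
# The numbers below are invented for demonstration, not study data.
data = {  # image type -> one absolute error per participant (same order)
    "aerial":    [12.0, 8.5, 15.0, 10.0, 9.5],
    "angled":    [14.0, 9.0, 16.5, 11.0, 10.0],
    "mounds":    [13.5, 10.5, 14.0, 12.5, 9.0],
    "household": [15.0, 11.0, 17.0, 13.0, 12.0],
}

def rm_anova(groups):
    """One-way repeated-measures ANOVA: returns (F, df_factor, df_error)."""
    levels = list(groups.values())
    k, n = len(levels), len(levels[0])  # conditions, participants
    grand = sum(sum(g) for g in levels) / (k * n)
    # Between-conditions sum of squares (the within-subject factor)
    ss_factor = n * sum((sum(g) / n - grand) ** 2 for g in levels)
    # Between-subjects sum of squares (each participant's mean)
    subj_means = [sum(levels[j][i] for j in range(k)) / k for i in range(n)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for g in levels for x in g)
    ss_error = ss_total - ss_factor - ss_subj  # residual after removing subjects
    df_factor, df_error = k - 1, (k - 1) * (n - 1)
    f_stat = (ss_factor / df_factor) / (ss_error / df_error)
    return f_stat, df_factor, df_error

F, df1, df2 = rm_anova(data)
print(f"F({df1}, {df2}) = {F:.2f}")
```

Removing the subject sum of squares from the error term is what distinguishes this from an ordinary one-way ANOVA: each participant serves as their own control, so between-participant variability does not inflate the error.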
Background There are currently no standardized measures of tobacco use and secondhand smoke exposure in patients diagnosed with cancer, and this gap hinders the conduct of studies examining the impact of tobacco on cancer treatment outcomes. Our objective was to evaluate and refine questionnaire items proposed by an expert task force to assess tobacco use.
Methods Trained interviewers conducted cognitive testing with cancer patients age 21 or older with a history of tobacco use and a cancer diagnosis of any stage and organ site, recruited at the National Institutes of Health Clinical Center (Bethesda, MD). Iterative rounds of testing and item modification were conducted to identify and resolve cognitive issues (comprehension, memory retrieval, decision/judgment, response mapping) and instrument navigation issues until no items warranted further significant modification.
Results Thirty participants (6 current cigarette smokers, 1 current cigar smoker, 23 former cigarette smokers) were enrolled from September 2014 to February 2015. Most items functioned well. However, qualitative testing identified wording ambiguities related to cancer diagnosis and treatment trajectory, such as “treatment” and “surgery”; difficulties with lifetime recall; errors in estimating quantities; and difficulties with instrument navigation. Revisions to item wording, format, order, response options, and instructions resulted in a questionnaire that demonstrated navigational ease as well as good question comprehension and response accuracy.
Conclusions The NCI-AACR Cancer Patient Tobacco Use Questionnaire (C-TUQ) can be utilized as a standardized item set to accelerate investigation of tobacco use in the cancer setting.
Background The National Institutes of Health (NIH), US Department of Health and Human Services (HHS), realized the need to better understand its Web users in order to help assure that websites are user friendly and well designed for effective information dissemination. A trans-NIH group proposed a project to implement an online customer survey, known as the American Customer Satisfaction Index (ACSI) survey, on a large number of NIH websites—the first “enterprise-wide” ACSI application, and probably the largest enterprise Web evaluation of any kind, in the US government. The proposal was funded by the NIH Evaluation Set-Aside Program for two years at a cost of US $1.5 million (US $1.275 million for survey licenses for 60 websites at US $18,000 per website; US $225,000 for a project evaluation contractor).
Objective The overall project objectives were to assess the value added to the participating NIH websites of using the ACSI online survey, identify any NIH-wide benefits (and limitations) of the ACSI, ascertain any new understanding about the NIH Web presence based on ACSI survey results, and evaluate the effectiveness of a trans-NIH approach to Web evaluation. This was not an experimental study and was not intended to evaluate the ACSI survey methodology, per se, or the impacts of its use on customer satisfaction with NIH websites.
Methods The evaluation methodology included baseline pre-project website profiles; before and after email surveys of participating website teams; interviews with a representative cross-section of website staff; observations of debriefing meetings with website teams; observations at quarterly trans-NIH Web staff meetings and biweekly trans-NIH leadership team meetings; and review and analysis of secondary data.
Results Of the original 60 NIH websites signed up, 55 implemented the ACSI survey, 42 generated sufficient data for formal reporting of survey results for their sites, and 51 completed the final project survey.
A broad cross-section of websites participated, and a majority reported significant benefits and new knowledge gained from the ACSI survey results. NIH websites as a group scored consistently higher on overall customer satisfaction relative to US government-wide and private sector benchmarks.
Conclusions Overall, the enterprise-wide experiment was successful. On the level of individual websites, the project confirmed the value of online customer surveys as a Web evaluation method. The evaluation results indicated that successful use of the ACSI, whether site-by-site or enterprise-wide, depends in large part on strong staff and management support and adequate funding and time for the use of such evaluative methods. In the age of Web-based e-government, a broad commitment to Web evaluation may well be needed. This commitment would help assure that the potential of the Web and other information technologies to improve customer and citizen satisfaction is fully realized.