The Rehabilitation Engineering Research Center on Technology Evaluation and Transfer is exploring how the users of assistive technology devices define the ideal device. This work is called the Consumer Ideal Product program. The results show which device characteristics are most and least important, indicating where to place priority on product features and functions from the consumer's perspective. The "voice of the customer" can be used (1) to define the ideal characteristics of a product, (2) to make trade-offs in product design and function improvements based on their relative importance to the consumer, (3) to compare the characteristics of existing products against those of the ideal product, or (4) to generate a product checklist for consumers to use when making a purchase decision. This paper presents the results of consumers defining the ideal battery charger. Four focus groups generated the survey's content; 100 experienced users then rated 159 characteristics organized under 11 general evaluation criteria. The consumers placed the highest importance on characteristics under the general evaluation criteria of product reliability, effectiveness, and physical security/safety. The findings should help manufacturers and vendors improve their products and services and help professionals and consumers make informed choices.
This study is a secondary analysis of data collected from end users of Augmentative and Alternative Communication (AAC) devices as part of a project of the Rehabilitation Engineering Research Center on Technology Transfer (T2RERC). The original data, obtained from a Web-based focus group, were used to identify unmet consumer needs in existing AAC devices. The purpose of the secondary analysis was to give context to the original study through phenomenological interpretation of the narratives, thereby gaining an understanding of the common meanings and shared experiences and practices of people who use AAC technology. Underlying this study is the interpretive approach of Heideggerian hermeneutics; through reflective thinking, understanding of the human situation of AAC users in everyday life is uncovered or extended. Six themes and one constitutive pattern emerged to explain the participants' experiences with AAC devices: (a) maintaining effective communication, (b) interacting in various situations, (c) the AAC device imposing limitations, (d) wading through prepackaged technology, (e) the AAC device giving more than a voice, and (f) accepting the AAC device. The constitutive pattern was that communication technology enables humanness. This information will make rehabilitation nurses aware of the value of the AAC device for its users and the limitations that the technology may impose on them, as well as the need for others to accept the device. Nurses who gain this understanding may facilitate the integration of AAC systems and the development of patient/nurse communication partnerships.
Background: Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact—that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs.

Methods: This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor.

Results: The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach.
Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes.

Conclusions: High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.
There is no doubt that having evaluation policies makes the evaluator's job easier, by providing more transparency to the evaluation process and more security to those involved. However, this can only happen when policies are presented to potential stakeholders in clear language, as they are disseminated and utilized for guiding practice. This suggests that building an evaluation culture is necessary in order to effectively implement an evaluation policy, so it can be fully utilized. This paper briefly introduces and presents what the authors understand by the concepts of "evaluation culture" and "evaluation policy". It then discusses the importance of evaluation policies for the practice of both evaluation and meta-evaluation, and points out the possible consequences of the absence of such policies. Recommendations are derived from these considerations in order to inspire procedures for building and implementing evaluation policies. Finally, reflections are presented, based on the Brazilian experience, about the "culture-policy-practice" interrelations in evaluation.
Objectives: Uptake of new knowledge by diverse and diffuse stakeholders of health-care technology innovations has been a persistent challenge, as has been measurement of this uptake. This article describes the development of the Level of Knowledge Use Survey instrument, a web-based measure of self-reported knowledge use.

Methods: The Level of Knowledge Use Survey instrument was developed in the context of assessing the effectiveness of knowledge communication strategies in rehabilitation technology. It was validated on samples representing five stakeholder types: researchers, manufacturers, clinician–practitioners, knowledge brokers, and consumers. Its structure is broadly based on Rogers' stages of innovation adoption. Its item generation was initially guided by Hall et al.'s Levels of Use framework. Item selection was based on content validity indices computed from expert ratings (n1 = 4; n2 = 3). Five representative stakeholders established the usability of the web version. The version included 47 items (content validity index for individual items >0.78; content validity index for a scale or set of items >0.90) in self-reporting format. Psychometrics were then established for the version.

Results: Analyses of data from small (n = 69) and large (n = 215) samples using the Level of Knowledge Use Survey instrument suggested a conceptual model of four levels of knowledge use—Non-awareness, Awareness, Interest, and Use. The levels covered eight dimensions and six user action categories. The sequential nature of the levels was inconclusive due to low cell frequencies. The Level of Knowledge Use Survey instrument showed adequate content validity (≈ 0.88; n = 3) and excellent test–retest reliability (1.0; n = 69). It also demonstrated good construct validity (n = 215) for differentiating among new knowledge outputs (p < 0.001) and among stakeholder types (0.001 < p ≤ 0.013).
It showed strong responsiveness to change between baseline and follow-up testing (0.001 < p ≤ 0.002; n = 215).

Conclusion: The Level of Knowledge Use Survey instrument is valid and reliable for measuring the uptake of innovations across diffuse stakeholders of rehabilitation technologies and therefore also for tracking changes in knowledge use.
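The item-selection step above screens candidate survey items against content validity index (CVI) cutoffs. As a minimal illustration of how such a screen works, the sketch below computes an item-level CVI (proportion of experts rating an item relevant, conventionally a 3 or 4 on a 4-point scale) and a scale-level average; the rating values and item set here are hypothetical, not the study's actual data, and the >0.78 cutoff is the one quoted in the abstract.

```python
# Hypothetical sketch of CVI-based item screening (not the study's actual data).
# Each expert rates an item's relevance on a 4-point scale; ratings of 3 or 4
# count as "relevant" under the common CVI convention.

def item_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_ave(items):
    """S-CVI/Ave: mean of the item-level CVIs across all retained items."""
    cvis = [item_cvi(r) for r in items]
    return sum(cvis) / len(cvis)

# Example: four experts rate three candidate items (illustrative values).
candidates = [
    [4, 4, 3, 4],   # I-CVI = 1.00 -> retained
    [4, 3, 3, 2],   # I-CVI = 0.75 -> below the 0.78 cutoff, dropped
    [3, 4, 4, 4],   # I-CVI = 1.00 -> retained
]
retained = [r for r in candidates if item_cvi(r) > 0.78]
print(len(retained), scale_cvi_ave(retained))  # 2 items survive; S-CVI/Ave = 1.0
```

Repeating this screen over a large candidate pool with the abstract's thresholds (I-CVI > 0.78 per item, scale-level index > 0.90) is consistent with how the 47-item version could have been arrived at.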