Content, usability, and aesthetics are core constructs in users' perception and evaluation of websites, but little is known about their interplay across different phases of use. In a first study, web users (N=330) rated content as most relevant, followed by usability and aesthetics. In Study 2, tests with four websites were performed (N=300), and the resulting data were modeled with path analyses. In this model, all three constructs affected first and overall impressions, with aesthetics exerting the largest influence on first impressions. However, only content contributed significantly to the intention to revisit or recommend a website. Using data from a third study (N=512, 42 websites), we were able to replicate this model. As before, perceived usability affected first and overall impressions, while content perception was important across all analyzed phases of website use. In addition, aesthetics had a small but significant impact on participants' intentions to revisit or recommend.
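The path-analytic logic described above can be illustrated with a minimal sketch. All data and coefficients below are hypothetical and for illustration only, not the study's actual model or estimates: three standardized predictor ratings feed a first-impression rating, which together with content predicts intention to revisit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # sample size as in Study 2

# Hypothetical standardized ratings for the three constructs.
content = rng.normal(size=n)
usability = rng.normal(size=n)
aesthetics = rng.normal(size=n)

# First impression: aesthetics weighted most heavily (illustrative weights).
first_impression = (0.5 * aesthetics + 0.3 * usability + 0.2 * content
                    + rng.normal(scale=0.5, size=n))

# Intention to revisit: driven mainly by content (illustrative weights).
intention = (0.6 * content + 0.1 * first_impression
             + rng.normal(scale=0.5, size=n))

def path_coefficients(X, y):
    """Ordinary-least-squares estimates of the path coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]  # drop the intercept

b_first = path_coefficients(
    np.column_stack([aesthetics, usability, content]), first_impression)
b_intent = path_coefficients(
    np.column_stack([content, first_impression]), intention)
print(b_first)   # aesthetics carries the largest weight by construction
print(b_intent)  # content dominates by construction
```

Full path analysis additionally fits all structural equations simultaneously and reports model fit; the per-equation OLS above only conveys the idea of direct and mediated paths.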
Background While single indicators measure a specific aspect of quality (e.g. timely support during labour), users of these indicators such as patients, providers and policy-makers are typically interested in some broader construct (e.g. quality of maternity care) whose measurement requires a set of indicators. However, guidance on desirable properties of indicator sets is lacking. Based on the premise that a set of valid indicators does not guarantee a valid set of indicators, the aim of this review is twofold: First, we introduce content validity as a desirable property of indicator sets and review the extent to which studies in the peer-reviewed health care quality literature address this criterion. Second, to obtain a complete inventory of criteria, we examine what additional criteria for quality indicator sets have been used so far. Methods We searched the databases Web of Science, Medline, Cinahl and PsycInfo from inception to May 2021 and the reference lists of included studies. English- or German-language, peer-reviewed studies concerned with desirable characteristics of quality indicator sets were included. Applying qualitative content analysis, two authors independently coded the articles using a structured coding scheme and discussed conflicting codes until consensus was reached. Results Of 366 studies screened, 62 were included in the review. 85% (53/62) of studies addressed at least one of the component criteria of content validity (content coverage, proportional representation, contamination) and 15% (9/62) addressed all component criteria. Studies used various content domains to structure the targeted construct (e.g., quality dimensions, elements of the care pathway, policy priorities), providing a framework to assess content validity.
The review revealed four additional substantive criteria for indicator sets: cost of measurement (21% [13/62] of the included studies), prioritization of “essential” indicators (21% [13/62]), avoidance of redundancy (13% [8/62]) and size of the set (15% [9/62]). Additionally, four procedural criteria were identified: stakeholder involvement (69% [43/62]), using a conceptual framework (44% [27/62]), defining the purpose of measurement (26% [16/62]) and transparency of the development process (8% [5/62]). Conclusion The concept of content validity and its component criteria help to assess whether conclusions based on a set of indicators are valid conclusions about the targeted construct. To develop a valid indicator set, careful definition of the targeted construct, including its (sub-)domains, is paramount. Developers of quality indicators should specify the purpose of measurement and, in light of that purpose, consider trade-offs with other criteria for indicator sets whose application may reduce content validity (e.g. costs of measurement).
Taking up new approaches and calls for experimental test validation, in the present study we propose and validate a process model of sustained attention tests. Four sub-components were postulated: the perception of an item, a simple mental operation to solve the item, a motor reaction, and the shift to the next item. In two studies, several cognitive tasks and modified versions of the d2-R test of sustained attention were applied in order to determine performance in the proposed sub-components. Their contribution to the prediction of performance in sustained attention tests and tests of higher cognitive abilities was assessed. The sub-components of the process model explained a large amount of variance in sustained attention tests, namely 55–74%. More specifically, perceptual and mental operation speed were the strongest predictors, while there was a trend towards a small influence of motor speed on test performance. The measures of item shifting showed low reliabilities and did not predict test scores. In terms of discriminant validity, results of Study 1 indicated that the postulated sub-components were insufficient to explain a large amount of variance in working memory span tasks; in Study 2, the same was demonstrated for reasoning tasks. Altogether, the present study is the first to disentangle sub-components in sustained attention tests and to determine their role for test performance.
Organisations are subject to ongoing changes. These changes offer opportunities, but they can also increase uncertainty about the future of jobs. Although there is a large body of literature on job insecurity, most studies focus on the worry of losing one's job, while another important stressor, namely the worry of losing valued job features, has received less attention. The key contribution of this validation study is the development and psychometric analysis of the Qualitative Job Insecurity Measure (QJIM), which aims to address the shortcomings of existing qualitative job insecurity scales. It is a quick yet comprehensive measure of a highly prevalent but understudied phenomenon that directly influences organisational and employee well-being. The psychometric results confirm the scale's one-dimensional structure via EFA and CFA, show good reliability estimates, and demonstrate the scale's predictive validity regarding job satisfaction and disinclination to work. From a research perspective, the QJIM can be used to gain insights into how and when changes negatively affect employees and to identify preventive or corrective measures. From an organisational perspective, the QJIM is useful for recognising job features that employees value, carefully planning changes, and actively increasing employee well-being.
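Reliability estimates of the kind reported for scales like the QJIM are commonly computed as Cronbach's alpha. A minimal sketch with hypothetical item data (not the QJIM's actual items or scores): items that share a common underlying factor yield a high alpha.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-item scale: each item loads on one common factor plus noise,
# mimicking the one-dimensional structure confirmed via EFA/CFA.
rng = np.random.default_rng(1)
factor = rng.normal(size=(200, 1))
scores = factor + rng.normal(scale=0.5, size=(200, 5))
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # high internal consistency for strongly correlated items
```

With the illustrative loadings above, alpha lands well above the conventional 0.7 threshold; uncorrelated items would drive it toward zero.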