“…These errors could have resulted in items that did not measure the same health literacy construct as the English HLQ. Threats to construct equivalence can lead to interpretations of data that are not valid and, subsequently, to potentially invalid and flawed decision making [3,9,10,20,26,78]. Results from this study reinforce the need for a multi-step translation and central review process [3,7,20,26,32,50].…”
Background: Cross-cultural research with patient-reported outcome measures (PROMs) assumes that the PROM in the target language will measure the same construct in the same way as the PROM in the source language. Yet translation methods are rarely used to qualitatively maximise construct equivalence or to describe the intents of each item to support common understanding within translation teams. This study aimed to systematically investigate the utility of the Translation Integrity Procedure (TIP), in particular the use of item intent descriptions, to maximise construct equivalence during the translation process, and to demonstrate how documented data from the TIP contribute evidence to a validity argument for construct equivalence between translated and source language PROMs. Methods: Analysis of secondary data was conducted on routinely collected data in TIP Management Grids of translations (n = 9) of the Health Literacy Questionnaire (HLQ) that took place
“…It is plausible that the discrepancy between these incipient positions and the status quo represented in the DSM‐5 is indicative of an anachronistic paradigm for conceptualizing PTSD responses in modern client samples. Therefore, it may be prudent for practitioners to account for the influence of construct irrelevance and construct underrepresentation within their unique client base (Lenz & Wester, 2017; Spurgeon, 2017). After reviewing these considerations, practitioners may elect to evaluate and conceptualize scores on instruments such as the PTSD Checklist for DSM‐5 according to the factor structure that tends to support the most helpful interventions.…”
Section: Discussion
“…Similarly, practitioners are often compelled to use assessment protocols with normative samples that do not reflect the demographics of clients (Hays & Wood, 2017). As a result, assessment items may be interpreted differently across participant samples, thus creating a threat to validity wherein the PTSD construct inadequately represents the lived experience of certain individuals, yet scores have stark implications for their access to services, supports, and opportunities (Lenz & Wester, 2017; Spurgeon, 2017).…”
This study reported the findings of a meta‐analysis exploring differences between clinician‐administered (C‐A) and self‐reported (S‐R) outcomes of counseling and therapy interventions for posttraumatic stress disorder. A sample of 17 randomized trials resulted in 46 effect sizes (23 C‐A, 23 S‐R) representing the data of 1,405 participants. No statistically significant differences were detected between C‐A and S‐R outcome estimates alone or when considering treatment setting; however, differential estimates emerged for modality across age groups.
“…While top-down approaches are promising, distinctions between curiosity and interest are considerably constrained by the theoretical perspective taken and assessment methods used. Top-down approaches can suffer from construct underrepresentation of the two broad concepts (Downing, 2002; Messick, 1995; Spurgeon, 2017). For example, McGillivray et al. (2015) assessed curiosity and interest using single items simply asking how curious/interested participants were about an answer.…”
Section: How To Empirically Test Theoretical Distinctions?
Researchers studying curiosity and interest note a lack of consensus in whether and how these important motivations for learning are distinct. Empirical attempts to distinguish them are impeded by this lack of conceptual clarity. Following a recent proposal that curiosity and interest are folk concepts, we sought to determine a non-expert consensus view on their distinction using machine learning methods. In Study 1, we demonstrate that there is a consensus in how they are distinguished, by training a Naïve Bayes classification algorithm to distinguish between free-text definitions of curiosity and interest (n = 396 definitions) and using cross-validation to test the classifier on two sets of data (main n = 196; additional n = 218). In Study 2, we demonstrate that the non-expert consensus is shared by experts and can plausibly underscore future empirical work, as the classifier accurately distinguished definitions provided by experts who study curiosity and interest (n = 92). Our results suggest a shared consensus on the distinction between curiosity and interest, providing a basis for much-needed conceptual clarity facilitating future empirical work. This consensus distinguishes curiosity as more active information seeking directed towards specific and previously unknown information. In contrast, interest is more pleasurable, in-depth, less momentary information seeking towards information in domains where people already have knowledge. However, we note that there are similarities between the concepts, as they are both motivating, involve feelings of wanting, and relate to knowledge acquisition.