2016
DOI: 10.1177/1094428116639132

Don’t Forget the Items

Abstract: Researchers are generally advised to provide rigorous item-level construct validity evidence when they develop and introduce a new scale. However, these precise, item-level construct validation efforts are rarely reexamined as the scale is put into use by a wider audience. In the present study, we demonstrate how (a) item-level meta-analysis and (b) substantive validity analysis can be used to comprehensively evaluate construct validity evidence for the items comprising scales. This methodology enables a reexa…
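One of the two approaches named in the abstract, substantive validity analysis, follows Anderson and Gerbing's (1991) judge-sorting procedure: each judge assigns an item to the construct it appears to measure, and two indices summarize the result. The sketch below is only an illustration under that assumption; the function name and the example data are hypothetical and not taken from the article.

```python
from collections import Counter

def substantive_validity(assignments, intended_construct):
    """Compute Anderson and Gerbing's (1991) substantive validity indices
    for a single item, given each judge's construct assignment.

    assignments: list of construct labels, one per judge
    intended_construct: the construct the item was written to measure
    """
    n = len(assignments)                     # total number of judges
    counts = Counter(assignments)
    n_c = counts.get(intended_construct, 0)  # assignments to the intended construct
    # highest number of assignments to any single other construct
    n_o = max((v for k, v in counts.items() if k != intended_construct), default=0)
    p_sa = n_c / n                           # proportion of substantive agreement
    c_sv = (n_c - n_o) / n                   # substantive validity coefficient
    return p_sa, c_sv

# Hypothetical example: 20 judges sort one engagement item
judgments = ["engagement"] * 15 + ["satisfaction"] * 4 + ["commitment"] * 1
print(substantive_validity(judgments, "engagement"))  # (0.75, 0.55)
```

Items with c_sv near 1 are assigned almost exclusively to their intended construct; values near 0 or below suggest the item reads as belonging to another construct.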

Cited by 35 publications (12 citation statements, published between 2018 and 2024). References 74 publications.

Citation statements, ordered by relevance:
“…That said, item-level statistics can remain useful in informing and improving future measure development, whether that means refining existing measures or creating new ones. For instance, we rarely know the extent to which items and scales are influenced by sampling error variance or particular features of the sample or setting, but an item-level meta-analysis could help us understand that, as past research has demonstrated (e.g., Carpenter, Son, Harris, Alexander, & Horner, 2016). In addition to taking a statistical approach to items within a measure, as a field, we can also spend more time conceptually examining the redundancy and uniqueness of content across measures that claim to represent the same construct.…”
Section: Introduction (classification: mentioning; confidence: 99%)
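To make the sampling-error point concrete, a bare-bones (Hunter-Schmidt style) meta-analysis can pool an item-level statistic, such as an item-criterion correlation, across samples and estimate how much of the observed between-sample variance is attributable to sampling error alone. This is a hedged illustration rather than the article's own procedure; the function name and input values below are invented for the example.

```python
def bare_bones_meta(rs, ns):
    """Sample-size-weighted bare-bones meta-analysis of one item-level
    correlation observed in several samples.

    rs: observed correlations for the same item across samples
    ns: corresponding sample sizes
    """
    total_n = sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n              # weighted mean r
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    n_bar = total_n / len(ns)
    var_e = (1 - r_bar ** 2) ** 2 / (n_bar - 1)                       # expected sampling error variance
    var_rho = max(var_obs - var_e, 0.0)                               # residual (true) variance estimate
    pct_sampling_error = min(var_e / var_obs, 1.0) if var_obs > 0 else 1.0
    return r_bar, var_rho, pct_sampling_error

# Hypothetical item-criterion correlations from four samples
print(bare_bones_meta([0.32, 0.28, 0.41, 0.25], [150, 220, 90, 310]))
```

If the share of variance attributable to sampling error approaches 100%, the item appears to behave consistently across samples; a large residual variance instead points to moderators or sample-specific item functioning.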
“…Moreover, most two-factor models suggest multidimensionality with some potential PERMA dimension groupings (e.g., P + E). To investigate whether the purported five-factor model holds, other researchers can consider conducting item-level meta-analyses (see Carpenter et al., 2016) or extending Butler and Kern’s (2016) original validation study. Additionally, the aforementioned lower reliabilities highlight opportunities for further scale refinement/development.…”
Section: Discussion (classification: mentioning; confidence: 99%)
“…Notwithstanding this limitation, it is important to highlight that, to the authors' knowledge, this is the first study that uses and examines the Spanish version of the PIQ instrument in Mexico. Carpenter et al. (2016) as well as Wieland et al. (2017) argue that rigorous evaluation at the item level is appropriate for understanding how scales function, particularly when it comes to new surveys. Based on this, it was decided to conduct the analyses at the item level.…”
Section: Limitations and Considerations for Further Research (classification: mentioning; confidence: 99%)