In this study, we examine the degree of construct comparability, and possible sources of incomparability, of the English and French versions of the Programme for International Student Assessment (PISA) 2003 problem-solving measure administered in Canada. Several approaches were used to examine construct comparability at the test level (examination of test data structure, reliability comparisons, and test characteristic curves) and at the item level (differential item functioning, item parameter correlations, and linguistic comparisons). Results from the test-level analyses indicate that the two language versions of PISA are highly similar, as shown by the similarity of internal consistency coefficients, test data structure (same number of factors and item factor loadings), and test characteristic curves for the two language versions of the tests. However, results of the item-level analyses reveal several differences between the two language versions, as shown by the large proportion of items displaying differential item functioning, differences in item parameter correlations (discrimination parameters), and the number of items found to contain linguistic differences.

In Canada, most large-scale assessments are administered in its two official languages: English and French. Performance data from such assessments are used to compare student achievement, to inform curriculum, program development, and evaluation, and to make decisions concerning educational policies. Implicit assumptions made in using results from these assessments are that the measurements are comparable and that they are based on the same construct for the two language groups. The validity of the uses of, and inferences made from, results of these assessments depends critically on their comparability.
The literature, along with employee and workforce surveys, ranks collaborative problem solving (CPS) among the top five most critical skills for success in college and the workforce. This paper reviews the literature on CPS and related terms, including a discussion of their definitions, their importance to success in higher education and the workforce, and considerations relevant to their assessment. The goal is to discuss progress on CPS to date and to help generate future research on CPS in relation to educational and workforce success.
One of the most critical steps in the test development process is defining the construct, that is, the knowledge, skills, or abilities to be assessed. This foundational step provides the basis for initial assumptions about the meaning of test scores and serves as a reference for subsequent validity research. In this paper, we describe the purpose of the redesigned TOEIC Bridge® four-skills assessments and elaborate the theoretical basis of their construct definition. We also describe how an evidence-centered design (ECD) approach was used to develop the redesigned TOEIC Bridge assessments, as well as the first stage of that approach, the domain analysis. The domain analysis begins by elaborating a clearer definition of the context in which language is evaluated by the redesigned TOEIC Bridge assessments: "everyday adult life." Next, we review the research literature and relevant language proficiency standards to highlight the knowledge, skills, and abilities relevant to beginner to low-intermediate English proficiency for everyday adult life. This information is synthesized in the construct definitions for reading, listening, speaking, and writing ability at beginner to low-intermediate levels of general English proficiency in the context of everyday adult life.