2017 · DOI: 10.1177/1474904117696095
Assessment of computer and information literacy in ICILS 2013: Do different item types measure the same construct?

Abstract: Combinations of different item formats occur frequently in large-scale assessments, and dimensionality analyses often indicate that such tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and information literacy, in order to balance its technological and information-related aspects. The item types differ in the cognitive pro…
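To make the dimensionality question concrete, here is a minimal, hypothetical sketch in Python (plain NumPy; none of the names, coefficients, or item counts come from the paper). It simulates dichotomous responses driven by a general ability plus item-type-specific factors, then inspects the eigenvalues of the inter-item correlation matrix, the kind of signal a task-format effect leaves in the data:

```python
# Illustrative sketch, not the ICILS analysis: simulate responses with a
# general ability plus item-type-specific nuisance factors, then check how
# many eigenvalues of the inter-item correlation matrix stand out.
import numpy as np

rng = np.random.default_rng(0)
n_persons, items_per_type = 2000, 10
item_types = ["information", "simulation", "authoring"]  # hypothetical labels

theta = rng.normal(size=n_persons)            # general ability
responses = []
for _type in item_types:
    nuisance = rng.normal(size=n_persons)     # type-specific factor
    for _ in range(items_per_type):
        b = rng.normal()                      # item difficulty
        logit = 1.2 * theta + 0.8 * nuisance - b
        p = 1.0 / (1.0 + np.exp(-logit))
        responses.append((rng.random(n_persons) < p).astype(float))

X = np.column_stack(responses)                # persons x items
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
print("largest eigenvalues:", np.round(eigvals[:5], 2))
# With type-specific factors present, more than one eigenvalue clearly
# exceeds 1, hinting that a single dimension does not explain the data.
```

In a real analysis one would fit competing unidimensional and multidimensional IRT models and compare their fit; the eigenvalue pattern above only illustrates why task format can surface as extra dimensions.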

Cited by 8 publications (12 citation statements)
References 45 publications (75 reference statements)
“…Instead, those dimensions seem to be defined by the software applications that students used in the assessment; they capture commonality among students' performance that seems to be due to their familiarity with the assessment tools and/or the context (RQ2). These results extend recent research indicating that item types may constitute measurable DL dimensions (Ihme et al. 2017), suggesting that the measurement of DL performance may need to take account of the specific software applications used in the assessment. The existing theoretical frameworks on the dimensionality of DL may need revision, as most have assumed that DL is a generic competence, similar to reading literacy and numeracy.…”
supporting, confidence: 82%
“…How far DL as assessed through virtual environments specifically developed for the purpose of assessment reflects a person's competence when interacting with authentic software available in the commercial market remains unclear. Yet recent evidence, such as the moderate correlations among DL performance shown with different commercial software tools in a longitudinal study (Lazonder et al. 2020) and the task dependence identified among ICILS items (Ihme et al. 2017), challenges the assumption of task and tool independence of DL.…”
Section: Task and Technology (In)dependence of DL Assessments
mentioning, confidence: 99%
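As a back-of-the-envelope illustration of the point in that statement (not taken from either cited study), the following NumPy sketch shows why a moderate cross-tool correlation challenges tool independence: if DL were fully tool-independent, the correlation between scores from two tools, corrected for unreliability, should approach 1. All coefficients and reliabilities below are invented for the example:

```python
# Hypothetical sketch: moderate cross-tool correlation despite a shared
# DL competence, because each tool taps a sizeable tool-specific component.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
dl = rng.normal(size=n)                           # shared DL competence
skill_a = 0.6 * dl + 0.8 * rng.normal(size=n)     # tool-A-specific component
skill_b = 0.6 * dl + 0.8 * rng.normal(size=n)     # tool-B-specific component
score_a = skill_a + 0.4 * rng.normal(size=n)      # add measurement error
score_b = skill_b + 0.4 * rng.normal(size=n)

r_obs = np.corrcoef(score_a, score_b)[0, 1]
rel_a = rel_b = 0.85                              # assumed reliabilities
r_true = r_obs / np.sqrt(rel_a * rel_b)           # disattenuated correlation
print(f"observed r = {r_obs:.2f}, disattenuated r = {r_true:.2f}")
# r stays well below 1 even after correcting for unreliability, the pattern
# such studies read as evidence against tool independence.
```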