2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO)
DOI: 10.1109/mipro.2014.6859649
Online vs. Paper-based testing: A comparison of test results

Cited by 13 publications (9 citation statements)
References 2 publications
“…The results in general, therefore, appear to provide evidence to support the interchangeability of paper-based and computer-based test modes using these particular tests. This echoes the findings in the Croatian study by Candrlic et al. (2014), mentioned earlier (see Introduction), although this was in a completely different field.…”
Section: Results (supporting)
confidence: 90%
“…A comparable study in the field of Informatics at the University of Rijeka in Croatia found that there was no significant difference in median values of the results achieved during online tests and traditional paper-based tests (Candrlic, Asenbrener-Katic & Dlab, 2014). In other words, the evidence in general is still somewhat mixed, although the latter research seems most similar in terms of age and literacy (traditional and computer) levels of participants to the situation in the present case.…”
Section: Introduction (contrasting)
confidence: 41%
“…Interest in the impact of CBTs picked up in the 1990s as testing companies (e.g., the Educational Testing Service and the College Board) transitioned services to computers and digital Learning Management Systems (e.g., Blackboard Learn and Desire2Learn) emerged as common course tools (Bugbee 1996). These shifts in testing practices led to several studies into the impact of computerizing high-stakes, proctored assessments in both K-12 (Kingston 2008; Wang et al. 2007a, b) and university settings (Prisacari and Danielson 2017; Čandrlić et al. 2014; Wellman and Marcinkiewicz 2004; Anakwe 2008; Clariana and Wallace 2002). Research across these settings generally found that performance on proctored computerized versions of high-stakes assessments was indistinguishable from performance on traditional PPTs.…”
Section: Performance (mentioning)
confidence: 99%
“…On average, a 29.7% response rate successfully returned eight hundred forty-six (846) sets of questionnaires in both survey tools. Hence, a study by Candrlic et al. (2014) proved that only a slight difference occurred in both applications, indicating that the approaches were reliable to be employed for data collection in this study. However, after data screening procedures, only seven hundred ninety-eight (798) datasets were utilized for further statistical analysis.…”
Section: Quantitative Research Technique for Data Collection (mentioning)
confidence: 60%