2019
DOI: 10.3389/fpsyg.2019.01010
Development of a Computerized Adaptive Testing for Internet Addiction

Abstract: Internet addiction disorder has become one of the most prevalent forms of addiction studied in psychological and behavioral research, and measuring it is growing increasingly important in practice. This study aimed to develop a computerized adaptive test to measure and assess internet addiction (CAT-IA) efficiently. Four standardized scales were used to build the original item bank. A total of 59 polytomously scored items were finally chosen after excluding 42 items that failed the psychometric evaluation. For the final…

Cited by 6 publications (7 citation statements)
References 62 publications
“…This measure has demonstrated good reliability and validity in Chinese adolescents (Li et al., 2013; Zhou et al., 2017). Young’s questionnaire has good construct validity (Kelley and Gruber, 2010) and predictive validity for internet addiction disorder (Zhang et al., 2019). Construct validity for the current data was tested by comparing five alternative models (Laconi et al., 2019).…”
Section: Methods
“…A CAT item bank must be evaluated to confirm the unidimensionality assumption, which states that responses to each item are influenced by a single latent trait of the participants (Embretson & Reise, 2013); an appropriate item response theory model should be selected on the basis of test-level model-fit indices; and local independence should hold, meaning that, conditional on ability, an examinee’s response to one item is unrelated to responses on the other test items (Cohen, 2013). The item response function describes how an examinee’s likelihood of answering a particular item correctly corresponds to their ability, and items whose item response functions differ between subgroups are said to be biased. Different methods exist for detecting items that behave differently, such as the Mantel-Haenszel procedure, the logistic regression procedure, the Multiple Indicators Multiple Causes (MIMIC) model, the likelihood ratio test of item response theory (IRT-LR), Lord’s IRT-based Wald test, and the simultaneous item bias test (SIBTEST) (Aybek & Demirtasli, 2017; Zhang et al., 2019). CAT is an item response theory-based technique used for solving a wide variety of measurement problems; it is useful for building tests, identifying items that may be biased, equating scores from different tests or forms of the same test, and reporting test scores.…”
Section: Analysis of Pre-test Data
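As an illustration of one of the DIF procedures named in the statement above, the following is a minimal sketch of the logistic regression approach for a single dichotomously scored item. The variable names, simulated data, and significance test are illustrative assumptions, not material from the cited studies.

```python
# Hedged sketch of logistic-regression DIF detection for one item.
# Data are simulated; names (responses, group, total_score) are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 500
total_score = rng.normal(size=n)            # matching criterion (e.g., rest score)
group = rng.integers(0, 2, size=n)          # 0 = reference group, 1 = focal group
logit = 0.8 * total_score + 0.6 * group     # uniform DIF built in for illustration
responses = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Baseline model: matching criterion only.
X1 = sm.add_constant(np.column_stack([total_score]))
# Augmented model: add group (uniform DIF) and interaction (non-uniform DIF).
X3 = sm.add_constant(np.column_stack([total_score, group, total_score * group]))
m1 = sm.Logit(responses, X1).fit(disp=0)
m3 = sm.Logit(responses, X3).fit(disp=0)

# Likelihood-ratio test with 2 df: a significant result flags the item for DIF.
lr = 2 * (m3.llf - m1.llf)
p = stats.chi2.sf(lr, df=2)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```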
“…A CAT may stop administering items when the standard error of measurement falls below a pre-specified threshold, or when no items remain in the item pool that provide a minimum level of information (Han, 2018; Oladele et al., 2020; Zhang et al., 2019). To determine whether another item should be administered on the basis of measurement precision, the standard-error procedure recalculates the precision of measurement after the examinee responds to each item.…”
Section: Termination Criterion
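The standard-error stopping rule described above can be sketched roughly as follows. The 2PL information function, item parameters, and threshold of 0.3 are illustrative simplifications (the CAT-IA bank is polytomous), not the authors' actual implementation.

```python
# Hedged sketch of a standard-error termination rule for a CAT.
import math

def item_information_2pl(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def should_stop(theta: float, administered: list[tuple[float, float]],
                se_threshold: float = 0.3) -> bool:
    """Stop once the standard error (1 / sqrt(test information)) drops below the threshold."""
    info = sum(item_information_2pl(theta, a, b) for a, b in administered)
    if info <= 0.0:
        return False                 # no information yet: keep testing
    return 1.0 / math.sqrt(info) < se_threshold

# Example: after five administered items, check whether testing can stop.
administered_items = [(1.2, -0.5), (0.9, 0.0), (1.5, 0.3), (1.1, 0.8), (1.3, -0.2)]
print(should_stop(theta=0.1, administered=administered_items))
```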
“…While reiterating that CAT is not easy, the goal is to ease the task with clean software that requires no code writing while aligning with best practices and international standards. Zhang et al. (2019) developed a CAT to assess internet addiction while investigating related validity issues. The item bank drawn from the standardised scales comprised a total of 59 carefully calibrated, polytomously scored items that satisfied the IRT assumption of unidimensionality and showed good item-model fit.…”
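For a polytomous item bank of the kind described above, a typical CAT step is to administer the unanswered item with maximum Fisher information at the current ability estimate. The sketch below assumes Samejima's graded response model with made-up item parameters; it is not the calibrated CAT-IA bank.

```python
# Hedged sketch of maximum-information item selection for a graded-response-model bank.
import math

def grm_boundary(theta: float, a: float, b: float) -> float:
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def grm_item_information(theta: float, a: float, thresholds: list[float]) -> float:
    """Graded-response-model item information at ability theta."""
    # Boundary probabilities P*_0 = 1 > P*_1 > ... > P*_K = 0
    p_star = [1.0] + [grm_boundary(theta, a, b) for b in thresholds] + [0.0]
    info = 0.0
    for k in range(len(p_star) - 1):
        cat_prob = p_star[k] - p_star[k + 1]
        if cat_prob <= 0.0:
            continue
        deriv = a * (p_star[k] * (1 - p_star[k]) - p_star[k + 1] * (1 - p_star[k + 1]))
        info += deriv ** 2 / cat_prob
    return info

def select_next_item(theta: float, bank: list[tuple[float, list[float]]],
                     administered: set[int]) -> int:
    """Pick the unadministered item with maximum information at the current theta."""
    candidates = [(i, grm_item_information(theta, a, bs))
                  for i, (a, bs) in enumerate(bank) if i not in administered]
    return max(candidates, key=lambda pair: pair[1])[0]

# Tiny illustrative bank: (discrimination, [ordered category thresholds]).
bank = [(1.8, [-1.0, 0.0, 1.0]), (1.2, [-0.5, 0.5, 1.5]), (2.0, [0.2, 1.0, 1.8])]
print(select_next_item(theta=0.4, bank=bank, administered={0}))
```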
“…It is a systematic evaluation of the effectiveness of each test item. Zhang et al. (2019) explained that developing an item bank for CAT requires evaluating the unidimensionality assumption of the item pool, i.e., that it measures only the main latent trait; selecting a well-fitting IRT model; assessing local independence of the item pool, ensuring that an examinee’s response to one item is not influenced by the other test items; assessing monotonicity of the item pool, meaning that examinees with higher latent-trait levels have a higher probability of higher scores; and checking that items function equivalently for examinees of the same ability level, i.e., that they show no differential item functioning (DIF) (Aybek & Demirtasli, 2017). According to Izard (2005), item analysis is aimed at determining:…”
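One rough way to probe the unidimensionality requirement listed above is the ratio of the first to the second eigenvalue of the inter-item correlation matrix; a minimal sketch on simulated responses is given below. The threshold heuristic and data are illustrative assumptions, not the factor-analytic procedure used in the original study.

```python
# Hedged sketch of an eigenvalue-ratio check for a dominant first factor.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 400, 10
theta = rng.normal(size=(n_persons, 1))                  # one dominant latent trait
loadings = rng.uniform(0.5, 1.0, size=(1, n_items))
responses = theta @ loadings + rng.normal(scale=0.7, size=(n_persons, n_items))

corr = np.corrcoef(responses, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
ratio = eigvals[0] / eigvals[1]
# A first/second eigenvalue ratio well above ~3-4 is often read as support for
# a single dominant dimension; this is a heuristic, not a formal test.
print(f"First/second eigenvalue ratio: {ratio:.2f}")
```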