The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous here because they measure association rather than agreement, and they should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed, in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates.
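The limits-of-agreement calculation referred to above can be sketched in a few lines: the 95% limits are the mean of the paired differences plus or minus 1.96 times the standard deviation of those differences. The instrument readings below are illustrative values invented for the example, not data from any cited study.

```python
# Minimal sketch of the Bland-Altman 95% limits-of-agreement calculation.
# The axial-length readings below are made-up illustrative values.
import statistics

def limits_of_agreement(a, b):
    """Return (mean difference, lower LoA, upper LoA) for paired readings.

    The 95% limits of agreement are mean difference +/- 1.96 * SD of the
    differences between the two instruments.
    """
    diffs = [x - y for x, y in zip(a, b)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)  # sample SD (n - 1 denominator)
    return mean_diff, mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Hypothetical axial-length measurements (mm) from two biometers
inst_a = [23.10, 23.85, 24.40, 22.95, 25.10, 23.60]
inst_b = [23.05, 23.90, 24.30, 23.00, 25.00, 23.55]
bias, lower, upper = limits_of_agreement(inst_a, inst_b)
```

In an agreement study the interval (lower, upper) is then judged against a clinically acceptable difference decided in advance, rather than against a significance threshold.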
This review provides a descriptive catalog of ophthalmic PRO instruments to inform researchers and clinicians on the choice of the highest-quality PRO instrument suitable for their purpose.
Measurements taken with the Pentacam HR are repeatable and reproducible, especially those obtained with the cornea fine scan. Although the Pentacam HR is clearly a very useful clinical and research tool, the measurement of corneal axes, pupil center pachymetry, front meridional and axial maps, refractive power maps, and equivalent K readings should be interpreted with caution.
Methods: A list of 121 items was generated from 13 focus groups with children and young people with and without a visual impairment. A long 89-item questionnaire was piloted with 45 visually impaired children and young people using face-to-face interviews. Rasch analysis was used to analyze the response category function and to facilitate item removal, ensuring a valid unidimensional scale. The validity and reliability of the short questionnaire were assessed in a group of 109 visually impaired children (58.7% boys; median age, 13 years) using Rasch analysis and the intraclass correlation coefficient (ICC).
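The intraclass correlation coefficient used above for reliability can be sketched from its two-way ANOVA definition. The version shown is ICC(2,1) (two-way random effects, absolute agreement, single measurement); the abstract does not state which ICC form was used, and the test-retest scores below are invented for illustration.

```python
# Sketch of ICC(2,1) from its two-way ANOVA mean squares.
# The questionnaire scores below are hypothetical, not study data.
def icc2_1(ratings):
    """ratings: list of n subjects, each a list of k repeated scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)      # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)      # occasions
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    # ICC(2,1): absolute agreement, single rater/occasion
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical test-retest questionnaire scores for five respondents
scores = [[10, 11], [14, 13], [8, 9], [20, 19], [15, 16]]
icc = icc2_1(scores)
```

Values near 1 indicate that repeated administrations rank and scale respondents consistently; reliability conventions in the PRO literature typically ask for an ICC of at least 0.7-0.9 depending on the intended use.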
The GAL-9 has psychometric properties superior to those of the GQL-15. Its only limitation is poor targeting of item difficulty to person ability, which is an inevitable attribute of a vision-related activity limitation instrument for glaucoma patients, most of whom have only peripheral visual field defects and little difficulty with daily activities.
Background: A critical component that influences the measurement properties of a patient-reported outcome (PRO) instrument is the rating scale. Yet there is no general consensus on the optimal rating scale format, including aspects of question structure and the number and labelling of response categories. This study aims to explore the characteristics of rating scales that function well and those that do not, and thereby to develop guidelines for formulating rating scales.
Methods: Seventeen existing PROs designed to measure vision-related quality-of-life dimensions were mailed for self-administration, in sets of 10, to patients on a waiting list for cataract extraction. These PROs included questions with ratings of difficulty, frequency, and severity, as well as global ratings. Using Rasch analysis, the performance of the rating scales was assessed by examining hierarchical ordering (indicating categories are distinct from each other and follow a logical transition from lower to higher value), evenness (indicating relative utilization of categories), and range (indicating coverage of the attribute by the rating scale).
Results: Rating scales with a complicated question format, a large number of response categories, or unlabelled categories tended to be dysfunctional. Rating scales with five or fewer response categories tended to be functional. Most of the rating scales measuring difficulty performed well. The rating scales measuring frequency and severity demonstrated hierarchical ordering, but the categories lacked even utilization.
Conclusion: Developers of PRO instruments should use a simple question format with fewer (four to five) labelled response categories.