East Asia has experienced a dramatic increase in myopia in recent decades, with more than 80% of the younger generation now affected. Environmental and genetic factors are both assumed to contribute to the development of refractive errors, but the etiology is unknown. The environmental factor argued to be of greatest importance in preventing myopia is a high level of daylight exposure. If true, myopia prevalence should be higher among adolescents living in high-latitude countries with fewer daylight hours in autumn-winter. We examined the prevalence of refractive errors in a representative sample of 16–19-year-old Norwegian Caucasians (n = 393, 41.2% males) in a representative region of Norway (60° North). At this latitude, autumn-winter is 50 days longer than summer. Using gold-standard methods of cycloplegic autorefraction and ocular biometry, the overall prevalence of myopia [spherical equivalent refraction (SER) ≤ −0.50 D] was 13%, considerably lower than in East Asians. Hyperopia (SER ≥ +0.50 D), astigmatism (≥1.00 DC) and anisometropia (≥1.00 D) were found in 57%, 9% and 4%, respectively. Norwegian adolescents thus seem to defy the worldwide trend of increasing myopia, suggesting a need to explore why daylight exposure during a relatively short summer outweighs that of the longer autumn-winter.
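The refractive categories above follow fixed dioptric cut-offs on the spherical equivalent refraction (SER). As a minimal sketch, the classification and a prevalence tally could look like the following; the SER values in the sample are invented for illustration and are not the study's data.

```python
# Sketch of SER-based classification using the cut-offs stated in the
# abstract (myopia SER <= -0.50 D, hyperopia SER >= +0.50 D).
# The sample values below are invented, purely for illustration.

def classify_ser(ser_d):
    """Return the refractive category for an SER value in dioptres."""
    if ser_d <= -0.50:
        return "myopia"
    if ser_d >= 0.50:
        return "hyperopia"
    return "emmetropia"

# Invented SER values (dioptres) standing in for per-subject measurements.
sample = [-2.25, -0.25, 0.75, 1.50, -0.50, 0.25, 2.00, -1.00]

counts = {}
for ser in sample:
    cat = classify_ser(ser)
    counts[cat] = counts.get(cat, 0) + 1

# Prevalence as a fraction of the sample.
prevalence = {cat: n / len(sample) for cat, n in counts.items()}
print(prevalence)
```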
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved. Previous evidence shows that the human visual system accounts for the distance the observer has walked and the separation of the eyes when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
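The Bayesian account sketched above can be illustrated as reliability-weighted averaging of a scale cue against a stable-scene prior, with cue noise growing at far viewing distances so that the prior dominates and an expanded scene goes unnoticed. The Gaussian form and the distance-dependent noise term here are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch of reliability-weighted (MLE-style) cue integration,
# assuming Gaussian cue likelihoods and a Gaussian "scenes are stable"
# prior. The linear growth of cue noise with distance is an assumption.

def combine(cue_mean, cue_sigma, prior_mean, prior_sigma):
    """Posterior mean of two Gaussian estimates, weighted by 1/variance."""
    w_cue = 1.0 / cue_sigma**2
    w_prior = 1.0 / prior_sigma**2
    return (w_cue * cue_mean + w_prior * prior_mean) / (w_cue + w_prior)

# The scene actually doubled in size (true scale factor 2.0), but the
# prior asserts that scenes do not change scale (1.0).
true_scale = 2.0
prior_mean, prior_sigma = 1.0, 0.3

for distance in (0.5, 1.5, 4.5):      # viewing distance in metres (invented)
    cue_sigma = 0.2 * distance        # assumed: cue noise grows with distance
    perceived = combine(true_scale, cue_sigma, prior_mean, prior_sigma)
    print(f"{distance:.1f} m -> perceived scale {perceived:.2f}")
```

Under these assumptions the perceived scale sits near the true value at near distances and collapses toward the prior at far distances, matching the qualitative pattern in the abstract.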
Visually recognizing objects at different orientations and distances has been assumed to depend either on extracting a viewpoint-invariant, typically three-dimensional (3D) structure from the retinal image, such as object parts, or on mentally transforming two-dimensional (2D) views. To test how these processes might interact, observers discriminated images of novel, computer-generated 3D objects that differed by rotations in 3D space and either in the number of parts (in principle a viewpoint-invariant, 'non-accidental' property) or in the curvature, length or angle of join of their parts (in principle each a viewpoint-dependent, metric property), with the discriminatory cue varying along a common physical scale. Although differences in the number of parts were more readily discriminated than differences in metric properties, the two showed almost exactly the same orientation dependence. Overall, visual performance proved remarkably lawful: for both long (2 s) and short (100 ms) display durations, it could be summarized by a simple, compact equation with one term representing generalized viewpoint-invariant, parts-based processing of 3D object structure (including metric structure) and another representing structure-invariant processing of 2D views. Object discriminability was determined by summing the signals from these two independent processes.
Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' (stereo and motion parallax) or 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and position of the target relative to other objects was varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying traditional models of 3D reconstruction.
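The prediction described above can be sketched in the standard cue-combination form, where each cue's weight is proportional to its inverse squared discrimination threshold (thresholds standing in for cue noise). The linear weighting on scale factors and all threshold values below are illustrative assumptions, not the authors' fitted model.

```python
# Hedged sketch of predicting distance-matching bias from the ratio of
# 'physical' to 'texture-based' thresholds. Weights follow the usual
# MLE rule, w proportional to 1/T^2. All numbers are invented.

def weights(t_physical, t_texture):
    """Normalized cue weights from discrimination thresholds."""
    w_p = 1.0 / t_physical**2
    w_t = 1.0 / t_texture**2
    total = w_p + w_t
    return w_p / total, w_t / total

def predicted_match(scale_factor, t_physical, t_texture):
    """Predicted perceived change in distance when the scene is rescaled.

    'Physical' cues (stereo, motion parallax) signal the true change in
    scale; 'texture-based' cues, being independent of scene scale,
    signal no change (1.0). The weighted sum is an assumed linear form.
    """
    w_p, w_t = weights(t_physical, t_texture)
    return w_p * scale_factor + w_t * 1.0

# Scene shrunk to a quarter of its size; physical cues assumed twice
# as precise as texture-based cues.
print(predicted_match(0.25, t_physical=0.1, t_texture=0.2))
```

When the two thresholds are equal the predicted match splits the difference between the cues; as the 'physical' threshold rises relative to the 'texture-based' one, the prediction shifts toward no perceived change, which is the sense in which the threshold ratio predicts the matching bias.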
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers underestimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an underestimation of distance walked. We discuss implications for theories of a task-independent representation of visual space.