Ocular optics is normally estimated from up to 2,600 measurement points within the pupil of the eye, which implies a lateral resolution of approximately 175 µm for a 9 mm pupil diameter; information below this resolution has not been considered relevant, or even obtainable with current measurement systems. In this work, we characterize the in vivo ocular optics of the human eye with a lateral resolution of 8.6 µm, which corresponds to roughly one million measurement points for a 9 mm pupil diameter. The results suggest that the normal human eye presents a series of hitherto unknown optical patterns, with amplitudes between 200 and 300 nm, made up of in-phase peaks and valleys. When the results are analysed at high lateral frequencies only, the human eye is also found to contain a whole range of new information. This discovery could have a great impact on the way we understand some fundamental mechanisms of human vision and could be of outstanding utility in certain fields of ophthalmology.
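The relation between lateral sampling resolution and measurement-point count quoted above can be checked with a little arithmetic: the number of points is roughly the pupil area divided by the area of one sampling cell. A minimal sketch (the function name and the one-point-per-square-cell assumption are illustrative, not from the paper):

```python
import math

def measurement_points(pupil_diameter_mm, resolution_um):
    """Approximate number of sample points over a circular pupil,
    assuming one measurement point per square sampling cell of side
    `resolution_um` (an illustrative simplification)."""
    d_um = pupil_diameter_mm * 1000.0
    pupil_area = math.pi * (d_um / 2.0) ** 2
    cell_area = resolution_um ** 2
    return pupil_area / cell_area

# Conventional aberrometry: ~175 µm sampling over a 9 mm pupil
# gives on the order of 2,000 points (the abstract cites up to 2,600).
conventional = measurement_points(9, 175)

# This work: 8.6 µm sampling over the same pupil gives on the
# order of one million points, consistent with the abstract.
high_res = measurement_points(9, 8.6)
```

The two figures in the abstract (2,600 points at 175 µm; ~1 million at 8.6 µm) are thus mutually consistent to within the geometry of the sampling grid.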
Introduction: This study performs optical aberration assessment in patients using a novel ultra-high-resolution device. The objective of this study is to analyze optical aberrations, especially the very-high-order wavefront (beyond the 10th order of Zernike coefficients), and to compare keratoconus and healthy patients. Methods: In this cross-sectional study, we analyzed 43 eyes from 25 healthy patients and 43 eyes from 27 patients with keratoconus using corneal tomography and a very-high-resolution (8.55 µm) aberrometer prototype (T-eyede) outfitted with a sensor originally developed for use in the field of astrophysics. Corneal aberration values were assessed using an optical model built with Zemax optical software, while ocular aberrations were assessed using T-eyede. In addition, image-processing analysis was performed on the wavefront phase, creating a high-pass filter map. Results: We found lower values for ocular aberrations than for corneal aberrations in both groups (p < 0.001). Specifically, we found a reduction in primary astigmatism (0.145 µm) and primary coma (0.017 µm). The keratoconus group also showed significantly higher wavefront aberration values than controls (p < 0.001). Analysis of the high-pass filter map revealed two contrasting results: one smooth or clear, the other presenting a banding pattern. Almost all eyes in the control group (95%) showed the first pattern, while 77% of the keratoconus group showed a banding pattern on the filtered map (chi-squared test, p < 0.001). Conclusion: This device provides reliable, precise measurements of ocular aberrations that correlate well with corneal aberrations. Furthermore, the extraordinarily high-resolution measurements revealed unprecedented micro-scale changes in the wavefront phase of patients with keratoconus that varied with disease stage. These findings could lead to new screening or follow-up methods.
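A high-pass filter map of a wavefront phase, as described in the Methods, can be obtained by removing the low-frequency component of the phase so that only fine structure (where a banding pattern would appear) remains. A minimal sketch, assuming a simple Gaussian-blur-and-subtract approach; the filter, the `sigma` value, and the synthetic data below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_map(wavefront, sigma=10.0):
    """High-pass filter a wavefront phase map by subtracting a
    Gaussian-smoothed (low-spatial-frequency) version.
    `sigma` in pixels is an illustrative choice, not the paper's value."""
    low = gaussian_filter(wavefront, sigma=sigma)
    return wavefront - low

# Synthetic example: a smooth defocus-like term plus fine sinusoidal "banding"
y, x = np.mgrid[0:256, 0:256].astype(float)
smooth = 1e-3 * ((x - 128) ** 2 + (y - 128) ** 2)  # low-frequency aberration
bands = 0.25 * np.sin(2 * np.pi * x / 8)            # high-frequency bands
filtered = highpass_map(smooth + bands)
# The defocus-like term is largely removed; the banding survives the filter.
```

On the filtered map, a smooth eye would appear nearly featureless, while residual high-frequency structure like `bands` remains visible, which matches the smooth-versus-banding dichotomy the Results describe.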
Depth range cameras are a promising solution for the 3DTV production chain. The generation of color images with their accompanying depth values simplifies the transmission bandwidth problem in 3DTV and yields a direct input for autostereoscopic displays. Recent developments in plenoptic video-cameras make it possible to introduce 3D cameras that operate similarly to traditional cameras. The use of plenoptic cameras for 3DTV has some benefits with respect to 3D capture systems based on dual stereo cameras, since there is no need for geometric and color calibration or frame synchronization. This paper presents a method for simultaneously recovering depth and all-in-focus images from a plenoptic camera in near real time using graphics processing units (GPUs). Previous methods for 3D reconstruction using plenoptic images suffered from the drawback of low spatial resolution. A method that overcomes this deficiency is developed on parallel hardware to obtain near real-time 3D reconstruction with a final spatial resolution of 800×600 pixels. This resolution is suitable as an input to some autostereoscopic displays currently on the market and shows that real-time 3DTV based on plenoptic video-cameras is technologically feasible.
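The core idea of recovering depth and an all-in-focus image together can be sketched with a depth-from-focus approach: refocus the light field at several candidate depths, then pick, per pixel, the depth whose local contrast is highest and take that pixel for the composite. A minimal sketch, assuming a variance-based focus measure on an already-refocused image stack; the function names and parameters are illustrative, and the paper's GPU pipeline is considerably more elaborate (including regularization of the depth map):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_and_all_in_focus(stack, k=5):
    """Given a stack of images refocused at different depths
    (shape: depths x H x W), select per pixel the depth index with the
    highest local variance (focus measure), and build the all-in-focus
    image from the winning pixels. Illustrative simplification only."""
    def focus_measure(img):
        mean = uniform_filter(img, k)
        return uniform_filter(img * img, k) - mean * mean  # local variance

    focus = np.stack([focus_measure(img) for img in stack])
    depth = focus.argmax(axis=0)                            # per-pixel depth index
    aif = np.take_along_axis(stack, depth[None], axis=0)[0]  # all-in-focus composite
    return depth, aif
```

Each pixel of each slice is processed independently, which is what makes this family of methods a natural fit for GPU parallelization.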
AOLI (Adaptive Optics Lucky Imager) is a state-of-the-art instrument that combines adaptive optics (AO) and lucky imaging (LI) with the objective of obtaining diffraction-limited images at visible wavelengths on mid- and large-size ground-based telescopes. The key innovation of AOLI is the development and use of the new TP3-WFS (Two Pupil Plane Positions Wavefront Sensor). The TP3-WFS, working in the visible band, represents an advance over classical wavefront sensors such as the Shack-Hartmann WFS (SH-WFS) because it can theoretically use fainter natural reference stars, which would ultimately provide better sky coverage to AO instruments using this newer sensor. This paper describes the software, algorithms and procedures that enabled AOLI to become the first astronomical instrument performing real-time adaptive optics corrections in a telescope with this new type of WFS, including the first control-related results at the William Herschel Telescope (WHT).
In this paper we describe a fast, specialized hardware implementation of the belief propagation (BP) algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the light field of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel, pipelined architecture that implements the algorithm without external memory. Although the BRAM usage of the device increases considerably, real-time constraints can still be met thanks to the extremely high-performance signal processing achieved through parallelism and simultaneous access to several memories. The quantified results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm.
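The computational heart of belief propagation for depth labeling is the per-pixel message update: each outgoing message combines the data cost with incoming messages, then applies the smoothness cost over all depth labels. For a truncated linear smoothness cost this update can be done in O(L) with a forward/backward distance transform (Felzenszwalb and Huttenlocher's min-sum trick), which is also what makes the kernel friendly to pipelined hardware. A minimal sketch in Python for illustration; the function name, cost model, and normalization are assumptions, and the paper's actual implementation is fixed-point VHDL:

```python
import numpy as np

def message_update(h, lam, trunc):
    """Min-sum BP message update for a truncated linear smoothness cost
    V(i, j) = min(lam * |i - j|, trunc), computed in O(L) with a
    forward/backward distance transform. `h` is the sum of the data cost
    and incoming messages over the L depth labels."""
    m = h.astype(float).copy()
    L = len(m)
    for i in range(1, L):                  # forward pass
        m[i] = min(m[i], m[i - 1] + lam)
    for i in range(L - 2, -1, -1):         # backward pass
        m[i] = min(m[i], m[i + 1] + lam)
    cap = h.min() + trunc                  # apply truncation of the cost
    np.minimum(m, cap, out=m)
    m -= m.min()                           # normalize (keeps values bounded,
    return m                               # convenient for fixed-point hardware)
```

The two sequential sweeps map naturally onto a pipeline, and normalizing each message keeps the dynamic range bounded, which matters when the datapath is quantized to 16 bits as in the paper.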