Light field cameras capture both the spatial and the angular properties of light rays in space. Thanks to this property, depth can be computed from light fields in uncontrolled lighting environments, a significant advantage over active sensing devices. Depth computed from light fields can serve many applications, including 3D modelling and refocusing. However, light field images from hand-held cameras have very narrow baselines and are noisy, making depth estimation difficult. Many approaches have been proposed to overcome these limitations, but they exhibit a clear trade-off between accuracy and speed. In this paper, we introduce a fast and accurate light field depth estimation method based on a fully convolutional neural network. Our network is designed with light field geometry in mind, and we overcome the lack of training data by proposing light-field-specific data augmentation methods. We achieved the top rank in the HCI 4D Light Field Benchmark on most metrics, and we also demonstrate the effectiveness of the proposed method on real-world light field images.
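The augmentation idea above hinges on a light-field-specific constraint: a geometric transform such as a 90° rotation must be applied to both the spatial axes of every sub-aperture image and the angular grid itself, or the epipolar geometry is destroyed. The following is a minimal sketch of that idea, assuming a 4D light field array of shape (U, V, H, W); it is an illustrative helper, not the authors' implementation.

```python
import numpy as np

def rotate_lf_90(lf):
    """Rotate a 4D light field (U, V, H, W) by 90 degrees.

    A valid light-field rotation must rotate both the spatial
    content of each sub-aperture image AND the angular arrangement
    of the views; rotating only the images breaks the epipolar
    lines. (Hypothetical sketch, not the paper's exact code.)
    """
    # Rotate the spatial axes of every sub-aperture image.
    out = np.rot90(lf, k=1, axes=(2, 3))
    # Rotate the angular grid consistently with the spatial rotation.
    out = np.rot90(out, k=1, axes=(0, 1))
    return out
```

Applying the function four times returns the original light field, which is a quick sanity check that the transform is a proper rotation of the 4D volume.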
Commercial light-field cameras provide spatial and angular information, but their limited resolution is a significant problem in practical use. In this paper, we present a novel method for light-field image super-resolution (SR) based on a deep convolutional neural network. Rather than the conventional optimization framework, we adopt a data-driven learning method to simultaneously up-sample both the angular and the spatial resolution of a light-field image. We first augment the spatial resolution of each sub-aperture image with a spatial SR network to enhance details. Then, novel views between the sub-aperture images are generated by an angular SR network. These networks are trained independently and then fine-tuned jointly via end-to-end training. The proposed method achieves state-of-the-art performance on the HCI synthetic dataset and is further evaluated on challenging real-world applications, including refocusing and depth map estimation.
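The two-stage pipeline described above can be sketched in a few lines, with the learned networks replaced by trivial stand-ins (nearest-neighbor up-sampling for the spatial SR network, averaging of adjacent views for the angular SR network). This only illustrates the data flow, not the networks themselves.

```python
import numpy as np

def spatial_sr(view, scale=2):
    """Stand-in for the spatial SR network: nearest-neighbor
    up-sampling replaces the learned CNN (assumption)."""
    return view.repeat(scale, axis=0).repeat(scale, axis=1)

def angular_sr(view_a, view_b):
    """Stand-in for the angular SR network: a novel in-between
    view is approximated by averaging two neighbors (assumption)."""
    return 0.5 * (view_a + view_b)

def lf_super_resolve(views, scale=2):
    """Two-stage pipeline: (1) spatially up-sample each
    sub-aperture image, (2) synthesize novel views between
    adjacent sub-aperture images along a 1D angular row."""
    hi = [spatial_sr(v, scale) for v in views]
    out = []
    for a, b in zip(hi[:-1], hi[1:]):
        out.append(a)
        out.append(angular_sr(a, b))  # novel view between a and b
    out.append(hi[-1])
    return out
```

For a row of N views this returns 2N − 1 views, each spatially up-sampled by `scale`; the real method replaces both stand-ins with trained networks and fine-tunes them end-to-end.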
A novel method for direct writing of eutectic gallium indium (EGaIn) patterns on uneven surfaces, including both inclined and curved substrates, is reported. The approach relies on four-degrees-of-freedom motion control of the pressurized EGaIn dispenser and precise sensing of the dispenser tip–substrate distance. The experimental hardware is built from three motorized linear stages, a motorized rotation stage, two electronic pressure regulators, and a laser distance sensor, and operating programs are developed. While the rotation stage keeps the laser sensor preceding the dispenser tip by a predetermined distance, the vertical stage maintains the dispenser tip–substrate distance using the laser sensor readout recorded beforehand. By incorporating the time delay between the laser sensor and the dispenser tip into the feedback control of the tip position, various EGaIn patterns are directly written on uneven substrates with widths of 70–80 µm. Electrical connectivity and structural integrity of the written EGaIn patterns are confirmed by a light-emitting diode mounted between two end segments of the patterns. The maximum slope for reliable patterning is found to be ≈20°. To show practical applications of this new concept, a curved keypad and a glove-type wearable device with integrated strain sensors are demonstrated.
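The delayed-feedback scheme described above (the laser sensor measures the surface ahead of the tip, and each reading is used later, once the tip reaches the measured point) can be simulated in a few lines. The sampling grid, the `lead_steps` delay, and the `standoff` parameter are illustrative assumptions, not values from the paper.

```python
from collections import deque

def run_height_control(surface_heights, lead_steps, standoff):
    """Simulate the delayed-feedback height control: the laser
    sensor precedes the dispenser tip by `lead_steps` samples
    along the path, so each surface reading is buffered and
    applied `lead_steps` samples later to hold the tip at a
    fixed `standoff` above the substrate.
    (Hypothetical simulation, not the authors' control code.)
    """
    buffer = deque()
    tip_heights = []
    for x, h in enumerate(surface_heights):
        buffer.append(h)  # sensor reads the surface ahead of the tip
        if x >= lead_steps:
            # the tip now reaches the point measured lead_steps ago
            tip_heights.append(buffer.popleft() + standoff)
    return tip_heights
```

On a linearly rising surface the commanded tip heights simply track the surface profile shifted by the lead distance, which is the behavior the rotation stage and laser sensor are arranged to guarantee.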