This paper describes a multi-spectral imaging near-infrared (NIR) transflectance system developed for on-line determination of the crude chemical composition of highly heterogeneous foods and other bio-materials. The system was evaluated for moisture determination in 70 dried salted coalfish (bacalao), an extremely heterogeneous product. A spectral image cube was obtained for each fish, and different sub-sampling approaches for spectral extraction and partial least squares calibration were evaluated. The best prediction models achieved correlation (R²) values around 0.92 and a root mean square error of cross-validation of 0.70%, which is much more accurate than today's traditional manual grading. The combination of non-contact NIR transflectance measurements with spectral imaging allows rather deep-penetrating optical sampling as well as large flexibility in spatial sampling patterns and calibration approaches. The technique works well for moisture determination in heterogeneous foods and should, in principle, work for other NIR-absorbing compounds such as fat and protein. A part of this study compares the principles of reflectance, contact transflectance and non-contact transflectance with regard to water determination in a set of 20 well-defined dried salted cod samples. Transflectance and non-contact transflectance performed equally well and were superior to reflectance measurements, since the measured light penetrated deeper into the sample.
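As a hedged illustration of the calibration step described above (a sketch, not the authors' exact pipeline), a single-response partial least squares model (PLS1 via the NIPALS algorithm) relating extracted spectra to reference moisture values could look as follows; the data shapes, component count, and function names are assumptions for illustration:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Fit a single-response PLS (PLS1, NIPALS) model.

    X: (n_samples, n_wavelengths) spectra; y: (n_samples,) reference values.
    Returns regression coefficients and the centering means.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector (max covariance with y)
        t = Xc @ w                        # scores
        p = Xc.T @ t / (t @ t)            # X loadings
        qk = yc @ t / (t @ t)             # y loading
        Xc -= np.outer(t, p)              # deflate X and y
        yc -= qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)   # overall regression coefficients
    return b, x_mean, y_mean

def pls1_predict(b, x_mean, y_mean, X_new):
    """Predict the response (e.g., moisture) for new spectra."""
    return y_mean + (X_new - x_mean) @ b
```

Run inside a cross-validation loop, the root mean square error of cross-validation (RMSECV) quoted in the abstract is simply the RMS of the held-out prediction errors.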
High-quality video observations are much needed in underwater environments for monitoring several ecosystem indicators and for supporting the sustainable development and management of almost all activities in the ocean. Reliable video observations are, however, challenging to collect because of the generally poor visibility conditions and the difficulty of deploying cost-effective sensors and platforms in the marine environment. Visibility in water is governed by the availability of natural light at different depths and by the presence of suspended particles, which scatter incident light in all directions. Both factors also vary widely in time and space, making it difficult to identify technological solutions that work in all conditions. By combining state-of-the-art "time of flight" (ToF) image sensors and innovative pulsed laser illumination, we have developed a range-gated camera system (UTOFIA) that enables affordable, enhanced 3D underwater imaging at high resolution. This range-gated solution allows users to eliminate close-range backscattering, improving image quality and providing information on the distance of each illuminated object, hence giving access to real-time 3D measurements. Furthermore, as the system is based on pulsed laser light, it is almost independent of natural light conditions and achieves similar performance over an extended depth range. We use this system to collect observations in different oceanographic conditions and for different applications, including aquaculture monitoring, seafloor mapping, litter identification and structure inspection. Performance is evaluated by comparing images to those from regular cameras and by using standard targets to assess the accuracy and precision of distance measurements.
We suggest that this type of technology can become a standard in underwater 3D imaging to support the future development of the ocean economy. (Sustainability 2019, 11, 162.) Suspended particles create a turbid environment that strongly increases light scattering and raises the absorption probability of photons [3]. When the light source is the sun, this process effectively decreases the amount of ambient light present at any depth and limits the range of visual observations. With artificial illumination, the range of underwater vision can be extended (for example, we can move deeper), but at the cost of degraded image contrast due to forward- and back-scattering from suspended particles. The situation is similar to driving a car in foggy conditions with the headlights on: increasing the power of the illumination does not improve visibility, as the backscattering increases proportionally. Image contrast is also lowered, and the visual range shortened, as light attenuation reduces the illumination of distant targets. These factors remain the outstanding challenges in underwater imaging and limit the application of visual observations in many sectors [1,2,4]. Various optical and acoustic imaging systems for mitigating or solving these problems are under constant development […]
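The headlight analogy can be made quantitative with a toy Beer–Lambert model: both the target return and the water-column backscatter scale linearly with source power, so raising the power leaves the image contrast unchanged, while range gating removes the backscatter term entirely. The attenuation coefficient, albedo, and scattering strength below are illustrative assumptions, not measured values:

```python
import math

def target_return(power, distance_m, attenuation=0.3, albedo=0.5):
    """Light returned from a target after two-way Beer-Lambert attenuation."""
    return power * albedo * math.exp(-2 * attenuation * distance_m)

def backscatter(power, distance_m, attenuation=0.3, scatter=0.2):
    """Toy model of light scattered back by the water column in front of the target.

    Integrates exp(-2*c*z) over the column from 0 to the target distance.
    """
    c = attenuation
    return power * scatter * (1 - math.exp(-2 * c * distance_m)) / (2 * c)

def contrast(power, distance_m):
    """Fraction of received light that comes from the target (not the water)."""
    s = target_return(power, distance_m)
    b = backscatter(power, distance_m)
    return s / (s + b)
```

Doubling `power` leaves `contrast` unchanged, since both terms scale together; dropping the backscatter term, as a range gate effectively does, is what restores contrast.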
3D imaging systems provide valuable information for autonomous robot navigation based on landmark detection in pipelines. This paper presents a method for using a time-of-flight (TOF) camera for detection and tracking of pipeline features such as junctions, bends and obstacles. Feature extraction is done by fitting a cylinder to images of the pipeline. Data in captured images appear to take a conic rather than cylindrical shape, and we adjust the geometric primitive accordingly. Pixels deviating from the estimated cylinder/cone fit are grouped into blobs. Blobs fulfilling constraints on shape and stability over time are then tracked. The usefulness of TOF imagery as a source for landmark detection and tracking in pipelines is evaluated by comparison to auxiliary measurements. Experiments using a model pipeline and a prototype robot show encouraging results.
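The grouping step described above, collecting pixels that deviate from the fitted cylinder/cone into blobs, can be sketched as a plain connected-component pass over the residual image. The threshold, 4-connectivity, and minimum blob size here are assumptions for illustration, not the paper's parameters:

```python
from collections import deque

def find_blobs(residuals, threshold=0.05, min_size=3):
    """Group pixels whose fit residual exceeds threshold into 4-connected blobs."""
    rows, cols = len(residuals), len(residuals[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or abs(residuals[r][c]) <= threshold:
                continue
            # breadth-first flood fill from this deviating pixel
            blob, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                blob.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and abs(residuals[ny][nx]) > threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(blob) >= min_size:  # size constraint before tracking
                blobs.append(blob)
    return blobs
```

Blobs surviving the size filter would then be matched frame-to-frame for the stability-over-time check before being tracked as landmarks.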
Recently, range imaging (RIM) cameras have become available that capture high-resolution range images at video rate. Such cameras measure the distance to the scene for each pixel independently, based on a measured time of flight (TOF). Some cameras, such as the SwissRanger™ SR-3000, measure the TOF from the phase shift of reflected light from a modulated light source. Such cameras are shown to be susceptible to severe distortions in the measured range due to light scattering within the lens and camera. Earlier work used a simplified Gaussian point spread function and inverse filtering to compensate for such distortions. In this work, a method is proposed for identifying and using generally shaped empirical models of the point spread function to obtain a more accurate compensation. The otherwise difficult inverse problem is solved by applying the forward model iteratively, following well-established procedures from image restoration. Each iteration is a sequential process, starting with the brightest parts of the image and moving towards the least bright parts, with each step subtracting the estimated scattering effects from the measurements. This approach gives faster and more reliable compensation convergence. An average error reduction of more than 60% is demonstrated on real images. The computational load corresponds to one or two convolutions of the measured complex image with a real filter of the same size as the image.
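A one-dimensional sketch of the brightest-first sequential subtraction may clarify the idea (the real system works on complex-valued 2D images with an empirically measured point spread function; the toy kernel and signal here are assumptions):

```python
def compensate_scatter(measured, kernel):
    """Sequentially subtract estimated scattering, brightest pixels first.

    measured: list of floats; kernel: dict offset -> scatter weight
    (offset 0 is excluded, i.e. the direct signal is not spread onto itself).
    """
    est = list(measured)
    # fixed processing order: brightest measured pixels first
    order = sorted(range(len(est)), key=lambda i: measured[i], reverse=True)
    for i in order:
        amp = est[i]                    # current estimate of the true signal here
        for offset, weight in kernel.items():
            j = i + offset
            if 0 <= j < len(est):
                est[j] -= weight * amp  # remove this pixel's scattered light
    return est

def add_scatter(true_signal, kernel):
    """Forward model for testing: measured = true + scattered light."""
    m = list(true_signal)
    for i, amp in enumerate(true_signal):
        for offset, weight in kernel.items():
            j = i + offset
            if 0 <= j < len(m):
                m[j] += weight * amp
    return m
```

Processing bright pixels first matters because their scattered light dominates the measurements at dimmer pixels; subtracting it early keeps later amplitude estimates from being biased.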
We present a range-gated camera system designed for real-time (10 Hz) 3D estimation underwater. The system uses a fast-shutter CMOS sensor (1280×1024) customized to facilitate gating with 1.67 ns (18.8 cm in water) delay steps relative to the triggering of a solid-state actively Q-switched 532 nm laser. A depth estimation algorithm has been carefully designed to handle the effects of light scattering in water, i.e., forward and backward scattering. The raw range-gated signal is carefully filtered to reduce noise while preserving the signal even in the presence of unwanted backscatter. The resulting signal is proportional to the number of photons that are reflected during a small time unit (range), and objects will show up as peaks in the filtered signal. We present a peak-finding algorithm that is robust to unwanted forward scatter peaks and at the same time can pick out distant peaks that are barely higher than peaks caused by sensor and intensity noise. Super-resolution is achieved by fitting a parabola around the peak, which we show can provide depth precision below 1 cm at high signal levels. We show depth estimation results when scanning a range of 8 m (typically 1-9 m) at 10 Hz. The results are dependent on the water quality. We are capable of estimating depth at distances of over 4.5 attenuation lengths when imaging high albedo targets at low attenuation lengths, and we achieve a depth resolution (σ) ranging from 0.8 to 9 cm, depending on signal level.
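Two of the numbers above can be checked directly: the one-way range covered by a 1.67 ns gate step, and the three-point parabolic peak interpolation used for super-resolution. A minimal sketch, assuming a refractive index of water of about 1.33:

```python
C_VACUUM = 2.998e8  # speed of light in vacuum, m/s
N_WATER = 1.33      # refractive index of water

def gate_step_to_range(delay_s):
    """One-way range per gate delay: light travels delay * (c/n) round trip."""
    return (C_VACUUM / N_WATER) * delay_s / 2

def parabolic_peak_offset(y_prev, y_peak, y_next):
    """Sub-bin offset of the vertex of the parabola through three samples.

    Returns a value in (-0.5, 0.5) in bin units; add it to the peak index.
    """
    return 0.5 * (y_prev - y_next) / (y_prev - 2 * y_peak + y_next)
```

A 1.67 ns step indeed maps to roughly 18.8 cm in water, and the parabola fit refines the peak position well below one gate step at good signal levels, consistent with the sub-centimetre precision quoted above.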