We demonstrate a coherent random fiber laser (RFL) operating in the extremely weakly scattering regime, based on a dispersed solution of polyhedral oligomeric silsesquioxane nanoparticles (NPs) and the laser dye pyrromethene 597 in carbon disulfide, injected into a hollow optical fiber. Multiple scattering by the NPs, greatly enhanced by the waveguide confinement effect, was experimentally verified to account for the coherent lasing observed in our RFL system. This Letter extends NP-based RFLs from the incoherent regime to the coherent regime.
The construction of three-dimensional covalent organic frameworks (3D COFs) has proven very challenging, as their synthetic driving force comes mainly from the formation of covalent bonds. To facilitate synthesis, rigid building blocks are usually the first choice for designing 3D COFs. In principle, it would be very appealing to construct 3D COFs from flexible building blocks, but several obstacles block the development of such systems, especially in designed synthesis and structure determination. Herein, we report a highly crystalline 3D COF (FCOF-5) with flexible C–O single bonds in the building-block backbone. By merging 17 continuous rotation electron diffraction data sets, we determined the crystal structure of FCOF-5 to have a 6-fold interpenetrated pts topology. Interestingly, FCOF-5 is flexible and undergoes reversible expansion/contraction upon vapor adsorption/desorption, indicating a breathing motion. Moreover, we fabricated a smart soft polymer composite film with FCOF-5 that shows a reversible vapor-triggered shape transformation. Thus, 3D COFs constructed from flexible building blocks can exhibit interesting breathing behavior, representing a new type of soft porous crystal built from a purely organic framework.
Despite significant recent progress, dense, time-resolved imaging of complex, non-stationary 3D flow velocities remains an elusive goal. In this work we tackle this problem by extending an established 2D method, Particle Image Velocimetry, to three dimensions by encoding depth into color. The encoding is achieved by illuminating the flow volume with a continuum of light planes (a "rainbow"), such that each depth corresponds to a specific wavelength of light. A diffractive component in the camera optics ensures that all planes are in focus simultaneously. With this setup, a single color camera suffices to track 3D particle trajectories by combining 2D spatial and 1D color information. For reconstruction, we derive an image formation model for recovering stationary 3D particle positions. 3D velocity estimation is achieved with a variant of 3D optical flow that accounts for both physical constraints and the rainbow image formation model. We evaluate our method with both simulations and an experimental prototype setup.
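The core idea of the rainbow encoding can be sketched in a few lines: a particle's observed wavelength indexes its depth, and the sensor coordinates give the remaining two dimensions. The linear wavelength-to-depth mapping, the numeric ranges, and the function names below are illustrative assumptions, not values from the paper.

```python
def wavelength_to_depth(wavelength_nm, lam_min=450.0, lam_max=650.0,
                        z_min=0.0, z_max=10.0):
    """Map an observed wavelength (nm) to a depth (mm) in the scanned
    volume, assuming the rainbow spreads linearly from lam_min at z_min
    to lam_max at z_max (illustrative ranges, not from the paper)."""
    t = (wavelength_nm - lam_min) / (lam_max - lam_min)
    return z_min + t * (z_max - z_min)

def particle_position(x_px, y_px, wavelength_nm, px_pitch_mm=0.02):
    """Fuse 2D pixel coordinates with 1D color information into a 3D
    position: (x, y) from the sensor plane, z from the particle color."""
    return (x_px * px_pitch_mm, y_px * px_pitch_mm,
            wavelength_to_depth(wavelength_nm))
```

In the actual system this per-particle lookup is replaced by the paper's image formation model and optical-flow solver, but the sketch shows why a single color camera carries enough information for 3D localization.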
(With Demo) The Wavefront Sensing Problem. Wavefront sensing is an old yet fundamental problem in optics: phase cannot be measured directly with intensity sensors, so the problem requires a joint design of both hardware and software. Traditional wavefront sensors [1,2] recover the wavefront from its gradients ∇φ, where k is the wave number. A direct linearization of the image formation leads to the so-called optical flow method [4]. We devise our own reconstruction method, adding a wavefront smoothness regularizer and solving for the wavefront directly. In linear algebra terms, we minimize the data term ‖M(G∇φ + g_t)‖² plus a smoothness penalty on ∇φ, where G is a concatenated diagonal matrix with the image derivatives ((g_x, g_y) = ∇I_0(r)) on the diagonal, g_t = I(r) − I_0(r) is a "time" derivative, and M is a binary diagonal matrix that selects only the visible pixels from the wavefront samples. Our solver employs ADMM [5]; each update step enjoys a closed-form solution and is parallelizable on a GPU.
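A minimal numerical sketch of this least-squares formulation follows. It solves the regularized objective directly via the normal equations on a tiny grid, as a dense stand-in for the ADMM solver; the finite-difference operator construction, the parameter values, and the extra tiny Tikhonov term (which only pins down the wavefront's arbitrary constant offset) are assumptions for illustration.

```python
import numpy as np

def grad_op(n):
    """Forward-difference gradient on an n x n grid as a dense
    (2*n*n, n*n) matrix stacking [Dx; Dy] (zero at the far boundary)."""
    d = np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), 1)
    d[-1, :] = 0.0
    I = np.eye(n)
    return np.vstack([np.kron(I, d),   # derivative along x (within rows)
                      np.kron(d, I)])  # derivative along y (across rows)

def reconstruct_wavefront(gx, gy, gt, mask, lam=1e-2, mu=1e-8):
    """Solve  min_phi ||M (G D phi + g_t)||^2 + lam ||D phi||^2  via the
    normal equations. lam and mu are illustrative; mu only fixes the
    arbitrary constant offset of the recovered wavefront."""
    n = gt.shape[0]
    D = grad_op(n)                                             # (2N, N)
    G = np.hstack([np.diag(gx.ravel()), np.diag(gy.ravel())])  # (N, 2N)
    m = mask.ravel().astype(float)
    A = (m[:, None] * G) @ D                                   # M G D
    H = A.T @ A + lam * (D.T @ D) + mu * np.eye(n * n)
    phi = np.linalg.solve(H, -A.T @ (m * gt.ravel()))
    return phi.reshape(n, n)
```

On a synthetic tilt (φ proportional to the column index, with gx = 1, gy = 0, and g_t = −∇_x φ), the solver recovers the tilt slope up to the expected shrinkage factor 1/(1 + lam).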
Diffractive optical elements (DOEs) have recently drawn great attention in computational imaging because they can drastically reduce the size and weight of imaging devices compared to their refractive counterparts. However, their inherently strong dispersion is a major obstacle to using DOEs for full-spectrum imaging, causing unacceptable loss of color fidelity in the images. In particular, metamerism introduces a data dependency in the image blur, which computational imaging methods have so far neglected. We introduce both a diffractive achromat based on computational optimization and a corresponding algorithm for the correction of residual aberrations. Using this approach, we demonstrate high-fidelity color imaging over the full visible spectrum with diffractive optics alone. In the optical design, the height profile of a diffractive lens is optimized to balance the focusing contributions of different wavelengths for a specific focal length. The spectral point spread functions (PSFs) become nearly identical to each other, yielding approximately spectrally invariant blur kernels. This property ensures good color preservation in the captured image and facilitates the correction of residual aberrations in our fast two-step deconvolution without additional color priors. We fabricate the diffractive achromat on a 0.5 mm ultrathin substrate by photolithography. Experimental results show that our achromatic diffractive lens produces high color fidelity and better image quality across the full visible spectrum.
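The benefit of spectrally invariant blur kernels can be illustrated with a single shared deconvolution per color channel. The sketch below uses plain frequency-domain Wiener filtering with a scalar SNR as a simplified stand-in for the paper's two-step deconvolution; the function name and SNR value are assumptions.

```python
import numpy as np

def wiener_deconvolve(channel, psf, snr=1e3):
    """Frequency-domain Wiener deconvolution of one color channel.
    Because the achromat's PSFs are nearly identical across wavelengths,
    one blur kernel can be shared by all channels, so no per-channel
    color prior is needed. The scalar SNR is illustrative."""
    # Center the PSF at the origin, zero-pad to the image size, transform.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=channel.shape)
    # Classic Wiener filter: H* / (|H|^2 + 1/SNR).
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * W))
```

Applying the same `wiener_deconvolve` call to each of the R, G, and B channels with one common PSF is exactly what the spectrally invariant blur makes possible.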
High-dynamic-range (HDR) imaging is an essential imaging modality for a wide range of applications in uncontrolled environments, including autonomous driving, robotics, and mobile-phone cameras. However, existing HDR techniques in commodity devices struggle with dynamic scenes due to multi-shot acquisition and postprocessing time (e.g., mobile-phone burst photography), making such approaches unsuitable for real-time applications. In this work, we propose a method for snapshot HDR imaging that learns an optical HDR encoding in a single image, mapping saturated highlights into neighboring unsaturated areas using a diffractive optical element (DOE). We propose a novel rank-1 parameterization of the DOE which drastically reduces the optical search space while allowing us to efficiently encode high-frequency detail, together with a reconstruction network tailored to this rank-1 parameterization for recovering clipped information from the encoded measurements. The proposed end-to-end framework is validated through simulation and real-world experiments and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.
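The parameter-count saving of a rank-1 parameterization is easy to see in code: the full 2D height map is the outer product of two 1D vectors, so an N × N design has 2N rather than N² free variables. This is a sketch of the idea only; the paper's exact parameterization, phase quantization, and fabrication constraints are omitted.

```python
import numpy as np

def rank1_heightmap(u, v):
    """Rank-1 DOE parameterization: the 2D height profile is the outer
    product of two 1D vectors, shrinking the optimization search space
    from N*N free parameters to 2*N (illustrative sketch)."""
    return np.outer(u, v)

N = 128
rng = np.random.default_rng(0)
height = rank1_heightmap(rng.uniform(size=N), rng.uniform(size=N))
```

During end-to-end training, gradients with respect to the full height map backpropagate to just the two vectors `u` and `v`, which is what makes the optical search tractable.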
an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor. Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor. CCS Concepts: • Computing methodologies → Hyperspectral imaging.
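The forward model that deconvolution-based spectral reconstruction must invert can be sketched as follows: each spectral slice of the scene is blurred by its wavelength-dependent PSF and the blurred slices sum on the sensor. Sensor spectral response weights and the spatial-spectral prior are omitted here for brevity; the function name and array layout are assumptions.

```python
import numpy as np

def forward_model(cube, psfs):
    """Simplified image formation for DOE-based snapshot spectral
    imaging: blur each spectral band (last axis of `cube`) with its own
    PSF and accumulate onto a single sensor image."""
    y = np.zeros(cube.shape[:2])
    for b in range(cube.shape[2]):
        # Center each band's PSF at the origin and pad to the image size.
        H = np.fft.fft2(np.fft.ifftshift(psfs[:, :, b]), s=y.shape)
        y += np.real(np.fft.ifft2(np.fft.fft2(cube[:, :, b]) * H))
    return y
```

Reconstruction then amounts to recovering `cube` from the single measurement `y`, which is where the spatial-spectral prior becomes essential, since the problem is heavily underdetermined.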