Hydrochromic materials have been actively investigated in the context of humidity sensing and measuring the water content of organic solvents. Here we report a sensor system that undergoes a brilliant blue-to-red colour transition as well as ‘Turn-On’ fluorescence upon exposure to water. Introduction of a hygroscopic element into a supramolecularly assembled polydiacetylene results in a hydrochromic conjugated polymer that is rapidly responsive (<20 μs), spin-coatable and inkjet-compatible. Importantly, the hydrochromic sensor is found to be suitable for mapping human sweat pores. The exceedingly small quantities (sub-nanolitre) of water secreted from sweat pores are sufficient to promote an instantaneous colorimetric transition of the polymer. As a result, the sensor can be used to construct a precise map of active sweat pores on fingertips. The sensor technology developed in this study has the potential to serve as a new method for fingerprint analysis and for the clinical diagnosis of malfunctioning sweat pores.
Urine tests are performed by comparing the colours of test strips against an off-the-shelf reference sheet. However, the tabular representation is difficult to use and prone to visual errors, especially when the reference colour swatches to be compared are spatially far apart, making it difficult to distinguish subtle differences of shade on the reagent pads. This manuscript presents a new arrangement of reference arrays for urine test strips (urinalysis). Reference colour swatches are grouped in a doughnut chart surrounding each reagent pad on the strip, so the urine test can be evaluated with the naked eye by referring to the strip itself, with no additional sheet necessary. Along with this new strip, an algorithm for a smartphone-based application is also proposed as an alternative way to deliver diagnostic results. The proposed colorimetric detection method analyses the captured image of the strip in various colour spaces and evaluates ten different urine tests. Thus, the proposed system can deliver results on the spot using both the naked eye and a smartphone. The proposed scheme delivered accurate results under various environmental illumination conditions without any calibration requirements, exhibiting performance suitable for real-life applications and ease of use for a non-expert user.
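The core of the colorimetric evaluation, matching each reagent pad's colour against its surrounding reference swatches, can be sketched as a nearest-swatch lookup. This is a minimal illustration only: the swatch values and the single-colour-space distance are hypothetical stand-ins, not the paper's calibrated swatches or its multi-colour-space algorithm.

```python
# Hypothetical reference swatches for one reagent pad (e.g. glucose),
# as RGB tuples; real values would come from the printed doughnut chart.
GLUCOSE_SWATCHES = {
    "negative": (120, 200, 235),
    "trace": (150, 210, 190),
    "positive_1+": (170, 190, 120),
    "positive_2+": (140, 120, 60),
}

def classify_pad(pad_rgb, swatches):
    """Assign the pad colour to the nearest reference swatch by squared
    Euclidean distance in RGB space (a simple stand-in for comparison
    across multiple colour spaces)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(swatches, key=lambda name: sq_dist(pad_rgb, swatches[name]))

result = classify_pad((148, 208, 193), GLUCOSE_SWATCHES)  # → "trace"
```

In a real implementation the distance would typically be computed in a perceptually uniform space (e.g. CIELAB) rather than raw RGB, which is one reason the paper evaluates several colour spaces.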
The separation of circulating tumor cells (CTCs) from peripheral blood is an important problem that has been highlighted because of their high clinical potential. However, techniques that depend solely on tumor-specific surface molecules or on the larger size of CTCs are limited by tumor heterogeneity. Here, we present a slanted weir microfluidic device that exploits both the size and the deformability of CTCs to separate them from unprocessed whole blood. Tested with a highly invasive breast cancer cell line, our device achieved a 97% separation efficiency, along with an 8-log depletion of erythrocytes and a 5.6-log depletion of leukocytes. We also developed an image analysis tool that characterizes the varied morphologies and differing deformability of the separated cells. From these results, we believe our system holds high potential for liquid biopsy, aiding future cancer research.
A lane detection system using around view monitoring (AVM) images is presented in this paper. Many lane detection approaches have been proposed to support safe driving. However, previous approaches cannot detect lanes stably in low-visibility conditions such as fog or rain because they rely on a frontal camera. The proposed lane detection system instead uses the ego-vehicle's surrounding road information to overcome this problem. The proposed method consists of two stages: generation of AVM images from four fisheye cameras, and lane detection on the AVM images. To generate AVM images, we use four fisheye cameras mounted on the sides, front, and rear of the vehicle. Top-view images covering the area surrounding the vehicle are generated from the four fisheye images by calibrating each camera and estimating their relative poses. The lane detection procedure consists of detecting and grouping lane responses, fitting the lane responses with a linear model, and tracking lanes with a Kalman filter to smooth the estimates. Experimental results on full lanes and dashed lanes show that the proposed method achieves detection accuracies of 98.78% and 90.88%, respectively, with a processing speed of 1 ms per frame on a desktop computer.
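The Kalman smoothing step at the end of the pipeline can be sketched as follows, assuming each lane is parameterised by the offset and slope of its linear model. The noise matrices below are illustrative placeholders, not the paper's tuned parameters.

```python
import numpy as np

F = np.eye(2)          # state transition: lane parameters change slowly
H = np.eye(2)          # we observe offset and slope directly
Q = np.eye(2) * 1e-3   # process noise (assumed)
R = np.eye(2) * 1e-1   # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a lane state x = [offset, slope],
    given a new per-frame line fit z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction with the new measurement
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [np.array([1.0, 0.1]), np.array([1.2, 0.12]), np.array([5.0, 0.5])]:
    x, P = kalman_step(x, P, z)
# The jump to 5.0 in the last frame is damped rather than followed outright,
# which is the smoothing behaviour the tracking stage relies on.
```

A production tracker would also handle lane birth/death and gating of outlier fits; this sketch shows only the core predict/update recursion.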
Camera-based blind-spot detection systems address the shortcomings of radar-based systems in accurately detecting the position of a vehicle. However, as with many camera-based applications, detection performance is insufficient in low-illumination environments such as at night. This problem can be alleviated by augmenting the training data with nighttime images, but acquiring and annotating the additional images is cumbersome. Therefore, we propose a framework that converts daytime images into synthetic nighttime images using a generative adversarial network and augments the training data of the vehicle detector with the synthetic images. A public dataset comprising different viewpoints of target images was used to easily obtain the images required for training the generative adversarial network. Experiments on a real nighttime dataset demonstrate that the proposed framework improved the detection performance considerably in comparison with using daytime images only.
Index Terms—Data augmentation, domain adaptation, generative adversarial networks, blind-spot detection.
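The augmentation framework described above can be sketched as follows. The `day_to_night` function is a hypothetical placeholder for the trained GAN generator; here it simply darkens the image so the sketch is runnable. The key design point is that existing annotations are reused, since day-to-night translation does not move the vehicles in the image.

```python
import numpy as np

def day_to_night(image):
    """Placeholder for the trained GAN generator: a real system would
    pass the image through a day-to-night translation network. Here we
    only darken the pixels to keep the sketch self-contained."""
    return (image * 0.3).astype(image.dtype)

def augment_training_set(day_images):
    """Mix the original daytime images with synthetic nighttime versions,
    reusing the existing bounding-box annotations for both."""
    synthetic = [day_to_night(img) for img in day_images]
    return day_images + synthetic

# Two dummy 4x4 RGB daytime frames stand in for the real dataset.
days = [np.full((4, 4, 3), 200, dtype=np.uint8) for _ in range(2)]
augmented = augment_training_set(days)
# The training set doubles in size with no extra annotation effort.
```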