The introduction of RGB-Depth (RGB-D) sensors into assistance for visually impaired people (VIP) has stirred great interest among researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field of view and sparse depth maps at long range, which hampers broader and longer-range traversability awareness. This paper proposes an effective approach to expand traversable area detection based on an RGB-D sensor, the Intel RealSense R200, which works in both indoor and outdoor environments. The RealSense depth image is enhanced with large-scale IR image matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded region growing algorithm, combining the depth and RGB images, then enlarges the preliminary traversable area considerably. This is critical not only for avoiding nearby obstacles, but also for enabling better path planning during navigation. The proposed approach has been tested in a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the presented approach proved useful and reliable in a field test with eight visually impaired volunteers.
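As a hedged illustration of the RANSAC segmentation stage named above, the following is a minimal sketch of plane fitting on a point cloud back-projected from a depth image; the function name, iteration count, and inlier threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ransac_ground_plane(points, iters=200, dist_thresh=0.03, seed=0):
    """Fit a dominant plane to an Nx3 point cloud with RANSAC (sketch).

    points      : Nx3 array of 3-D points back-projected from a depth image.
    dist_thresh : inlier distance to the candidate plane, in metres (assumed).
    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        # Sample three distinct points and fit an exact plane through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within dist_thresh of the candidate plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```

The abstract also mentions surface normal estimation; in such a pipeline the plane inliers would typically be filtered further by rejecting pixels whose locally estimated normal deviates from the fitted plane normal.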
Navigational assistance aims to help visually impaired people move through the environment safely and independently. This task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have greatly improved the mobility of visually impaired people. However, running all of these detectors jointly increases latency and strains computational resources. In this paper, we propose leveraging pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy on par with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
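As a hedged sketch of the unification idea, the snippet below shows how a single per-pixel class map could be reduced to every navigation cue at once, so no additional detector runs on the frame; the class-id groupings are hypothetical, not the paper's label set.

```python
import numpy as np

# Hypothetical label ids for a navigation-oriented label set.
TRAVERSABLE = [0, 1]        # e.g. road, sidewalk
TERRAIN_HAZARDS = [2, 3]    # e.g. stairs, water
DYNAMIC_OBSTACLES = [4, 5]  # e.g. pedestrian, vehicle

def navigation_masks(class_map):
    """Derive all navigation cues from one semantic segmentation output.

    class_map : HxW integer array, the argmax over per-pixel class logits.
    Returns boolean masks that downstream assistive modules can share,
    which is what removes the need to run separate detectors per task.
    """
    return {
        "traversable": np.isin(class_map, TRAVERSABLE),
        "terrain_hazard": np.isin(class_map, TERRAIN_HAZARDS),
        "dynamic_obstacle": np.isin(class_map, DYNAMIC_OBSTACLES),
    }
```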
The use of RGB-Depth (RGB-D) sensors for assisting visually impaired people (VIP) has been widely reported, as they offer portability, functional diversity and cost-effectiveness. However, color and depth cues alone are weak at revealing water areas, so traversability awareness built on them offers no precaution against stepping into water hazards; polarization cues can fill this gap. In this paper, a polarized RGB-Depth (pRGB-D) framework is proposed to detect traversable areas and water hazards simultaneously from polarization-color-depth-attitude information, enhancing safety during navigation. The approach has been tested on a pRGB-D dataset built for tuning parameters and evaluating performance. Moreover, the approach has been integrated into a wearable prototype that generates stereo sound feedback to guide VIP along the prioritized direction, away from obstacles and water hazards. Furthermore, a preliminary study with ten blindfolded participants suggests its effectiveness and reliability.
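As a hedged illustration of how polarization can expose water: water surfaces produce strongly polarized specular reflections, so an unusually high degree of linear polarization (DoLP) on otherwise traversable ground suggests a water area. The sketch below computes DoLP from four polarizer-angle images via the linear Stokes parameters; the threshold and variable names are assumptions, and the paper's actual pipeline additionally fuses depth and attitude cues.

```python
import numpy as np

def water_candidates(i0, i45, i90, i135, ground_mask, dolp_thresh=0.3):
    """Flag likely water pixels from four polarizer-angle intensity images.

    i0..i135    : HxW float arrays captured behind 0/45/90/135 deg polarizers.
    ground_mask : HxW bool array of pixels already labelled traversable
                  (e.g. by ground-plane segmentation on the depth image).
    Returns an HxW bool mask of highly polarized ground pixels, i.e.
    candidate water areas. The 0.3 threshold is an assumed value.
    """
    # Linear Stokes parameters from the four polarizer measurements.
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
    # Specular reflection off water strongly polarizes the light.
    return ground_mask & (dolp > dolp_thresh)
```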
It is very difficult for visually impaired people to perceive and avoid obstacles at a distance. To address this problem, a unified framework for multi-target detection, recognition and fusion is proposed, based on a sensor fusion system comprising a low-power millimeter-wave (MMW) radar and an RGB-D sensor. In this paper, Mask R-CNN and an SSD network are used to detect and recognize objects in the color images. Obstacle depth information is obtained from the depth images using the MeanShift algorithm. The positions and velocities of multiple targets are measured by the MMW radar based on the frequency-modulated continuous-wave (FMCW) principle. Particle filter-based data fusion obtains more accurate state estimates and richer information than any single sensor by combining the detection results from the color images, depth images and radar data. The experimental results show that the data fusion enriches the detection results. Meanwhile, the effective detection range is expanded compared with using only the RGB-D sensor. Moreover, the fusion results maintain high accuracy and stability under diverse range and illumination conditions. As a wearable system, the sensor fusion system is versatile, portable and cost-effective.
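As a hedged sketch of the fusion stage, the minimal particle filter below tracks one target's range and radial velocity and fuses a radar range measurement, a radar velocity measurement, and a camera depth measurement in a single update cycle; the state layout, noise levels, and measurement values are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
# State per particle: [range (m), radial velocity (m/s)].
particles = np.column_stack([rng.uniform(0.5, 10.0, N),
                             rng.uniform(-2.0, 2.0, N)])
weights = np.full(N, 1.0 / N)

def predict(particles, dt=0.1, q_pos=0.05, q_vel=0.1):
    """Constant-velocity motion model with process noise (assumed values)."""
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, q_pos, len(particles))
    particles[:, 1] += rng.normal(0, q_vel, len(particles))

def update(weights, particles, z, h, sigma):
    """Reweight particles by the Gaussian likelihood of measurement z."""
    weights *= np.exp(-0.5 * ((z - h(particles)) / sigma) ** 2)
    weights += 1e-300                      # guard against total degeneracy
    weights /= weights.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One fusion cycle: the radar observes range and velocity, the depth
# camera observes range only, each with its own (assumed) noise level.
predict(particles)
update(weights, particles, z=4.2, h=lambda p: p[:, 0], sigma=0.15)  # radar range
update(weights, particles, z=0.8, h=lambda p: p[:, 1], sigma=0.10)  # radar velocity
update(weights, particles, z=4.1, h=lambda p: p[:, 0], sigma=0.05)  # camera depth
particles, weights = resample(particles, weights)
print("fused range estimate:", (particles[:, 0] * weights).sum())
```

Fusing all three likelihoods before resampling is what lets such a filter keep radar-only targets beyond the depth camera's range while tightening estimates where both sensors see the target.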
Purpose: In this multicenter phase 3 trial, the efficacy and safety of 60 Gy and 50 Gy doses delivered with modern radiotherapy technology for definitive concurrent chemoradiotherapy (CCRT) in patients with inoperable esophageal squamous cell carcinoma (ESCC) were evaluated. Patients and Methods: Patients with pathologically confirmed stage IIA‒IVA ESCC were randomized 1:1 to receive conventionally fractionated 60 Gy or 50 Gy to the tumor and regional lymph nodes. Concurrent weekly chemotherapy (docetaxel 25 mg/m2; cisplatin 25 mg/m2) and two cycles of consolidation chemotherapy (docetaxel 70 mg/m2; cisplatin 25 mg/m2, days 1‒3) were administered. Results: A total of 319 patients were analyzed for survival, with a median follow-up of 34.0 months. The 1- and 3-year locoregional progression-free survival rates were 75.6% and 49.5% for the 60 Gy group versus 72.1% and 48.4% for the 50 Gy group [HR, 1.00; 95% confidence interval (CI), 0.75‒1.35; P = 0.98]. The corresponding overall survival rates were 83.7% and 53.1% versus 84.8% and 52.7% (HR, 0.99; 95% CI, 0.73‒1.35; P = 0.96), and the progression-free survival rates were 71.2% and 46.4% versus 65.2% and 46.1% (HR, 0.97; 95% CI, 0.73‒1.30; P = 0.86). The incidence of grade 3 or higher radiation pneumonitis was higher in the 60 Gy group than in the 50 Gy group (nominal P = 0.03). Conclusions: The 60 Gy arm had similar survival endpoints but a higher rate of severe pneumonitis than the 50 Gy arm. Fifty Gy should be considered the recommended dose in CCRT for ESCC.
Crop disease diagnosis is an essential step in crop disease treatment and an active topic in agricultural research. However, in agricultural production, identifying only coarse-grained crop diseases is insufficient, because treatment methods differ across grades of even the same disease. Inappropriate treatments are not only ineffective against the disease but also harm crop yield and food safety. We combine IoT technology with deep learning to build an IoT system for fine-grained crop disease identification. This system automatically detects crop diseases and sends the diagnostic results to farmers. Within the system, we propose a multidimensional feature compensation residual neural network (MDFC-ResNet) model for fine-grained disease identification. MDFC-ResNet recognizes along three dimensions, namely species, coarse-grained disease and fine-grained disease, and adds a compensation layer that uses a compensation algorithm to fuse the multidimensional recognition results. Experiments show that MDFC-ResNet achieves better recognition performance than other popular deep learning models and is more instructive for actual agricultural production.
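The abstract does not spell out the compensation algorithm, so the following is only a plausible hedged sketch of multidimensional fusion: each fine-grained score is compensated by the scores of its species and coarse-disease ancestors, so a confident coarser head can correct a weak fine-grained one. The class hierarchy and fusion weights are invented for illustration.

```python
import numpy as np

# Hypothetical hierarchy: fine-grained class -> (species, coarse disease).
FINE_TO_COARSE = {0: (0, 0), 1: (0, 0), 2: (0, 1), 3: (1, 2)}

def compensate(species_probs, coarse_probs, fine_probs, w=(0.2, 0.3, 0.5)):
    """Fuse three heads' softmax outputs into compensated fine-grained scores."""
    ws, wc, wf = w                      # assumed fusion weights
    fused = np.empty_like(fine_probs)
    for fine, (species, coarse) in FINE_TO_COARSE.items():
        fused[fine] = (ws * species_probs[species]
                       + wc * coarse_probs[coarse]
                       + wf * fine_probs[fine])
    return fused / fused.sum()          # renormalize to a distribution

# Example: the fine head is uncertain, the coarser heads tip the decision.
species = np.array([0.7, 0.3])
coarse = np.array([0.6, 0.3, 0.1])
fine = np.array([0.30, 0.30, 0.25, 0.15])
print(compensate(species, coarse, fine).round(3))
```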