Objective: UEscope is a new angulated videolaryngoscope (VL). This review aims to describe the features of the UEscope and to summarize the clinical evidence on the efficacy and safety of this video device in adult tracheal intubation, as well as its role in airway management teaching.

Data Sources: Wan Fang Data, CNKI, PubMed, Embase, the Cochrane Library, and Google Scholar were searched for relevant English- and Chinese-language articles published up to January 15, 2017, using the following keywords: "HC video laryngoscope", "UE videolaryngoscope", "video laryngoscope", and "videolaryngoscopy".

Study Selection: Human case reports, case series, observational studies, and randomized controlled clinical trials were included. The results of these studies and their reference lists were cross-referenced to identify common themes.

Results: The UEscope features a low-profile portable design, an intermediate blade curvature, an all-angle adjustable monitor, effective anti-fog mechanisms, and a built-in video recording function. Over the past five years, a number of clinical studies have assessed the application and role of the UEscope in airway management and education. Compared with direct laryngoscopy, the UEscope improves laryngeal visualization, decreases intubation time (IT), and increases the intubation success rate in adult patients with both normal and difficult airways. These findings differ somewhat from previous results for other angulated VLs, which provide an improved laryngeal view but no conclusive benefit in IT or intubation success rate. Furthermore, the UEscope has been used extensively for intubation teaching and has shown a number of advantages.

Conclusions: The UEscope can be used as a primary intubation tool and may provide more benefits than other VLs in patients with normal and difficult airways. However, larger studies are still needed to address open questions about the clinical performance of this new VL.
Compared with ordinary single-exposure images, multi-exposure fusion (MEF) images are prone to color imbalance, loss of detail, and abnormal exposure arising from the process of combining multiple images with different exposure levels. In this paper, we propose a human-visual-perception-based quality assessment method for MEF images that considers the related perceptual features (i.e., color, dense scale-invariant feature transform (DSIFT), and exposure) to measure quality degradation accurately, in keeping with the symmetry principle of human vision. First, the L1 norm of the chrominance components between the fused image and a designed pseudo-image with the most severe color attenuation is calculated to measure global color degradation, and color saturation similarity is added to eliminate the influence of color over-saturation. Second, a set of distorted images at different exposure levels carrying the strong edge information of the fused image is constructed through structure transfer; DSIFT similarity and DSIFT saturation are then computed to measure local detail loss and enhancement, respectively. Third, a Gaussian exposure function is used to detect over-exposed and under-exposed areas, and the above perceptual features are aggregated with a random forest to predict the final quality of the fused image. Experimental results on a public MEF subjective assessment database show the superiority of the proposed method over state-of-the-art image quality assessment models.
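Two of the building blocks above can be sketched compactly. The following is a minimal illustration, not the paper's implementation: the Gaussian well-exposedness parameters (`mu=0.5`, `sigma=0.2`) are common defaults borrowed from Mertens-style exposure fusion, and the chrominance-distance helper assumes images are already converted to a luma/chroma space such as YCbCr with values in [0, 1].

```python
import numpy as np

def gaussian_exposure_weight(luma, mu=0.5, sigma=0.2):
    """Gaussian exposure function: pixels near mid-gray (luma ~ mu)
    score close to 1; over- or under-exposed pixels score near 0.
    mu and sigma are assumed defaults, not values from the paper."""
    return np.exp(-((luma - mu) ** 2) / (2.0 * sigma ** 2))

def chrominance_l1(fused_cbcr, ref_cbcr):
    """Global color degradation as the mean L1 distance between the
    chrominance (Cb, Cr) planes of the fused image and a reference
    (e.g., the pseudo-image with the most severe color attenuation)."""
    return float(np.mean(np.abs(fused_cbcr - ref_cbcr)))

# Toy example: a 2x2 luma plane with under-, mid-, and over-exposed pixels.
luma = np.array([[0.05, 0.50],
                 [0.95, 0.50]])
weights = gaussian_exposure_weight(luma)  # mid-gray pixels get weight 1.0
```

Per-pixel weights like these localize badly exposed regions; in the full method they would be pooled into scalar features and fed, together with the color and DSIFT features, to the random-forest regressor.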
To save shoppers' time and reduce the labor cost of supermarket operations, this paper proposes a supermarket service robot based on deep convolutional neural networks (DCNNs). First, the hardware and software architecture of the robot is designed according to the supermarket shopping environment and its needs. The robot uses Robot Operating System (ROS) middleware on a Raspberry Pi as its control kernel to communicate wirelessly with customers and staff. To move flexibly, it tracks on omnidirectional wheels symmetrically installed under the chassis. An infrared detection module detects whether commodities are present in the warehouse or on the shelves, allowing the robot to grasp and place commodities accurately. Second, the recently developed single shot multibox detector (SSD), a typical DCNN model, is employed to detect and identify objects. Finally, to verify the robot's performance, a simulated supermarket environment is built for experiments. Experimental results show that the designed supermarket service robot can automatically complete commodity procurement and replenishment and achieves promising performance on commodity detection and recognition tasks.