Detecting various types of cells in and around the tumor matrix holds special significance in characterizing the tumor microenvironment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up pathologists' time for higher-value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset contains over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event of the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw wide participation from across the world, and the top methods were able to match inter-human concordance on the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences among the methods developed by the participants. We have released the MoNuSAC2020 dataset to the public.
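The abstract does not specify the challenge metric, but instance-segmentation agreement between a predicted and an annotated nucleus is commonly quantified with intersection-over-union (IoU). A minimal sketch, with masks represented as sets of pixel coordinates (an illustrative representation, not the paper's pipeline):

```python
# Hedged sketch: IoU between a predicted and an annotated nucleus mask,
# each given as a set of (row, col) pixel coordinates. IoU is a common
# building block of segmentation metrics; the actual challenge metric
# is not named in this abstract.
def iou(pred: set, truth: set) -> float:
    """Return |pred & truth| / |pred | truth| (0.0 if both masks are empty)."""
    union = pred | truth
    if not union:
        return 0.0
    return len(pred & truth) / len(union)

pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}   # predicted nucleus pixels
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}   # annotated nucleus pixels
print(iou(pred, truth))  # 3 shared pixels over 5 total -> 0.6
```

An IoU threshold (often 0.5) is what typically decides whether a predicted nucleus counts as a true-positive match to an annotation.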
Bone scintigraphy is an effective method to diagnose bone diseases such as bone tumors. In scintigraphic images, bone abnormalities are widely scattered over the whole body. Conventionally, radiologists visually check the whole-body images and find the distributed abnormalities based on their expertise. This manual process is time-consuming, and it is not unusual to miss some abnormalities. In this paper, a computer-aided diagnosis (CAD) system is proposed to assist radiologists in the diagnosis of bone scintigraphy. The system provides warning marks and abnormality scores at certain locations in the images to direct radiologists' attention toward these locations. A fuzzy system called the characteristic-point-based fuzzy inference system (CPFIS) is employed to implement the diagnosis system, and three minimizations are used to systematically train the CPFIS. Asymmetry and brightness are chosen as the two inputs to the CPFIS according to radiologists' knowledge. The resulting CAD system has a small rule base, so the fuzzy rules can not only be easily understood by radiologists but also be matched to and compared with their expert knowledge. The prototype CAD system was tested on 82 abnormal images and 27 normal images. We employed the free-response receiver operating characteristic (FROC) method, with the mean number of false positives (FPs) and the sensitivity as performance indexes, to evaluate the proposed system. The sensitivity is 91.5% (227 of 248), and the mean number of FPs is 37.3 per image. The high sensitivity and moderate number of FP marks per image show that the proposed method can provide effective second-reader information to radiologists in the diagnosis of bone scintigraphy.
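The reported figures follow the standard FROC bookkeeping: sensitivity is the fraction of annotated lesions detected, and FP marks are averaged per image. A minimal sketch of that arithmetic, using the lesion counts from the abstract (the per-image FP totals are not given, so only sensitivity is reproduced):

```python
# Hedged sketch of FROC-style bookkeeping: lesion-level sensitivity and
# mean false-positive marks per image. The 227/248 lesion counts are from
# the abstract; this is illustrative arithmetic, not the paper's code.
def sensitivity(detected_lesions: int, total_lesions: int) -> float:
    """Fraction of annotated lesions that received a warning mark."""
    return detected_lesions / total_lesions

def mean_fps_per_image(total_fp_marks: int, num_images: int) -> float:
    """Average number of false-positive marks per image."""
    return total_fp_marks / num_images

print(round(100 * sensitivity(227, 248), 1))  # -> 91.5, as reported
```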
In this study, a millimeter-wave (MMW) radar and an onboard camera are used to develop a sensor fusion algorithm for a forward collision warning system. This study proposes integrating an MMW radar and a camera to compensate for the deficiencies of relying on a single sensor and to improve frontal object detection rates. Density-based spatial clustering of applications with noise (DBSCAN) and particle filter algorithms are used in the radar-based object detection system to remove non-object noise and track the target object. Meanwhile, a two-stage vision recognition system detects and recognizes the objects in front of the vehicle; the detected objects include pedestrians, motorcycles, and cars. For spatial alignment, a radial basis function neural network learns the conversion relationship between the distance information of the MMW radar and the coordinate information in the image, and a neural network is then used for object matching. The sensor with the higher confidence index is selected as the system output. Finally, three scenario conditions (daytime, nighttime, and rainy day) were designed to test the performance of the proposed method. The detection rate and false alarm rate of the proposed system were approximately 90.5% and 0.6%, respectively.
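The radar-side noise removal mentioned above is based on DBSCAN: returns with too few neighbors within a radius eps are labeled noise, while dense groups form clusters. A minimal stdlib-only sketch on synthetic 2-D radar points (the paper's actual parameters and radar pipeline are not given in the abstract):

```python
# Hedged sketch: minimal DBSCAN for filtering noise out of 2-D radar returns.
# Points not reachable from any core point (a point with >= min_pts neighbors
# within eps) end up labeled -1, i.e. non-object noise.
from math import dist

def dbscan(points, eps=1.0, min_pts=3):
    labels = [None] * len(points)  # None = unvisited, -1 = noise, else cluster id
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1                  # provisionally noise
            continue
        cluster += 1                        # start a new cluster from core point i
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:                        # expand the cluster outward
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster         # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            neigh_j = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(neigh_j) >= min_pts:     # j is itself a core point: keep expanding
                queue.extend(neigh_j)
    return labels

# A tight group of returns (one vehicle) plus a single isolated spurious return.
pts = [(0, 0), (0.3, 0.2), (0.5, 0.1), (0.2, 0.4), (9, 9)]
print(dbscan(pts))  # the lone far point gets label -1
```

Dropping every point labeled -1 before tracking is what keeps the particle filter from being fed spurious radar returns.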
Many available intelligent monitoring devices and applications only perform passive monitoring of the environment and notify the user when events occur, lacking the integration of viable technologies. In this work, we propose a multipurpose monitoring device that integrates cloud computing, the Internet of Things (IoT), and mobile applications to implement an unmanned mobile monitoring system for real-time monitoring and manipulation. The proposed system uses a self-balancing car as a stabilizing vehicle and a micro video camera that records images for AI facial tracking and recognition, combined with an infrared thermal imager to obtain forehead temperature. The recognized results are sent to a mobile device via a live video stream. The video stream can be recorded and images can be captured through a mobile application and uploaded to cloud storage for later playback and examination. If the streamed video lacks sufficient illumination and cannot be viewed clearly, the system can be switched to infrared thermography to capture thermal images for monitoring the environment. The system can also be used as a mobile body temperature detector for detecting abnormal body temperatures at border controls for quarantine inspections during outbreaks of contagious diseases. Furthermore, the system is equipped with four gas sensors to detect up to 12 different kinds of gaseous substances or particulate matter, so that the user can be aware of the environmental quality around the self-balancing car. The outcome of this work can be applied to public health monitoring, dynamic crowd monitoring at airports and harbors, and the monitoring of hazardous or disaster-affected locations for search and rescue, reducing the costs associated with manual monitoring and the risk of exposure to hazards.
Keywords: positioning, smart trash can robot, intelligent path planning

People have become busier in the modern world with both work and housework. In addition to the various types of robotic cleaners, it would be quite helpful to have a smart robotic trash collection and dumping system that provides on-call service, so that a user does not need to physically get up to put trash into a trash can. When the bin reaches its maximum capacity, it also dumps the trash automatically without the user's instruction. Available smart trash cans usually focus on determining whether the trash is recyclable, and most cannot move autonomously. In this research, we use fingerprint mapping for wireless indoor positioning toward the implementation of an autonomous vehicle with a mounted trash can. The user can make an indoor call to the trash-can-mounted vehicle via a mobile application in an IoT and cloud computing environment. The vehicle positions itself in front of the user for trash collection through automatic obstacle-avoidance navigation with smart path planning from deep learning. The system also monitors the amount of accumulated trash and dumps it at a fixed location before returning to the start point. This research on a smart trash-collecting robot can provide significant assistance to people who are busy, those with impaired or limited mobility, and the elderly.
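Fingerprint-based indoor positioning works in two phases: offline, signal-strength (RSSI) vectors are recorded at known locations to build a radio map; online, a live measurement is matched to the closest stored fingerprint. A minimal nearest-neighbor sketch (the room names, access points, and dBm values are illustrative; the paper's actual radio map and matcher are not detailed in the abstract):

```python
# Hedged sketch: RSSI fingerprint positioning by nearest-neighbor matching.
# Offline phase: one fingerprint (dBm per access point) per surveyed spot.
from math import dist

radio_map = {
    ("kitchen",     0.0, 0.0): (-40, -70, -80),
    ("living_room", 5.0, 0.0): (-65, -45, -75),
    ("bedroom",     5.0, 4.0): (-80, -60, -42),
}

def locate(rssi):
    """Online phase: return the (label, x, y) entry whose stored
    fingerprint is closest to the live RSSI vector in Euclidean distance."""
    return min(radio_map, key=lambda spot: dist(radio_map[spot], rssi))

room, x, y = locate((-42, -68, -79))   # live reading resembling the kitchen entry
print(room)  # kitchen
```

In practice a k-nearest-neighbor average of several fingerprints is often used instead of the single closest match, which smooths out RSSI fluctuation.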