Digital pathology platforms with integrated artificial intelligence have the potential to increase the efficiency of the nonclinical pathologist’s workflow by screening and prioritizing slides with lesions and highlighting areas with specific lesions for review. Herein, we describe a comparison of various single- and multi-magnification convolutional neural network (CNN) architectures for accelerating the detection of lesions in tissues. Different models were evaluated to define performance characteristics and efficiency in accurately identifying lesions in 5 key rat organs (liver, kidney, heart, lung, and brain). Cohorts for liver and kidney were collected from the TG-GATEs open-source repository, and those for heart, lung, and brain from internally selected R&D studies. Annotations were performed, and models were trained on each of the available lesion classes in each organ. Various class-consolidation approaches were evaluated, from generalized lesion detection to individual lesion detection. The relationship between the number of annotated lesions and the precision/accuracy of model performance is elucidated. The utility of multi-magnification CNN implementations in specific tissue subtypes is also demonstrated. The use of these CNN-based models offers users the ability to apply generalized lesion detection to whole-slide images, with the potential to generate novel quantitative data that would not be possible with conventional image analysis techniques.
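The core idea of a multi-magnification architecture is to pair a high-magnification patch with a co-centered, lower-magnification patch that provides wider tissue context. The abstract does not specify the patch-extraction details, so the following is only a minimal sketch of that pairing step; the function names (`multi_mag_patches`, `avg_pool`, `crop`) and the patch size/downsampling factor are illustrative assumptions, not the authors' implementation:

```python
def avg_pool(img, factor):
    """Downsample a 2D list by average pooling (simulates a lower magnification)."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h - factor + 1, factor):
        row = []
        for c in range(0, w - factor + 1, factor):
            block = [img[r + i][c + j] for i in range(factor) for j in range(factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out

def crop(img, cy, cx, size):
    """Extract a size x size patch centered at (cy, cx)."""
    half = size // 2
    return [row[cx - half:cx + half] for row in img[cy - half:cy + half]]

def multi_mag_patches(slide, cy, cx, size=4, factor=2):
    """Return a (detail, context) pair: same center, two magnifications.
    Each branch of a multi-magnification CNN would consume one of these."""
    detail = crop(slide, cy, cx, size)
    context = crop(avg_pool(slide, factor), cy // factor, cx // factor, size)
    return detail, context
```

In a real pipeline the two patches would feed two CNN branches whose features are fused before classification; here the sketch only shows how the co-centered inputs are derived.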
Interactive image segmentation is extensively used in photo editing when the aim is to separate a foreground object from its background so that it is available for various applications. The goal of the interaction is to obtain an accurate segmentation of the object with the minimal amount of human effort. To improve the usability and user experience of interactive image segmentation, we present three interaction methods and study the effect of each using both objective and subjective metrics, such as accuracy, amount of effort needed, cognitive load, and preference of interaction method as voted by users. The novelty of this paper is twofold. First, the evaluation of interaction methods is carried out with objective metrics such as object and boundary accuracies in tandem with subjective metrics, to cross-check whether they support each other. Second, we analyze electroencephalography (EEG) data obtained from subjects performing the segmentation as an indicator of brain activity. The experimental results potentially give valuable cues for the development of easy-to-use yet efficient interaction methods for image segmentation.
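Object accuracy for segmentation is commonly reported as intersection-over-union (IoU) between the predicted and ground-truth masks. The abstract does not state which formula the authors used, so this is only a plausible sketch of such a metric over flattened binary masks:

```python
def iou(pred, gt):
    """Intersection-over-union of two flattened binary masks (1 = foreground).
    Returns 1.0 when both masks are empty, since they agree perfectly."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union if union else 1.0
```

Boundary accuracy would additionally restrict the comparison to a narrow band around the ground-truth contour; the band extraction is omitted here.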
In Tg-rasH2 carcinogenicity mouse models, a positive control group is treated with a carcinogen such as urethane or N-nitroso-N-methylurea to test study validity based on the presence of the expected proliferative lesions in the transgenic mice. We hypothesized that artificial intelligence–based deep learning (DL) could provide decision support for the toxicologic pathologist by screening for the proliferative changes, verifying the expected pattern for the positive control groups. Whole slide images (WSIs) of the lungs, thymus, and stomach from positive control groups were used for supervised training of a convolutional neural network (CNN). A single pathologist annotated WSIs of normal and abnormal tissue regions for training the CNN-based supervised classifier using INHAND criteria. The algorithm was evaluated using a subset of tissue regions that were not used for training, and then additional tissues were evaluated blindly by 2 independent pathologists. A binary output (proliferative classes present or not) from the pathologists was compared to that of the CNN classifier. The CNN model grouped proliferative lesion–positive and –negative animals with high concordance with the pathologists. This process simulated a workflow for review of these studies, whereby a DL algorithm could provide decision support for the pathologists in a nonclinical study.
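Reducing per-region classifier scores to a binary slide-level call, and then comparing those calls to the pathologists' calls, is a small but central step in this workflow. The abstract does not give the decision rule, so the sketch below assumes a simple "any region above threshold" rule; the names `slide_call` and `concordance`, and the 0.5 threshold, are hypothetical:

```python
def slide_call(region_probs, threshold=0.5):
    """Binary slide-level output: proliferative if any tile score
    (hypothetical CNN probabilities) reaches the threshold."""
    return any(p >= threshold for p in region_probs)

def concordance(calls_a, calls_b):
    """Fraction of animals on which two raters (e.g. CNN vs. pathologist)
    give the same binary call."""
    agree = sum(1 for a, b in zip(calls_a, calls_b) if a == b)
    return agree / len(calls_a)
```

In practice the threshold (and possibly a minimum number of positive tiles) would be tuned on the held-out regions mentioned in the abstract.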
Rehabilitation from cardiovascular disease (CVD) usually requires lifestyle changes, especially an increase in exercise and physical activity. However, uptake of and adherence to exercise are low for community-based programmes. We propose a mobile application that allows users to choose the type of exercise and complete it at a convenient time in the comfort of their own home. Grounded in a behaviour change framework, the application provides feedback and encouragement to continue exercising and to improve on previous results. The application also utilizes wearable wireless technologies in order to provide highly personalized feedback. The application can accurately detect if a specific exercise is being done, and count the associated number of repetitions, utilizing accelerometer or gyroscope signals. Machine learning models are employed to recognize individual local muscular endurance (LME) exercises, achieving overall accuracy of more than 98%. This technology allows the provision of near real-time personalized feedback which mimics the feedback that the user might expect from an instructor. This has been shown to motivate users to continue the recovery process.
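Repetition counting from accelerometer signals is typically done by tracking the acceleration magnitude and counting peaks. The abstract does not describe the authors' counting method, so the following is only a minimal sketch of one common approach; the function names, threshold, and minimum peak spacing are illustrative assumptions:

```python
import math

def magnitude(ax, ay, az):
    """Orientation-independent acceleration magnitude per sample."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]

def count_reps(signal, threshold, min_gap):
    """Count repetitions as local maxima above a threshold,
    at least min_gap samples apart (to reject jitter)."""
    reps, last = 0, -min_gap
    for i in range(1, len(signal) - 1):
        is_peak = signal[i - 1] < signal[i] >= signal[i + 1]
        if is_peak and signal[i] > threshold and i - last >= min_gap:
            reps += 1
            last = i
    return reps
```

A deployed system would first smooth the signal (e.g. a moving average) and pick the threshold per exercise; recognizing *which* LME exercise is being performed is a separate classification step not shown here.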
In this paper, we investigate the parameters underpinning our previously presented system for detecting unusual events in surveillance applications [1]. The system identifies anomalous events using an unsupervised data-driven approach. During a training period, typical activities within a surveilled environment are modeled using multi-modal sensor readings. Significant deviations from the established model of regular activity can then be flagged as anomalous at run-time. Using this approach, the system can be deployed and automatically adapt for use in any environment without any manual adjustment. Experiments were carried out on two days of audio-visual data and evaluated against a manually annotated ground truth. We investigate sensor fusion and quantitatively evaluate the performance gains over single-modality models. We also investigate different formulations of our cluster-based model of usual scenes, as well as the impact of dynamic thresholding on identifying anomalous events. Experimental results are promising, even when modeling is performed using very simple audio and visual features.
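The run-time decision in a cluster-based anomaly detector reduces to: measure the distance from a new observation to the nearest cluster of "usual" activity, and flag it if that distance exceeds a threshold derived from recent data. The paper's exact cluster model and threshold formulation are not given in the abstract, so this is only a minimal sketch under the assumption of Euclidean distance to fixed centroids and a mean-plus-k-sigma dynamic threshold:

```python
import math

def nearest_centroid_dist(x, centroids):
    """Distance from observation x to the closest 'usual scene' cluster centre."""
    return min(math.dist(x, c) for c in centroids)

def dynamic_threshold(recent_dists, k=3.0):
    """Threshold adapts to recent conditions: mean + k standard deviations
    of the last few distances (assumed formulation, not the paper's)."""
    mean = sum(recent_dists) / len(recent_dists)
    var = sum((d - mean) ** 2 for d in recent_dists) / len(recent_dists)
    return mean + k * math.sqrt(var)

def is_anomalous(x, centroids, recent_dists, k=3.0):
    """Flag x when it is unusually far from every learned cluster."""
    d = nearest_centroid_dist(x, centroids)
    return d > dynamic_threshold(recent_dists, k), d
```

The centroids themselves would be learned during the training period (e.g. by clustering the multi-modal feature vectors), which is what lets the system adapt to a new environment without manual tuning.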
The percentage of false alarms caused by spiders in automated surveillance can range from 20% to 50%. False alarms increase the workload of surveillance personnel validating the alarms and the maintenance labor cost associated with regular cleaning of webs. We propose a novel, cost-effective method to detect false alarms triggered by spiders/webs in surveillance camera networks. This is accomplished by building a spider classifier intended to be a part of the surveillance video processing pipeline. The proposed method uses a feature descriptor obtained by early fusion of blur and texture. The approach is sufficiently efficient for real-time processing and yet comparable in performance with more computationally costly approaches such as SIFT with bag-of-visual-words aggregation. The proposed method can eliminate 98.5% of false alarms caused by spiders in a data set supplied by an industry partner, with a false positive rate of less than 1%.
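"Early fusion of blur and texture" means computing both cues from the raw image and concatenating them into a single feature vector before classification, rather than combining classifier outputs later. The abstract does not specify which blur and texture features are used, so the sketch below substitutes two common, cheap choices (variance of the Laplacian for blur; a grey-level histogram for texture) purely as an illustration of the fusion pattern:

```python
def laplacian_variance(img):
    """Blur cue: variance of 4-neighbour Laplacian responses over a 2D
    greyscale image (low variance suggests defocus, e.g. a nearby web)."""
    resp = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            resp.append(4 * img[r][c] - img[r - 1][c] - img[r + 1][c]
                        - img[r][c - 1] - img[r][c + 1])
    mean = sum(resp) / len(resp)
    return sum((v - mean) ** 2 for v in resp) / len(resp)

def intensity_histogram(img, bins=8):
    """Texture cue (stand-in): normalised grey-level histogram over 0-255."""
    hist = [0] * bins
    n = 0
    for row in img:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
            n += 1
    return [h / n for h in hist]

def blur_texture_descriptor(img):
    """Early fusion: concatenate both cues into one vector for the classifier."""
    return [laplacian_variance(img)] + intensity_histogram(img)
```

The fused descriptor would then be fed to an ordinary classifier (e.g. an SVM); the real-time advantage over SIFT plus bag-of-visual-words comes from these features being computable in a single pass over the frame.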
The third phase of the recovery from cardiovascular disease (CVD) is an exercise-based rehabilitation programme. However, adherence to an exercise regime is typically not maintained by the patient for a variety of reasons such as lack of time, financial constraints, etc. In order to facilitate patients in performing their exercises from the comfort of their home and at their own convenience, we have developed a mobile application, termed MedFit. It provides access to a tailored suite of exercises along with easy-to-understand guidance from audio and video instructions. Two types of wearable sensors are utilized to allow motivational feedback to be provided to the user for self-monitoring and to provide near real-time feedback. Fitbit, a commercially available activity and fitness tracker, is used to provide in-depth feedback for self-monitoring over longer periods of time (e.g. day, week, month), whereas the Shimmer wireless sensing platform provides the data for near real-time feedback on the quality of the exercises performed. MedFit is a simple and intuitive mobile application designed to provide the motivation and tools for patients to help ensure faster recovery from the trauma caused by CVD. In this paper we describe the MedFit application as a demo submission to the 2nd MMHealth Workshop at ACM MM 2017.