Cancer classification is a topic of major interest in medicine, since accurate and efficient diagnosis facilitates successful medical treatment. Previous studies have classified human tumors using large-scale RNA profiling and supervised Machine Learning (ML) algorithms to construct a molecular-based classification of carcinoma cells from breast, bladder, adenocarcinoma, colorectal, gastroesophageal, kidney, liver, lung, ovarian, pancreas, and prostate tumors. These datasets are collectively known as the 11_tumor database. Although this database has been used in several works in the ML field, no comparative study of different algorithms can be found in the literature. Meanwhile, advances in both hardware and software have fostered considerable improvements in the accuracy of ML-based solutions, notably Deep Learning (DL). In this study, we compare the most widely used classical ML and DL algorithms for classifying the tumors described in the 11_tumor database. Using k-fold cross-validation, we obtained tumor identification accuracies ranging from 90.6% (Logistic Regression) to 94.43% (Convolutional Neural Networks). We also show that a hyperparameter tuning process may or may not significantly improve an algorithm's accuracy. Our results demonstrate an efficient and accurate classification method based on gene expression (microarray data) and ML/DL algorithms, which facilitates tumor type prediction in a multi-cancer-type scenario.
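As a minimal sketch of the evaluation protocol described above, assuming scikit-learn: the snippet below runs stratified k-fold cross-validation for a Logistic Regression classifier on a synthetic stand-in for the 11_tumor expression matrix (the real data loading, the DL models, and the tuning step are not reproduced here; the CNN comparison would follow the same protocol with a 1-D convolutional model).

```python
# Hedged sketch: synthetic data stands in for the 11_tumor
# gene-expression matrix (samples x probes, 11 tumor classes).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in: 174 samples, 2,000 features, 11 classes.
X, y = make_classification(n_samples=174, n_features=2000,
                           n_informative=200, n_classes=11,
                           random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Logistic Regression accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```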
In an autonomous vehicle setting, we propose a method for estimating a semantic grid, i.e., a bird's-eye-view grid centered on the car's position and aligned with its driving direction, which contains high-level semantic information about the environment and its actors. Each grid cell carries a semantic label drawn from a diverse set of classes, for instance {Road, Vegetation, Building, Pedestrian, Car, ...}. We propose a hybrid approach that combines the advantages of two different methodologies: Deep Learning is used to perform semantic segmentation on monocular RGB images, trained with supervised learning on labeled ground-truth data, and these segmentations are combined with occupancy grids calculated from LIDAR data using a generative Bayesian particle filter. The fusion itself is carried out with a deep neural network, which learns to integrate geometric information from the LIDAR with semantic information from the RGB data. We tested our method on two datasets: the publicly available and widely used KITTI dataset, and our own dataset, collected with our own platform equipped with a LIDAR and various other sensors. We largely outperform baselines that calculate the semantic grid from either the RGB image alone or the LIDAR output alone, demonstrating the value of this hybrid approach.
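To make the fusion step concrete, here is a hedged sketch (PyTorch assumed) of a small convolutional network that maps per-cell occupancy statistics from the LIDAR filter plus projected semantic scores from the image segmentation to per-cell class logits; grid size, channel counts, class list, and the architecture itself are illustrative assumptions, not the paper's actual network.

```python
# Illustrative fusion network: combines semantic and geometric
# grid channels into per-cell semantic labels.
import torch
import torch.nn as nn

N_SEM = 6   # e.g. {Road, Vegetation, Building, Pedestrian, Car, Other}
N_OCC = 3   # e.g. P(occupied), P(free), P(unknown) per cell

class GridFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_SEM + N_OCC, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, N_SEM, 1),   # per-cell class logits
        )

    def forward(self, sem_grid, occ_grid):
        # Concatenate semantic and geometric channels, then fuse.
        return self.net(torch.cat([sem_grid, occ_grid], dim=1))

# Example: one 128x128 grid.
fusion = GridFusionNet()
sem = torch.rand(1, N_SEM, 128, 128)   # projected segmentation scores
occ = torch.rand(1, N_OCC, 128, 128)   # Bayesian occupancy filter output
labels = fusion(sem, occ).argmax(dim=1)   # semantic label per grid cell
```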
Object tracking and classification serve as basic components for the various perception tasks of autonomous robots. They provide the robot with the capability of class-aware tracking and richer features for decision-making processes. The joint estimation of class assignments, dynamic states, and data associations results in a computationally intractable problem; therefore, the vast majority of the literature tackles tracking and classification independently. The work presented here proposes a probabilistic model and an inference procedure that render the problem tractable through a structured variational approximation. The framework is very generic and can be used for various tracking applications. It can handle objects with different dynamics, such as cars and pedestrians, and it can seamlessly integrate multi-modal features, for example object dynamics and appearance. The method is evaluated and compared with state-of-the-art techniques using the publicly available KITTI dataset.
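The underlying idea of coupling state estimation with class inference can be illustrated with a toy example: run one Kalman filter per hypothesised class, each with class-specific dynamics, and update a class posterior from the accumulated measurement likelihoods. This simplified fixed-class multiple-model filter is only an intuition aid, not the paper's structured variational factorization.

```python
# Toy joint tracking + classification: per-class Kalman filters
# plus a class posterior updated from measurement log-likelihoods.
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    # Predict, update, and return the measurement log-likelihood.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    y = z - H @ x
    K = P @ H.T @ np.linalg.inv(S)
    ll = -0.5 * (y @ np.linalg.solve(S, y)
                 + np.log(np.linalg.det(2 * np.pi * S)))
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P, ll

# Two hypothesised classes differing in process noise (illustrative).
H = np.array([[1.0, 0.0]]); R = np.array([[0.5]])
F = np.array([[1.0, 1.0], [0.0, 1.0]])
models = {"car": 0.01, "pedestrian": 1.0}   # process-noise scale per class
state = {c: (np.zeros(2), np.eye(2) * 10) for c in models}
log_post = {c: np.log(1 / len(models)) for c in models}

for z in np.array([[0.1], [1.2], [2.1], [3.2]]):   # toy measurements
    for c, q in models.items():
        x, P, ll = kalman_step(*state[c], z, F, np.eye(2) * q, H, R)
        state[c] = (x, P)
        log_post[c] += ll                      # accumulate class evidence
    norm = np.logaddexp(*log_post.values())    # normalise (two classes)
    print({c: np.exp(v - norm) for c, v in log_post.items()})
```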
Robust perception is the cornerstone of safe and environmentally-aware autonomous navigation systems. Autonomous robots are expected to recognise the objects in their surroundings under a wide range of challenging environmental conditions. This problem has been tackled by combining multiple sensor modalities with complementary characteristics. This paper proposes an approach to multi-sensor-based robotic perception that leverages the rich, dense appearance information provided by camera sensors and the range data provided by active sensors, independently of how dense the latter's measurements are. We introduce a framework, called XDvision, in which colour images are augmented with dense depth information obtained from sparser sensors such as lidars. We demonstrate the utility of our framework by comparing the performance of a standard CNN-based image classifier fed with image data only against that of a two-layer multimodal CNN trained on our augmented representation.
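The augmentation idea can be sketched as follows, under stated assumptions: sparse lidar returns projected into the image plane are densified (nearest-neighbour interpolation here; the paper's actual upsampling may differ) and stacked with the colour channels to form an RGB-D input tensor for the multimodal CNN.

```python
# Hedged sketch: densify sparse projected lidar depth and stack
# it with the colour image into an RGB-D tensor.
import numpy as np
from scipy.interpolate import griddata

def augment_rgb_with_depth(rgb, uv, depth):
    """rgb: (H, W, 3); uv: (N, 2) pixel coords of lidar hits; depth: (N,)."""
    h, w, _ = rgb.shape
    grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
    dense = griddata(uv, depth, (grid_u, grid_v), method="nearest")
    return np.dstack([rgb, dense])   # (H, W, 4) RGB-D

# Toy example: 20 random lidar hits projected into a 64x64 image.
rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
uv = rng.uniform(0, 64, size=(20, 2))
depth = rng.uniform(1.0, 30.0, size=20)
rgbd = augment_rgb_with_depth(rgb, uv, depth)
print(rgbd.shape)   # (64, 64, 4), ready for a multimodal CNN
```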
One of the most challenging tasks in the development of path planners for intelligent vehicles is the design of the cost function that models the desired behavior of the vehicle. While this task has traditionally been accomplished by hand-tuning the model parameters, recent approaches propose to learn the model automatically from demonstrated driving data using Inverse Reinforcement Learning (IRL). To determine whether the model has correctly captured the demonstrated behavior, most IRL methods require obtaining a policy by repeatedly solving the forward control problem. Calculating the full policy is costly in continuous or large domains and is thus often approximated by finding a single trajectory using traditional path-planning techniques. In this work, we propose to find such a trajectory using a conformal spatiotemporal state lattice, which offers two main advantages. First, by conforming the lattice to the environment, the search is focused only on motions that are feasible for the robot, saving computational power. Second, by considering time as part of the state, the trajectory is optimized with respect to the motion of the dynamic obstacles in the scene. As a consequence, the resulting trajectory can be used for model assessment. We show how the proposed IRL framework successfully handles highly dynamic environments by modeling the highway tactical driving task from demonstrated driving data gathered with an instrumented vehicle.
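A hedged sketch of the IRL outer loop described above: the cost weights are updated from the gap between demonstrated and planned feature counts, with a single planned trajectory standing in for the full policy. `plan_best_trajectory` and `feature_counts` are hypothetical placeholders; the conformal spatiotemporal state-lattice search itself is not implemented here, so with this placeholder the gradient is constant, whereas a real planner would respond to the updated weights.

```python
# Sketch of a max-entropy-style IRL update with a single-trajectory
# approximation of the policy feature expectations.
import numpy as np

def feature_counts(trajectory):
    # Placeholder: sum per-state feature vectors along a trajectory.
    return np.sum(trajectory, axis=0)

def plan_best_trajectory(weights):
    # Placeholder for the state-lattice planner: should return the
    # trajectory minimising weights . features, as (T, n_features).
    return np.abs(np.random.default_rng(0).normal(size=(20, 4)))

def irl_update(weights, demo_trajectory, lr=0.1):
    f_demo = feature_counts(demo_trajectory)
    f_plan = feature_counts(plan_best_trajectory(weights))
    # Gradient of the (approximated) demonstration log-likelihood.
    return weights + lr * (f_demo - f_plan)

weights = np.zeros(4)
demo = np.abs(np.random.default_rng(1).normal(size=(20, 4)))
for _ in range(10):
    weights = irl_update(weights, demo)
print(weights)
```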
In this work, we address the problem of lane change maneuver prediction in highway scenarios using information from sensors and perception systems widely used in automated driving. Our prediction approach is twofold. First, a driver model learned from demonstrations via Inverse Reinforcement Learning equips the host vehicle with the anticipatory behavior reasoning capability of common drivers. Second, inference on an interaction-aware augmented Switching State-Space Model allows the approach to account for the dynamic evidence observed. The use of a driver model that correctly balances the driving and risk-aversion preferences of a driver allows the computation of a planning-based maneuver prediction. Integrating this anticipatory prediction into the maneuver inference engine brings a degree of scene understanding into the estimate and leads to faster lane change detections than those obtained by relying on dynamics alone. The performance of the presented framework is evaluated on highway data collected with an instrumented vehicle. The combination of model-based maneuver prediction and filtering-based state and maneuver tracking is shown to outperform an Interacting Multiple Model filter in the detection of highway lane change maneuvers in terms of accuracy, detection latency (by an average of 0.4 seconds), and false-positive rates.
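As a toy illustration of combining a planning-based maneuver prior with dynamic evidence: an IRL-style driver model supplies a prior over {lane keep, lane change}, and a lateral-motion likelihood updates it recursively. The `driver_model_prior` scoring, the Gaussian evidence model, and all numbers below are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: anticipatory prior from a driver model, refined by
# recursive Bayesian updates on observed lateral velocity.
import numpy as np
from scipy.stats import norm

MANEUVERS = ["lane_keep", "lane_change"]

def driver_model_prior(gap_ahead, speed_advantage):
    # Placeholder for the IRL driver model: a slow leader with a free
    # adjacent lane makes a lane change more attractive.
    score = 0.4 * speed_advantage - 0.1 * gap_ahead
    p_change = 1.0 / (1.0 + np.exp(-score))
    return np.array([1.0 - p_change, p_change])

def update(posterior, lateral_velocity):
    # Class-conditional likelihoods of the observed lateral velocity.
    lik = np.array([norm.pdf(lateral_velocity, 0.0, 0.1),
                    norm.pdf(lateral_velocity, 0.5, 0.2)])
    posterior = posterior * lik
    return posterior / posterior.sum()

post = driver_model_prior(gap_ahead=8.0, speed_advantage=3.0)  # prior
for v_lat in [0.05, 0.15, 0.35, 0.5]:   # growing lateral motion
    post = update(post, v_lat)
    print(dict(zip(MANEUVERS, post.round(3))))
```

With a planning-based prior that already favours a lane change, the posterior crosses a detection threshold after fewer evidence updates than a dynamics-only (uniform-prior) filter would, which is the mechanism behind the earlier detections reported above.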