Lung nodules exhibit large variation in structural properties such as shape and surface texture. Their spatial properties also vary: they can be found attached to lung walls or blood vessels within complex, non-homogeneous lung structures. Moreover, nodules are small at their early stage of development. This poses a serious challenge to developing a computer-aided diagnosis (CAD) system with effective false-positive reduction. Hence, to reduce the false positives per scan and to address these challenges, this paper proposes a set of three diverse 3D attention-based CNN architectures (3D ACNN) whose predictions on low-dose volumetric computed tomography (CT) scans are fused to achieve more effective and reliable results. The attention mechanism is employed to concentrate selectively on nodule-specific features while assigning less weight to irrelevant features. Using this attention mechanism in a CNN, unlike traditional methods, yielded a significant gain in classification performance. Contextual dependencies are also taken into account by feeding three patches of different sizes surrounding the nodule as input to the ACNN architectures. The system is trained and validated on the publicly available LUNA16 dataset using 10-fold cross-validation, achieving a competition performance metric (CPM) score of 0.931. The experimental results demonstrate that neither a single patch nor a single architecture in a one-to-one fashion, as adopted in earlier methods, can achieve comparable performance, which signifies the necessity of fusing multiple multi-patch architectures. Though the proposed system is mainly designed for pulmonary nodule detection, it can be easily extended to classification tasks on any other 3D medical imaging data.
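The abstract does not include an implementation; purely as a toy illustration of its two core ideas, attention-weighted features and late fusion of predictions from several patch-based models, the following NumPy sketch is hypothetical (the function names, the squeeze-and-excitation-style gating, and the averaging fusion rule are assumptions, not the paper's actual 3D ACNN design):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention on a 3D feature volume.

    feat: (C, D, H, W) feature volume; w1, w2: learned projection matrices.
    Returns the feature volume re-weighted per channel.
    """
    squeezed = feat.mean(axis=(1, 2, 3))           # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)        # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate in (0, 1)
    return feat * scale[:, None, None, None]       # emphasize relevant channels

def fuse_predictions(probs):
    """Late fusion: average the nodule probabilities of the three ACNNs."""
    return float(np.mean(probs, axis=0))

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
att = channel_attention(feat, w1, w2)
fused = fuse_predictions([0.91, 0.85, 0.88])       # one score per architecture
```

The attention gate preserves the feature-map shape while scaling each channel into (0, 1), and the fusion step simply averages per-architecture nodule probabilities.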
The usage of transportation systems is inevitable; any assistance module that can catalyze the flow involved in transportation systems while improving the reliability of the processes involved is a boon for day-to-day human life. This paper introduces a novel, cost-effective, and highly responsive post-active driving assistance system, the "Adaptive-Mask-Modelling Driving Assistance System", with an intuitive wide field-of-view modeling architecture. The proposed system is a vision-based approach that processes a panoramic front view (stitched from temporally synchronous left and right stereo camera feeds) and a simple monocular rear view to generate robust and reliable proximity triggers along with correlative navigation suggestions. The system generates robust object masks and adaptive field-of-view masks using FRCNN+ResNet-101_FPN and DSED neural networks, which are later processed and mutually analyzed at the respective stages to trigger proximity alerts and frame reliable navigation suggestions. The proposed DSED network is an encoder-decoder convolutional neural network that estimates lane-offset parameters, which drive the adaptive modeling of the field-of-view range (157°–210°) during live inference. The proposed stages, deep neural networks, and implemented algorithms and modules are state-of-the-art and achieved outstanding performance with minimal loss values (L_{p,t}, L_δ, L_Total) during benchmarking analysis on our custom-built, KITTI, MS-COCO, Pascal-VOC, and Make3D datasets. The proposed assistance system is tested on our custom-built and multiple public datasets to generalize its reliability and robustness under multiple wild conditions, input traffic scenarios, and locations. INDEX TERMS Adaptive field-of-view modeling, automotive applications, driving assistance systems, lane detection and analysis, object detection and tracking, spatial auto-correlation
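The abstract states that the DSED network's lane-offset estimate drives the field-of-view range between 157° and 210°, but the exact mapping is not given. As a hedged illustration of what such an adaptive-FOV rule could look like, here is a minimal linear-interpolation sketch (the function name and the normalized-offset parameterization are assumptions):

```python
def adaptive_fov(lane_offset, min_fov=157.0, max_fov=210.0):
    """Map a normalized lane-offset estimate in [0, 1] to a field-of-view
    angle in degrees, clamping out-of-range inputs.

    0.0 -> vehicle centered in lane  -> narrowest FOV (157 degrees)
    1.0 -> maximal lateral offset    -> widest FOV   (210 degrees)
    """
    t = min(max(lane_offset, 0.0), 1.0)       # clamp to the valid range
    return min_fov + t * (max_fov - min_fov)  # linear interpolation
```

For example, `adaptive_fov(0.5)` returns 183.5°, the midpoint of the stated range; the clamp keeps a noisy offset estimate from producing an angle outside 157°–210°.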
Image stitching (or mosaicing) is an active research topic with numerous use cases in the computer vision, AR/VR, and computer graphics domains, but maintaining homogeneity among the input image sequences during the stitching/mosaicing process remains a primary limitation and major disadvantage. To tackle these limitations, this article introduces a robust and reliable image-stitching methodology (l,r-Stitch Unit), which takes multiple non-homogeneous image sequences as input to generate a reliable panoramically stitched wide view as the final output. The l,r-Stitch Unit consists of pre-processing and post-processing sub-modules and an l,r-PanoED network, where each sub-module is a robust ensemble of several deep-learning, computer-vision, and image-handling techniques. This article also introduces a novel convolutional encoder-decoder deep neural network (l,r-PanoED network) with a unique split-encoding-network methodology to stitch non-coherent input left and right stereo image pairs. The encoder network of the proposed l,r-PanoED extracts semantically rich deep feature maps from the input to stitch/map them into the wide-panoramic domain; the feature-extraction and feature-mapping operations are performed simultaneously in the l,r-PanoED's encoder network based on the split-encoding-network methodology. The decoder network of l,r-PanoED adaptively reconstructs the output panoramic view from the encoder network's bottleneck feature maps. The proposed l,r-Stitch Unit has been rigorously benchmarked against alternative image-stitching methodologies on our custom-built traffic dataset and several other public datasets.
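The l,r-PanoED network itself is learned and is not reproduced here; purely as a classical baseline illustrating what "composing a left/right pair into a wider view" means, the following non-learned overlap cross-fade sketch is offered (it is not the paper's method, and the fixed-overlap assumption is exactly the kind of homogeneity constraint the paper aims to remove):

```python
import numpy as np

def blend_stitch(left, right, overlap):
    """Stitch two equally sized views by linearly cross-fading `overlap`
    columns, a classical baseline for left/right panorama composition.

    left, right: (H, W) grayscale images; returns an (H, 2*W - overlap) view.
    """
    h, w = left.shape
    alpha = np.linspace(1.0, 0.0, overlap)                          # fade left out, right in
    seam = left[:, w - overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.hstack([left[:, :w - overlap], seam, right[:, overlap:]])
```

A learned encoder-decoder such as l,r-PanoED replaces both the fixed overlap and the linear seam with features extracted and mapped into the panoramic domain, which is what allows it to cope with non-homogeneous inputs.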
Multiple evaluation metrics (SSIM, PSNR, MSE, L_{α,β,γ}, FM-rate, average latency time) and wild conditions (rotational/color/intensity variances, noise, etc.) were considered during the benchmarking analysis; based on the results, our proposed method outperformed the other image-stitching methodologies and proved effective even on wild non-homogeneous inputs. INDEX TERMS Deep feature extraction, encoder-decoder CNN, image mosaicing, multi-image registration, non-homogeneous image stitching.
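Two of the cited metrics are standard and easy to reproduce; here is a minimal NumPy sketch of MSE and PSNR for 8-bit images (SSIM, FM-rate, and the paper's L_{α,β,γ} loss are omitted, as their exact formulations belong to the paper):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```

Higher PSNR (and lower MSE) between a stitched panorama and a reference view indicates better reconstruction fidelity, which is how these two metrics serve the benchmarking described above.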