Monocular depth estimation from Red-Green-Blue (RGB) images is a well-studied, ill-posed problem in computer vision that has been investigated intensively over the past decade using Deep Learning (DL) approaches. Recent approaches for monocular depth estimation mostly rely on Convolutional Neural Networks (CNNs). Estimating depth from two-dimensional images plays an important role in various applications, including scene reconstruction, 3D object detection, robotics and autonomous driving. This survey provides a comprehensive overview of this research topic, including the problem representation and a short description of traditional methods for depth estimation. Relevant datasets and 13 state-of-the-art deep learning-based approaches for monocular depth estimation are reviewed, evaluated and discussed. We conclude this paper with a perspective towards future research work requiring further investigation in monocular depth estimation challenges.
Background: Conventional methods for diagnosing tuberculosis (TB) carry a high risk of misdiagnosis. Objectives: This study aims to diagnose TB using hybrid machine learning approaches. Materials and Methods: Patient epicrisis reports obtained from the Pasteur Laboratory in northern Iran were used. All 175 samples have twenty features. The features are classified by combining a fuzzy logic controller with an artificial immune recognition system: the features are first normalized through a fuzzy rule-based labeling system, and the labeled features are then categorized into normal and tuberculosis classes using the Artificial Immune Recognition System (AIRS) algorithm. Results: Overall, the highest classification accuracy was reached for a learning rate (α) value of 0.8. The AIRS classification approach using fuzzy logic also yielded better diagnostic results in terms of detection accuracy than other empirical methods. Classification accuracy was 99.14%, sensitivity 87.00%, and specificity 86.12%.
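The fuzzy rule-based labeling step described above can be sketched as follows. This is a minimal illustration assuming triangular membership functions over a 0-100 feature range; the breakpoints are hypothetical, and the full AIRS classifier that consumes these labels is not reproduced here.

```python
# Hedged sketch: fuzzy rule-based feature labeling, as described in the abstract.
# The membership breakpoints below are illustrative assumptions, not the
# study's actual rules.

def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_label(value, low=(0, 0, 50), mid=(25, 50, 75), high=(50, 100, 100)):
    """Assign the linguistic label with the highest membership degree."""
    # The 'low' and 'high' sets are shoulder-shaped: clamp their flat edges
    # to full membership instead of evaluating a degenerate triangle.
    memberships = {
        "low": 1.0 if value <= low[1] else triangular(value, *low),
        "medium": triangular(value, *mid),
        "high": 1.0 if value >= high[1] else triangular(value, *high),
    }
    return max(memberships, key=memberships.get)
```

Each raw feature value is thus mapped to a linguistic label ("low", "medium", "high") before classification, which is the normalization role the fuzzy controller plays in the pipeline.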
Multisensor data fusion can be considered a strongly nonlinear system. A precise analytical solution is difficult to obtain, making such systems hard to analyze with routine diagnostic techniques. Since exact analytical treatments are extremely difficult, soft computing methodologies are considered promising for such applications. This paper presents a support vector regression (SVR) methodology for sensor fusion to improve tracking ability. Radial basis function (RBF) and polynomial kernels are used as the SVR kernel functions. The system combines Kalman filtering with a soft computing principle, i.e., SVR, to form an effective information fusion method for the target framework. A radar-infrared system is proposed to adapt to contextual changes and to lessen the uncertain disturbances in measurements from multisensory data. The experimental results show that the SVR with an RBF kernel achieves better predictive accuracy and generalization capability than the SVR with a polynomial kernel.
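The two SVR kernels compared above can be sketched concretely. The fusion function below is a simplified Nadaraya-Watson-style kernel-weighted estimate standing in for a trained SVR predictor, not the paper's implementation; the hyperparameter values are assumptions.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def poly_kernel(x, y, degree=2, coef0=1.0):
    """Polynomial kernel: k(x, y) = (<x, y> + coef0)^degree."""
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + coef0) ** degree

def kernel_fusion(query, samples, targets, kernel):
    """Kernel-weighted estimate at `query` from noisy sensor samples.
    A simplified stand-in for the trained SVR predictor: samples closer
    to the query (under the kernel) contribute more to the fused value."""
    weights = [kernel(query, s) for s in samples]
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, targets)) / total
```

Swapping `rbf_kernel` for `poly_kernel` in `kernel_fusion` mirrors the paper's comparison: the RBF kernel weighs samples by locality, while the polynomial kernel captures global interaction terms.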
Post-capture refocusing in smartphone cameras is achievable by using focal stacks. However, the accuracy of this effect depends entirely on how the depth layers in the stack are combined. The accuracy of the extended depth-of-field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, this paper proposes a framework based on the Preconditioned Alternating Direction Method of Multipliers (PADMM) for depth from focal stack and synthetic defocus applications. In addition to providing high structural accuracy and occlusion handling, the optimization of the proposed method converges faster and to better solutions than state-of-the-art methods. The evaluation was performed on 21 sets of focal stacks, and the optimization was compared against 5 other methods. Preliminary results indicate that the proposed method outperforms current state-of-the-art methods in terms of structural accuracy and optimization.
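To make the ADMM machinery underlying PADMM concrete, here is a generic (non-preconditioned) ADMM iteration on a toy 1-D problem, min 0.5‖x − b‖² + λ‖x‖₁, with the split x = z. This is a textbook sketch for orientation only; the paper's depth-from-focal-stack objective and preconditioning are far more involved.

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def admm_l1(b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||x - b||^2 + lam*||x||_1 via the split x = z.
    The data term is scalar-identity here, so the x-update is closed form.
    Toy illustration of the iteration scheme, not the paper's PADMM."""
    n = len(b)
    x, z, u = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        # x-update: minimize the augmented Lagrangian over x.
        x = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: proximal step on the L1 term.
        z = soft([x[i] + u[i] for i in range(n)], lam / rho)
        # Scaled dual update on the consensus residual x - z.
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z
```

For this separable problem the iterates converge to the soft-thresholded solution soft(b, λ); a preconditioner, as in PADMM, reshapes the subproblems so the same scheme converges faster on ill-conditioned objectives.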
Deep neural networks have been applied to a wide range of problems in recent years. In this work, a Convolutional Neural Network (CNN) is applied to the problem of determining depth from a single camera image (monocular depth). Eight different networks are designed to perform depth estimation, each suited to a particular feature level; networks with different pooling sizes capture different feature levels. After designing this set of networks, the models are combined into a single network topology using graph optimization techniques. This "Semi Parallel Deep Neural Network" (SPDNN) eliminates duplicated common network layers, and can be further optimized by retraining to achieve an improved model compared to the individual topologies. In this study, four SPDNN models are trained and evaluated in two stages on the KITTI dataset. In the first part of the experiment, the ground truth images are provided by the benchmark; in the second part, the ground truth images are depth maps obtained by applying a state-of-the-art stereo matching method. The results of this evaluation demonstrate that using post-processing techniques to refine the target of the network increases the accuracy of depth estimation on individual mono images. The second evaluation shows that using segmentation data alongside the original data as input can improve the depth estimation results to a point where performance is comparable with stereo depth estimation. The computational time is also discussed in this study. Many applications employ single cameras, e.g. security monitoring, automotive and consumer vision systems, and camera infrastructure for traffic and pedestrian management in smart cities. These and other smart-vision applications can greatly benefit from accurate monocular depth analysis.
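The duplicate-elimination idea behind SPDNN (merging several network topologies so shared layers appear only once) can be reduced to a toy graph merge over layer sequences. This is a loose structural analogy under assumed, hypothetical layer names; the actual SPDNN graph optimization and retraining are considerably more involved.

```python
def merge_common_layers(topologies):
    """Merge a list of layer sequences into one prefix tree, so a layer
    prefix shared by several networks is kept only once. A toy analogue
    of SPDNN's elimination of duplicated common network layers."""
    root = {}
    for layers in topologies:
        node = root
        for layer in layers:
            node = node.setdefault(layer, {})
    return root

def count_layers(tree):
    """Total number of distinct layer nodes kept after merging."""
    return sum(1 + count_layers(child) for child in tree.values())
```

Two three-layer networks sharing an initial convolution, for example, merge into five nodes instead of six, and the merged graph is then retrained as a single model.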
This challenge has been studied for a decade and is still an open research problem. Recently, the idea of using neural networks to solve this problem has attracted attention. In this paper, we tackle it by employing a Deep Neural Network (DNN) equipped with semantic pixel-wise segmentation, utilizing our recently published disparity post-processing method. This paper also introduces the use of Semi Parallel Deep Neural Networks (SPDNNs). An SPDNN is a semi-parallel network topology developed using a graph-theory optimization of a set of independently optimized CNNs, each targeted at a specific aspect of the more general classification problem. In [2, 3], the effect of the SPDNN approach on increasing convergence and improving model generalization is discussed. For the depth-from-monocular-vision problem, a fully connected topology, optimized for fine features, is combined with a series of max-pooled topologies (2×2, 4×4 and 8×8), each optimized for coarser image features. The optimized SPDNN topology is re-trained on the full training dataset and converges to an improved set of network weights. It is worth mentioning that this network design strategy is not limited to the 'depth from monocular vision' problem, and...
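The max-pooled topologies mentioned above (2×2, 4×4 and 8×8) each downsample the feature grid so coarser structures dominate. A minimal sketch of one such non-overlapping pooling step, in plain Python rather than a DL framework:

```python
def max_pool(image, size):
    """Non-overlapping size x size max pooling over a 2-D grid.
    Assumes the grid dimensions are divisible by `size`."""
    rows, cols = len(image), len(image[0])
    return [
        [
            max(image[r + dr][c + dc]
                for dr in range(size)
                for dc in range(size))
            for c in range(0, cols, size)
        ]
        for r in range(0, rows, size)
    ]
```

Applying this with sizes 2, 4 and 8 to the same input yields progressively coarser feature maps, which is why each pooled branch in the SPDNN specializes in a different feature scale.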