In recent years, significant progress has been made in developing more accurate and efficient machine learning algorithms for the segmentation of medical and natural images. In this review article, we highlight the imperative role of machine learning algorithms in enabling efficient and accurate segmentation in the field of medical imaging. We focus on several key studies applying machine learning methods to biomedical image segmentation. We review classical machine learning algorithms such as Markov random fields, k-means clustering, and random forests. Although these classical learning models are often less accurate than deep-learning techniques, they tend to be more sample efficient and structurally simpler. We also review deep-learning architectures, such as artificial neural networks (ANNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), and present the segmentation results reported with those models over the past three years. We highlight the successes and limitations of each machine learning paradigm. In addition, we discuss several challenges related to training the different machine learning models, and we present some heuristics to address those challenges.
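To make the classical baseline concrete, here is a minimal sketch of k-means intensity clustering applied to a grayscale image, one of the classical segmentation methods the review covers. This is an illustrative implementation of Lloyd's algorithm with quantile initialization, not code from any of the reviewed studies.

```python
import numpy as np

def kmeans_segment(image, k=2, iters=20):
    """Segment a grayscale image into k intensity clusters (Lloyd's algorithm)."""
    pixels = image.reshape(-1, 1).astype(float)
    # deterministic initialization: spread centroids over the intensity range
    centroids = np.quantile(pixels, np.linspace(0.0, 1.0, k)).reshape(k, 1)
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        dists = np.abs(pixels - centroids.T)      # shape (n_pixels, k)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape), centroids.ravel()
```

Such intensity-only clustering ignores spatial context, which is one reason Markov random fields (which add a spatial smoothness prior) and learned models are often preferred for medical images.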
We study deterministic and stochastic primal-dual sub-gradient algorithms for distributed optimization of a separable objective function with global inequality constraints. In both algorithms, the norm of the Lagrangian multipliers is controlled by augmenting the corresponding Lagrangian function with a quadratic regularization term. Specifically, we show that as long as the stepsize of each algorithm satisfies a certain restriction, the norm of the Lagrangian multipliers is upper bounded by an expression that is inversely proportional to the regularization parameter. We use this result to compute upper bounds on the sub-gradients of the Lagrangian function. For the deterministic algorithm, we prove a convergence rate for attaining the optimal objective value. In the stochastic optimization case, we similarly prove convergence rates both in expectation and with high probability, using the method of bounded martingale differences. For both algorithms, we demonstrate a trade-off between the convergence rate and the decay rate of the constraint violation, in the sense that improving the convergence rate slows the decay rate of the constraint violation and vice versa. We demonstrate the convergence of our proposed algorithms numerically for distributed regression with the hinge and logistic loss functions over different graph structures.
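The coupled primal-dual updates can be sketched on a toy scalar problem (minimize x² subject to x ≥ 1, i.e., g(x) = 1 − x ≤ 0); the function and parameter names below are illustrative, not taken from the paper. The −(ν/2)λ² regularizer keeps the multiplier bounded but biases the saddle point, so the returned point slightly violates the constraint — the trade-off the abstract describes.

```python
import numpy as np

def regularized_primal_dual(f_grad, g, g_grad, x0, alpha=0.01, nu=0.1, iters=5000):
    """Deterministic primal-dual sub-gradient method on the regularized
    Lagrangian L(x, lam) = f(x) + lam * g(x) - (nu / 2) * lam**2.
    The quadratic term in lam keeps the multiplier bounded."""
    x, lam = x0, 0.0
    x_avg = 0.0
    for t in range(1, iters + 1):
        # primal step: sub-gradient descent on L in x
        x = x - alpha * (f_grad(x) + lam * g_grad(x))
        # dual step: sub-gradient ascent on L in lam, projected onto lam >= 0
        lam = max(0.0, lam + alpha * (g(x) - nu * lam))
        # running average of the primal iterates
        x_avg += (x - x_avg) / t
    return x_avg, lam
```

For this toy problem the regularized saddle point satisfies 2x = λ and 1 − x = νλ, giving x = 1/(1 + 2ν) ≈ 0.833 for ν = 0.1 rather than the true optimum x = 1; shrinking ν reduces the constraint violation at the cost of a larger (but still bounded) multiplier.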