We introduce a hybrid stochastic estimator to design stochastic gradient algorithms for solving stochastic optimization problems. The hybrid estimator is a convex combination of an existing biased estimator and an unbiased one, and this combination yields a useful bound on its variance. We limit our consideration to a hybrid of SARAH and SGD for nonconvex expectation problems; however, the idea extends to a broader class of estimators in both convex and nonconvex settings. We propose a new single-loop stochastic gradient descent algorithm that achieves an O(max{σ³ε⁻¹, σε⁻³}) complexity bound for obtaining an ε-stationary point under smoothness and σ²-bounded variance assumptions. This complexity is better than the O(σ²ε⁻⁴) bound often obtained by state-of-the-art SGD methods when σ < O(ε⁻³). We also consider several extensions of our method, including constant and adaptive step-sizes with single-loop, double-loop, and mini-batch variants. We compare our algorithms with existing methods on several datasets using two nonconvex models.
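As a rough illustration of the hybrid estimator described above, the sketch below mixes the biased SARAH recursion with an unbiased SGD sample via a convex combination, applied to a toy one-dimensional quadratic finite sum. The function names, step size eta, and mixing weight beta are illustrative choices for this toy problem, not the paper's tuned parameters.

```python
import random

def grad(w, a):
    # Gradient of the toy component f_a(w) = 0.5 * (w - a)^2.
    return w - a

def hybrid_sgd(data, w0, beta=0.5, eta=0.05, iters=400, seed=0):
    """Single-loop SGD with a hybrid SARAH/SGD estimator:
        v_t = beta * (grad(w_t, xi) - grad(w_{t-1}, xi) + v_{t-1})
              + (1 - beta) * grad(w_t, zeta),
    i.e. a convex combination of the biased SARAH estimator and an
    unbiased SGD estimator (beta and eta are illustrative constants)."""
    rng = random.Random(seed)
    w_prev = w0
    v = grad(w0, rng.choice(data))      # initial stochastic gradient
    w = w0 - eta * v
    for _ in range(iters):
        xi, zeta = rng.choice(data), rng.choice(data)
        v = beta * (grad(w, xi) - grad(w_prev, xi) + v) \
            + (1.0 - beta) * grad(w, zeta)
        w_prev, w = w, w - eta * v
    return w

# Toy data: the average loss is minimized at mean(data) = 2.0.
data = [1.0, 2.0, 3.0]
w_star = hybrid_sgd(data, w0=0.0)
```

Setting beta = 1 recovers a pure SARAH-style recursion and beta = 0 recovers plain SGD; intermediate values trade the bias of the recursive term against the variance of the fresh sample.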
The total complexity (measured as the total number of gradient computations) of a stochastic first-order optimization algorithm that finds a first-order stationary point of a finite-sum smooth nonconvex objective function F(w) = (1/n) ∑_{i=1}^n f_i(w) has been proven to be at least Ω(√n/ε), where ε denotes the attained accuracy E[‖∇F(ŵ)‖²] ≤ ε for the outputted approximation ŵ [6]. In this paper, we provide a convergence analysis for a slightly modified version of the SARAH algorithm [14,15] and achieve a total complexity that matches the lower-bound worst-case complexity in [6] up to a constant factor when n ≤ O(ε⁻²) for nonconvex problems. For convex optimization, we propose SARAH++, with sublinear convergence for general convex problems and linear convergence for strongly convex problems; we also provide a practical version for which numerical experiments on various datasets show improved performance.
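The SARAH recursion referenced above can be sketched as follows for a toy finite sum: each outer iteration computes a full gradient, and each inner step reuses the previous estimate via the recursive update v_t = ∇f_i(w_t) − ∇f_i(w_{t−1}) + v_{t−1}. The function names and constants below are illustrative, not the paper's notation or tuned parameters.

```python
import random

def sarah(grads, full_grad, w0, eta=0.05, inner=50, outer=5, seed=0):
    """Sketch of the SARAH recursion: an outer loop computing the exact
    gradient, and an inner loop applying the recursive estimator
        v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}."""
    rng = random.Random(seed)
    w = w0
    for _ in range(outer):
        v = full_grad(w)                # exact gradient at the snapshot
        w_prev, w = w, w - eta * v
        for _ in range(inner):
            g = rng.choice(grads)       # sample one component gradient
            v = g(w) - g(w_prev) + v    # SARAH recursive update
            w_prev, w = w, w - eta * v
    return w

# Toy finite sum: f_i(w) = 0.5 * (w - a_i)^2, minimized at mean(a) = 2.0.
a = [1.0, 2.0, 3.0]
grads = [lambda w, ai=ai: w - ai for ai in a]
full_grad = lambda w: sum(g(w) for g in grads) / len(grads)
w_star = sarah(grads, full_grad, w0=0.0)
```

On this toy quadratic the component gradients share the same curvature, so the recursion tracks the full gradient exactly and the iterates converge deterministically; in general the recursion is a biased but low-variance estimator of ∇F.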
There is an urgent need to reduce the growing backlog of forensic examinations in Digital Forensics Laboratories (DFLs). Currently, DFLs routinely create forensic duplicates and perform in-depth forensic examinations of all submitted media. This approach is rapidly becoming untenable as more cases involve increasing quantities of digital evidence. A more efficient and effective three-tiered strategy for performing forensic examinations will enable DFLs to produce useful results in a timely manner at different phases of an investigation, and will reduce unnecessary expenditure of resources on less serious matters. The three levels of forensic examination are described along with practical examples and suitable tools. Realizing that this is not simply a technical problem, we address the need to update training and establish thresholds in DFLs. Threshold considerations include the likelihood of missing exculpatory evidence and seriousness of the offense. We conclude with the implications of scaling forensic examinations to the investigation.
We develop and analyze a variant of the variance-reducing stochastic gradient algorithm SARAH [10] that does not require computation of the exact gradient, so the new method applies to general expectation minimization problems rather than only to finite-sum problems. While the original SARAH algorithm, as well as its predecessor SVRG [2], requires an exact gradient computation at each outer iteration, the inexact variant of SARAH (iSARAH), which we develop here, requires only a stochastic gradient computed on a mini-batch of sufficient size. The proposed method combines variance reduction via sample-size selection with iterative stochastic gradient updates. We analyze the convergence rate of the algorithm for strongly convex, convex, and nonconvex cases, with an appropriate mini-batch size selected for each case. We show that, under one additional reasonable assumption, iSARAH achieves the best known complexity among stochastic methods for general convex stochastic value functions.
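The change from SARAH to its inexact variant can be sketched in a few lines: the outer gradient is estimated on a mini-batch rather than computed exactly, while the inner recursion is unchanged. The toy problem, names, and constants (batch size, step size, loop lengths) below are illustrative assumptions, not the paper's parameter choices.

```python
import random

def isarah(grads, w0, batch=50, eta=0.05, inner=50, outer=5, seed=0):
    """Sketch of inexact SARAH (iSARAH): the outer gradient is a
    mini-batch estimate of size `batch` instead of an exact gradient;
    the inner SARAH recursion is unchanged."""
    rng = random.Random(seed)
    w = w0
    for _ in range(outer):
        sample = [rng.choice(grads) for _ in range(batch)]
        v = sum(g(w) for g in sample) / batch   # inexact outer gradient
        w_prev, w = w, w - eta * v
        for _ in range(inner):
            g = rng.choice(grads)
            v = g(w) - g(w_prev) + v            # SARAH recursion, unchanged
            w_prev, w = w, w - eta * v
    return w

# Toy finite sum: f_i(w) = 0.5 * (w - a_i)^2, minimized at mean(a) = 2.0.
a = [1.0, 2.0, 3.0]
grads = [lambda w, ai=ai: w - ai for ai in a]
w_star = isarah(grads, w0=0.0)
```

Because the outer gradient is now a noisy estimate, the iterates only approach the true minimizer up to the sampling error of the mini-batch, which is why the analysis ties the required batch size to the target accuracy in each convexity regime.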