Abstract: This is a short tutorial on complexity studies for differentiable convex optimization. A complexity study is made for a class of problems, an "oracle" that obtains information about the problem at a given point, and a stopping rule for algorithms. These three items compose a scheme, for which we study the performance of algorithms and problem complexity. Our problem classes will be quadratic minimization and convex minimization in R^n. The oracle will always be first order. We study the performance of steepest descent…
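The first-order oracle model described in this abstract can be illustrated with a minimal sketch: steepest descent on a quadratic, where each iteration queries only the gradient and the error contracts at the classical rate governed by the condition number. The matrix A, vector b, and step size below are illustrative choices, not taken from the tutorial.

```python
import numpy as np

# A small ill-conditioned quadratic: f(x) = 0.5 x^T A x - b^T x.
# The eigenvalues of A give the strong-convexity and smoothness
# constants mu and L; kappa = L / mu is the condition number.
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0, 100.0])    # kappa = 100 (illustrative)
b = rng.standard_normal(3)
x_star = np.linalg.solve(A, b)     # exact minimizer, for reference

mu, L = 1.0, 100.0
x = np.zeros(3)
errs = [np.linalg.norm(x - x_star)]
for _ in range(500):
    grad = A @ x - b               # the only oracle call: first-order info
    x = x - (1.0 / L) * grad       # fixed step 1/L
    errs.append(np.linalg.norm(x - x_star))

# Classical bound: ||x_k - x*|| <= (1 - mu/L)^k ||x_0 - x*||
print(errs[-1], (1.0 - mu / L) ** 500 * errs[0])
```

The slow (1 - mu/L) contraction with kappa = 100 is exactly the kind of worst-case behavior a complexity study quantifies for this scheme.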
“…Our formulations entail solving convex programs with N variables and linear constraints. The complexity of an iterative solver for said programs is measured by the complexity of the initialization procedure, the worst-case complexity per iteration for a given target precision [24], and the convergence rate. Given our rADMM, the initialization process consists of calculating the matrix [·] and computing the matrix [·], which has computational complexity [·] for Algorithm 1 and the NOC approximation, and [·] for the REC method.…”
Minimally perturbed adversarial examples have been shown to drastically reduce the performance of one-stage classifiers while remaining imperceptible. This paper investigates the susceptibility of hierarchical classifiers, which use fine- and coarse-level output categories, to adversarial attacks. We formulate a program that encodes minimax constraints to induce misclassification of the coarse class of a hierarchical classifier (e.g., changing the prediction of a ‘monkey’ to a ‘vehicle’ instead of some ‘animal’). Subsequently, we develop solutions based on convex relaxations of said program. An algorithm is obtained using the alternating direction method of multipliers, with performance competitive with state-of-the-art solvers. We show the ability of our approach to fool the coarse classification through a set of measures, such as the relative loss in coarse classification accuracy and imperceptibility factors. In comparison with perturbations generated for one-stage classifiers, we show that fooling a classifier about the ‘big picture’ requires higher perturbation levels, which result in lower imperceptibility. We also examine the impact of different label groupings on the performance of the proposed attacks.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00034-022-02226-w.
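The ADMM machinery behind such attacks can be sketched on a toy problem: finding a minimum-norm perturbation that satisfies a single linearized margin constraint. The vector `a` and the problem itself are made-up stand-ins for illustration; the paper's actual hierarchical-attack program and its convex relaxations are more involved.

```python
import numpy as np

# Toy problem (assumed, not the paper's formulation):
#     minimize 0.5 * ||delta||^2   subject to  a^T delta >= 1
# solved by scaled two-block ADMM with the split f(delta) + g(z), delta = z,
# where g is the indicator of the halfspace {z : a^T z >= 1}.
a = np.array([3.0, 4.0])   # hypothetical linearized attack direction
rho = 1.0                  # ADMM penalty parameter

def project_halfspace(v):
    """Euclidean projection onto {z : a^T z >= 1}."""
    gap = 1.0 - a @ v
    return v + max(0.0, gap) * a / (a @ a)

delta = np.zeros(2)        # primal variable (f-block)
z = np.zeros(2)            # consensus copy (g-block)
u = np.zeros(2)            # scaled dual variable
for _ in range(200):
    # delta-update: argmin 0.5||d||^2 + (rho/2)||d - z + u||^2, in closed form
    delta = rho * (z - u) / (1.0 + rho)
    # z-update: projection onto the constraint set
    z = project_halfspace(delta + u)
    # dual update on the consensus residual
    u = u + delta - z

# This toy problem has the closed-form optimum a / ||a||^2 to compare against.
print(delta, a / (a @ a))
```

Each ADMM iteration here costs only a projection and a scaling, which is the kind of cheap per-iteration structure that makes ADMM-based attack solvers competitive with general-purpose convex solvers.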
“…However, the worst-case computational complexity depends on the number of iterations of Algorithm 1, which is related to the convergence performance. The complexity of differentiable convex optimization has been reported in [31]; however, the computational savings for non-convex problems have not been reported, to the best of the authors' knowledge.…”
Section: Table III: Computation Complexity Comparison of Precoding Matrix Design
“…See, for example, [2,4,5,6,8,11,14,19,21]. A review of complexity results for the convex case, in addition to novel techniques, can be found in [12].…”
Cubic-regularization and trust-region methods with worst-case first-order complexity O(ε^{-3/2}) and worst-case second-order complexity O(ε^{-3}) have been developed in the last few years. In this paper it is proved that the same complexities are achieved by means of a quadratic-regularization method with a cubic sufficient-descent condition instead of the more usual predicted-reduction based descent. Asymptotic convergence and order-of-convergence results are also presented. Finally, some numerical experiments comparing the new algorithm with a well-established quadratic-regularization method are shown.
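The mechanism described in this abstract, a quadratic-regularization trial step accepted only if the objective drops by at least a constant times ||s||^3, can be sketched as follows. The objective, constants, and update rules for sigma are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy smooth objective (assumed for illustration): f(x) = x0^4 + 2*x1^2
def f(x):    return x[0] ** 4 + 2.0 * x[1] ** 2
def grad(x): return np.array([4.0 * x[0] ** 3, 4.0 * x[1]])
def hess(x): return np.diag([12.0 * x[0] ** 2, 4.0])

x = np.array([1.5, -2.0])
sigma, theta = 1.0, 1e-2          # regularization weight, descent constant
for _ in range(200):
    g, B = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-8:
        break
    # Quadratic-regularization trial step: minimize the model
    #   g^T s + 0.5 s^T B s + 0.5 * sigma * ||s||^2
    s = np.linalg.solve(B + sigma * np.eye(2), -g)
    # Cubic sufficient-descent test: accept only if f drops by theta*||s||^3
    if f(x) - f(x + s) >= theta * np.linalg.norm(s) ** 3:
        x = x + s
        sigma = max(1e-8, 0.5 * sigma)   # successful step: relax sigma
    else:
        sigma = 2.0 * sigma              # failed step: regularize harder
print(np.linalg.norm(grad(x)))
```

The cubic right-hand side of the acceptance test is what distinguishes this scheme from classical predicted-reduction (ratio) tests, and it is the ingredient the abstract credits for the O(ε^{-3/2}) worst-case guarantee.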