Several learning problems involve solving min-max problems, e.g., empirical distributionally robust learning [Duchi, 2016, Curi et al., 2020] or learning with non-standard aggregated losses [Shalev-Shwartz and Wexler, 2016, Fan et al., 2017]. More specifically, these problems are convex-linear: the minimization is carried out over the model parameters w ∈ W and the maximization over the empirical distribution p ∈ K of the training set indexes, where K is the simplex or a subset of it. To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm. We argue that the efficiency of such approaches critically depends on the structure of K, and we propose two properties of K that facilitate the design of efficient algorithms. We focus on a specific family of sets S_{n,k} encompassing various learning applications and provide high-probability convergence guarantees to the minimax values.
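To make the convex-linear structure concrete, the sketch below sets up a toy instance of min_{w ∈ W} max_{p ∈ K} Σ_i p_i ℓ_i(w) with K the full simplex, and alternates no-regret updates for the two players. It is only an illustrative assumption of the general setup, not the paper's method: the w-player uses plain projected/online gradient descent, the p-player uses a full-information multiplicative-weights update rather than a (combinatorial) bandit algorithm, and the losses and step sizes are placeholders.

```python
# Illustrative sketch (not the paper's algorithm): alternating no-regret updates for
#   min_{w} max_{p in simplex} sum_i p_i * loss_i(w)
# on a toy least-squares problem. The w-player runs online gradient descent; the
# p-player runs a full-information multiplicative-weights (Hedge-style) update,
# standing in for the combinatorial bandit player used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5                      # n training examples, d model parameters
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

def per_example_losses(w):
    # loss_i(w) = 0.5 * (x_i^T w - y_i)^2, convex in w
    return 0.5 * (X @ w - y) ** 2

def per_example_grads(w):
    # gradient of loss_i with respect to w, one row per example
    return (X @ w - y)[:, None] * X

w = np.zeros(d)                   # model parameters, w in W = R^d here
p = np.ones(n) / n                # empirical distribution over example indexes
eta_w, eta_p = 0.05, 0.5          # step sizes (illustrative choices)

for t in range(500):
    losses = per_example_losses(w)
    # w-player: descent step on the p-weighted loss (convex in w, linear in p)
    w -= eta_w * per_example_grads(w).T @ p
    # p-player: multiplicative-weights ascent step, renormalized onto the simplex
    p *= np.exp(eta_p * losses)
    p /= p.sum()

print("final weighted loss:", per_example_losses(w) @ p)
```

In this sketch K is the whole simplex; restricting p to a subset such as S_{n,k} is what makes the maximization step nontrivial and motivates the structural properties studied in the paper.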