Abstract. We present a framework for margin-based active learning of linear separators. We instantiate it for a few important cases, some of which have been previously considered in the literature. We analyze the effectiveness of our framework both in the realizable case and in a specific noisy setting related to the Tsybakov small noise condition.
We state and analyze the first active learning algorithm that finds an ε-optimal hypothesis in any hypothesis class, when the underlying distribution has arbitrary forms of noise. The algorithm, A² (for Agnostic Active), relies only upon the assumption that it has access to a stream of unlabeled examples drawn i.i.d. from a fixed distribution. We show that A² achieves an exponential improvement (i.e., requires only O(ln(1/ε)) samples to find an ε-optimal classifier) over the usual sample complexity of supervised learning, for several settings considered before in the realizable case. These include learning threshold classifiers and learning homogeneous linear separators with respect to an input distribution which is uniform over the unit sphere.
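The exponential improvement is easiest to see for threshold classifiers in the realizable case. The sketch below is our own illustration of that phenomenon, not the A² algorithm itself: a passive learner needs on the order of 1/ε labeled examples on [0, 1], while an active learner can binary-search the threshold over an unlabeled pool using only about log(1/ε) label queries.

```python
# Illustrative sketch (not the A^2 algorithm): active learning of a
# threshold classifier h(x) = 1[x >= theta] on [0, 1] via binary search
# over a sorted unlabeled pool, querying the label oracle only at the
# midpoint of the current uncertainty interval.

import numpy as np

rng = np.random.default_rng(0)
theta = 0.637            # true (unknown) threshold
# Unlabeled pool drawn i.i.d. from the uniform distribution on [0, 1].
pool = np.sort(rng.uniform(0.0, 1.0, size=100_000))

def label(x):
    return int(x >= theta)   # the labeling oracle

lo, hi = 0, len(pool) - 1    # pool[lo] is negative, pool[hi] is positive
queries = 0
while hi - lo > 1:
    mid = (lo + hi) // 2
    queries += 1
    if label(pool[mid]) == 1:
        hi = mid
    else:
        lo = mid

theta_hat = pool[hi]         # smallest pool point labeled positive
print(queries)               # about log2(100000) ~ 17 label queries
```

The learned threshold is accurate up to the gap between consecutive pool points around theta, so a pool of size about 1/ε yields an ε-accurate threshold with O(log(1/ε)) label queries, versus Θ(1/ε) labels for passive supervised learning.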
We present approximation and online algorithms for a number of problems of pricing items for sale so as to maximize the seller's revenue in an unlimited supply setting. Our first result is an O(k)-approximation algorithm for pricing items to single-minded bidders who each want at most k items. This improves over recent independent work of Briest and Krysta [5], who achieve an O(k²) bound. For the case k = 2, where we obtain a 4-approximation, this can be viewed as the following graph vertex pricing problem: given a (multi)graph G with valuations w_e on the edges, find prices p_i ≥ 0 for the vertices to maximize Σ_{e=(i,j): w_e ≥ p_i + p_j} (p_i + p_j). We also improve the approximation of Guruswami et al. [11] from O(log m + log n) to O(log n), where m is the number of bidders and n is the number of items, for the "highway problem" in which all desired subsets are intervals on a line. Our approximation algorithms can be fed into the generic reduction of Balcan et al. [2] to yield an incentive-compatible auction with nearly the same performance guarantees, so long as the number of bidders is sufficiently large. In addition, we show how our algorithms can be combined with results of Blum and Hartline [3], Blum et al. [4], and Kalai and Vempala [13] to achieve good performance in the online setting, where customers arrive one at a time and each must be presented a set of item prices based only on knowledge of the customers seen so far.
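One natural randomized scheme for the k = 2 (graph vertex pricing) case can be sketched as follows: randomly split the vertices into two sides, give one side price 0, and price each vertex on the other side independently against the valuations of its incident edges. This is our own illustrative code under those simplifications; see the paper for the actual algorithm and its 4-approximation analysis.

```python
# Illustrative sketch of randomized graph vertex pricing for k = 2.
# An edge e = (u, v) with valuation w is a bidder who buys both
# endpoints iff p_u + p_v <= w, paying p_u + p_v.

import random

def vertex_pricing(vertices, edges, rng=random.Random(0)):
    """edges: list of (u, v, w) triples; returns {vertex: price}."""
    side_a = {v for v in vertices if rng.random() < 0.5}
    prices = {v: 0.0 for v in vertices}      # the other side is priced at 0
    for v in side_a:
        # Valuations of edges from v to the zero-priced side.
        ws = [w for (x, y, w) in edges
              if (x == v and y not in side_a) or (y == v and x not in side_a)]
        # Best single price for v in isolation is some edge valuation.
        best_p, best_rev = 0.0, 0.0
        for p in ws:
            rev = p * sum(1 for w in ws if w >= p)
            if rev > best_rev:
                best_p, best_rev = p, rev
        prices[v] = best_p
    return prices

def revenue(prices, edges):
    return sum(prices[u] + prices[v] for (u, v, w) in edges
               if prices[u] + prices[v] <= w)

# Tiny star example: vertex 3 is adjacent to valuations 2.0 and 3.0.
prices = vertex_pricing([1, 2, 3], [(3, 1, 2.0), (3, 2, 3.0)])
```

Pricing each side-A vertex in isolation is what makes the scheme tractable: once the other side is fixed at 0, the vertices' revenues decouple, and the expected loss from the random split and the zeroed side is where the constant-factor analysis comes in.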
Abstract. Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications in many areas, including combinatorial optimization, machine learning, and economics. In this work we study submodular functions from a learning-theoretic angle. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing both extremal properties and regularities of submodular functions, of interest to many areas.

Submodular functions are a discrete analog of convex functions that enjoy numerous applications and have structural properties that can be exploited algorithmically. They arise naturally in the study of graphs, matroids, covering problems, facility location problems, etc., and they have been extensively studied in operations research and combinatorial optimization for many years [8]. More recently, submodular functions have become key concepts in both the machine learning and algorithmic game theory communities. For example, submodular functions have been used to model bidders' valuation functions in combinatorial auctions [12,6,3,14], to solve feature selection problems in graphical models [11], and to solve various clustering problems [13]. In fact, submodularity has been the topic of several tutorials and workshops at recent major conferences in machine learning [1,9,10,2].

Despite the increased interest in submodularity in machine learning, little is known about the topic from a learning theory perspective. In this work, we provide a statistical and computational theory of learning submodular functions in a distributional learning setting.

Our study has multiple motivations. From a foundational perspective, submodular functions are a powerful, broad class of important functions, so studying their learnability allows us to understand their structure in a new way.
To draw a parallel to the Boolean-valued case, a class of comparable breadth would be the class of monotone Boolean functions; the learnability of such functions has been intensively studied [4,5]. From an applications perspective, algorithms for learning submodular functions may be useful in some of the applications where these functions arise, for example in the context of algorithmic game theory.

This note summarizes several results in the paper "Learning Submodular Functions", by Maria Florina Balcan and Nicholas Harvey, which appeared in the 43rd ACM Symposium on Theory of Computing (STOC) 2011.
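The "diminishing returns" law mentioned above has a concrete, checkable form: f is submodular iff f(S ∪ {x}) − f(S) ≥ f(T ∪ {x}) − f(T) for all S ⊆ T and x ∉ T. The brute-force checker below is our own illustration (exponential in the ground-set size, so only for tiny examples), applied to a coverage function, a canonical submodular family.

```python
# Brute-force submodularity check via the diminishing-returns condition:
# f(S + x) - f(S) >= f(T + x) - f(T) for every S subset of T and x not in T.

from itertools import combinations

def is_submodular(f, ground):
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for x in ground:
                if x in T:
                    continue
                if f(S | {x}) - f(S) < f(T | {x}) - f(T) - 1e-12:
                    return False
    return True

# Coverage function: f(S) = size of the union of the sets indexed by S.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
coverage = lambda S: len(set().union(*(sets[i] for i in S)))
print(is_submodular(coverage, {1, 2, 3}))  # True
```

By contrast, f(S) = |S|² has increasing marginal gains, so the same checker rejects it; that asymmetry between coverage-like and supermodular functions is exactly what the diminishing-returns condition captures.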
We consider the problem of pricing n items to maximize revenue when faced with a series of unknown buyers with complex preferences, and show that a simple pricing scheme achieves surprisingly strong guarantees.

We show that in the unlimited supply setting, a random single price achieves expected revenue within a logarithmic factor of the total social welfare for customers with general valuation functions, which need not even be monotone. This generalizes work of Guruswami et al. [18], who show a logarithmic factor only for the special cases of single-minded and unit-demand customers.

In the limited supply setting, we show that for subadditive valuations, a random single price achieves revenue within a factor of 2^{O(√(log n · log log n))} of the total social welfare, i.e., the optimal revenue the seller could hope to extract even if the seller could price each bundle differently for every buyer. This is the best approximation known for any item pricing scheme for subadditive (or even submodular) valuations, even using multiple prices. We complement this result with a lower bound exhibiting a sequence of subadditive (in fact, XOS) buyers for which any single price has approximation ratio 2^{Ω(log^{1/4} n)}, thus showing that single-price schemes cannot achieve a polylogarithmic ratio. This lower bound demonstrates a clear distinction between revenue maximization and social welfare maximization in this setting, for which [12,10] show that a fixed price achieves a logarithmic approximation in the case of XOS [12], and more generally subadditive [10], valuations.

We also consider the multi-unit case examined by [11] in the context of social welfare, and show that so long as no buyer requires more than a 1 − ε fraction of the items, a random single price does in fact achieve revenue within an O(log n) factor of the maximum social welfare.
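The random-single-price idea can be seen in miniature in the simplest setting: one item type in unlimited supply, and each buyer i buying one copy iff the price is at most her value v_i. Picking the price uniformly from a geometric (power-of-two) scale recovers the social welfare Σ v_i up to an O(log) factor in expectation. This is our own toy illustration of the phenomenon the paper generalizes to arbitrary valuations; the function name and setup are ours.

```python
# Toy illustration: expected revenue of a uniformly random power-of-two
# price, for buyers with positive values who each buy one copy iff
# price <= value. The expectation is within a factor O(log(vmax/vmin))
# of the welfare sum(values).

import math

def expected_revenue_random_price(values):
    vmax, vmin = max(values), min(values)
    levels = int(math.log2(vmax / vmin)) + 1
    prices = [vmax / 2**j for j in range(levels)]
    # Revenue at price p: p per buyer whose value is at least p.
    revs = [p * sum(1 for v in values if v >= p) for p in prices]
    return sum(revs) / len(prices)   # uniform over the price levels

values = [1, 2, 4, 8, 16]
rev = expected_revenue_random_price(values)   # welfare is 31
```

The intuition: every buyer's value is within a factor 2 of one of the log(vmax/vmin) price levels, and at that level she contributes about half her value to the revenue, so averaging over levels loses only the logarithmic factor.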