“…Thanks to its practical applications, several problems have been studied through this lens. Examples range from building better data structures [22,28], to improved competitive and approximation ratios for several online tasks [23,24,26,29-31], to cases where advice has been used to speed up algorithms [1,5] or to reduce their space complexity [19]. Our work can be seen as a formalization of the classic secretary problem in this general framework.…”
The secretary problem is probably the purest model of decision making under uncertainty. In this paper we ask: what advice can we give the algorithm to improve its success probability? We propose a general model that unifies a broad range of problems: from the classic secretary problem with no advice, to the variant where the quality of a secretary is drawn from a known distribution and the algorithm learns each candidate's quality on arrival, to more modern versions of advice in the form of samples, to an ML-inspired model where a classifier gives us a noisy signal about whether or not the current secretary is the best on the market. Our main technique is a factor-revealing LP that captures all of the problems above. We use this LP formulation to gain structural insight into the optimal policy. Using tools from linear programming, we present a tight analysis of optimal algorithms for secretaries with samples, optimal algorithms when secretaries' qualities are drawn from a known distribution, and a new noisy binary advice model. CCS Concepts: • Theory of computation → Online algorithms.
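As a concrete reference point for the no-advice baseline in the abstract above, the classic secretary problem is solved by the well-known "observe roughly n/e candidates, then hire the first one better than all seen so far" rule, whose ~1/e success probability is easy to check by simulation. The sketch below is our own illustration (function and variable names are not from the paper):

```python
import random

def classic_secretary(n, trials=20_000, rng=random.Random(0)):
    """Estimate the success probability of the classic 1/e stopping rule:
    observe the first ~n/e candidates without hiring, then hire the first
    later candidate who beats everyone seen so far."""
    cutoff = round(n / 2.718281828459045)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # rank 0 = best candidate on the market
        rng.shuffle(ranks)      # uniformly random arrival order
        best_seen = min(ranks[:cutoff], default=n)
        # hire the first candidate after the cutoff beating the benchmark;
        # if none exists, we are stuck with the last arrival
        hired = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += (hired == 0)
    return wins / trials
```

Running `classic_secretary(50)` should return a value close to 1/e ≈ 0.368, matching the classical analysis that the paper's LP framework recovers as its no-advice special case.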
“…This paper considers a recently proposed proportional weights algorithm for online matching. The algorithm was first proposed by Agrawal et al. [2] and further developed in [21,22].…”
Section: Proportional Weights for Online Matching
“…Later, these weights were considered in the model of algorithms augmented with predictions [21,22]. Lavastida et al. [22] showed that predicting these weights can be used to go beyond the worst case for online matching.…”
Section: Proportional Weights for Online Matching
“…There has been recent interest in augmenting online algorithms with machine-learned predictions. This line of work has led to new models of algorithm analysis for going beyond worst-case analysis [23,21,17,3]. The theoretical models considered in these works have led to the development of new algorithms which incorporate learned parameters (i.e.…”
Section: Introduction
“…An emerging line of work [21,22,5] has suggested that perhaps better matching algorithms exist if they use learned information about known real-world matching instances. That is, in practice there is often a wealth of data available on prior matching instances (e.g.…”
We study the performance of a proportional weights algorithm for online capacitated bipartite matching, modeling the delivery of impression ads. The algorithm uses predictions on the advertiser nodes to match each arriving impression node fractionally, in proportion to the weights of its neighbors. This paper gives a thorough empirical study of the algorithm's performance on a dataset of ad impressions from Yahoo! and shows its superior performance compared to natural baselines such as a greedy water-filling algorithm and the ranking algorithm. The proportional weights algorithm has recently received interest in the theoretical literature, where it was shown to have strong beyond-worst-case guarantees in the model of algorithms augmented with predictions. We extend these results to the case where the advertisers' capacities are no longer stationary over time. Additionally, we show the algorithm has near-optimal performance in the random-order arrival model when the number of impressions and the optimal matching are sufficiently large.
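The per-arrival allocation step described in the abstract above — splitting each impression fractionally across its neighboring advertisers in proportion to their predicted weights — can be sketched as follows. This is a simplified illustration with our own names and a naive capacity cap; the actual algorithm's handling of nearly exhausted capacities is more careful:

```python
def proportional_match(impressions, weights, capacities):
    """Fractionally assign each arriving impression across its neighboring
    advertisers in proportion to their (predicted) weights, never exceeding
    an advertiser's remaining capacity.

    impressions: list of sets of advertiser ids (the neighbors of each
                 arriving impression, in arrival order)
    weights:     dict advertiser -> predicted weight (nonnegative)
    capacities:  dict advertiser -> total capacity
    """
    remaining = dict(capacities)
    allocation = {a: 0.0 for a in weights}
    for neighbors in impressions:
        # only advertisers with capacity left can receive mass
        live = [a for a in neighbors if remaining[a] > 0]
        total_w = sum(weights[a] for a in live)
        if total_w == 0:
            continue  # impression goes unmatched
        for a in live:
            # proportional share, truncated at remaining capacity;
            # a full implementation would redistribute truncated mass
            share = min(weights[a] / total_w, remaining[a])
            allocation[a] += share
            remaining[a] -= share
    return allocation
```

For example, with two advertisers of weights 2 and 1 and ample capacity, every impression adjacent to both is split 2/3 vs. 1/3, so three such impressions yield total allocations of 2.0 and 1.0 respectively.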