Large online markets with two-sided preferences, such as online dating and online freelancing, now support businesses worth billions of dollars, and recommendation systems are critical components of such markets. Because a match in such a market depends on the preferences of both sides, a recommendation system must take both sides' preferences into account. This makes the problem fundamentally different from typical rating-based product recommendation. We pose it as a bipartite ranking problem, an area with extensive prior research. Generalized linear models are popular for constructing such rankings because they are easy to learn from large data sets and computationally simple to deploy on engineering platforms. However, we show that for markets with two-sided preferences, one can improve the AUC (area under the receiver operating characteristic curve) by fitting separate models for the preferences of each side and combining them in a two-layer ranking architecture, which we call the two-level model algorithm. On both synthetic and real data, the two-level model achieves better AUC than a direct application of a generalized linear model such as L1-regularized logistic regression or an ensemble method such as random forests. We provide a theoretical justification of the AUC optimality of the two-level model and pose a theoretical problem whose solution would yield a more general result.
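A minimal sketch of the two-level idea described above, under assumptions not spelled out in the abstract: each side's acceptance is modeled by its own L1 logistic regression, the two predicted acceptance probabilities are multiplied to score a pair, and AUC is compared against a single pooled model. The feature layout, the product combination rule, and all variable names are illustrative choices, not the authors' exact construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic pairs: x_a describes side A's view of the pair, x_b side B's view.
n, d = 5000, 10
x_a = rng.normal(size=(n, d))
x_b = rng.normal(size=(n, d))
w_a, w_b = rng.normal(size=d), rng.normal(size=d)

# Each side accepts independently; a match requires both acceptances.
p_a = 1 / (1 + np.exp(-x_a @ w_a))
p_b = 1 / (1 + np.exp(-x_b @ w_b))
accept_a = rng.random(n) < p_a
accept_b = rng.random(n) < p_b
match = (accept_a & accept_b).astype(int)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3,
                                       random_state=0)

# Level 1: one L1 logistic regression per side, trained on that side's accepts.
model_a = LogisticRegression(penalty="l1", solver="liblinear")
model_b = LogisticRegression(penalty="l1", solver="liblinear")
model_a.fit(x_a[idx_train], accept_a[idx_train])
model_b.fit(x_b[idx_train], accept_b[idx_train])

# Level 2: combine the two acceptance probabilities (here, a product) to rank.
score_two_level = (model_a.predict_proba(x_a[idx_test])[:, 1]
                   * model_b.predict_proba(x_b[idx_test])[:, 1])

# Baseline: a single L1 logistic regression on the concatenated features.
x_all = np.hstack([x_a, x_b])
baseline = LogisticRegression(penalty="l1", solver="liblinear")
baseline.fit(x_all[idx_train], match[idx_train])
score_baseline = baseline.predict_proba(x_all[idx_test])[:, 1]

print("two-level AUC:", roc_auc_score(match[idx_test], score_two_level))
print("baseline AUC: ", roc_auc_score(match[idx_test], score_baseline))
```

The product rule reflects the generative assumption that a match requires independent acceptance by both sides; other monotone combinations of the two scores would preserve the ranking argument.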
We study online learning under logarithmic loss with regular parametric models. Hedayati and Bartlett (2012b) showed that a Bayesian prediction strategy with Jeffreys prior and sequential normalized maximum likelihood (SNML) coincide and are optimal if and only if the latter is exchangeable, which in turn holds if and only if the optimal strategy can be computed without knowing the time horizon in advance. They posed the question of which model families have exchangeable SNML strategies. This paper fully answers this open problem for one-dimensional exponential families: exchangeability can occur for only three classes of natural exponential family distributions, namely the Gaussian, the Gamma, and the Tweedie exponential family of order 3/2.
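As a hedged aside (these power-law forms are standard facts about the three distributions, not a restatement of the paper's argument), the families can be recognized by their variance functions $V(\mu)$ as natural exponential families, where $\mu$ is the mean and $c, \nu > 0$ parametrize each family:

```latex
\begin{align*}
  \text{Gaussian:} \quad & V(\mu) = c, \\
  \text{Gamma:}    \quad & V(\mu) = \mu^{2}/\nu, \\
  \text{Tweedie of order } 3/2: \quad & V(\mu) = c\,\mu^{3/2}.
\end{align*}
```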
Recognition of Tibetan woodblock print is a difficult problem with many challenging steps. We propose a two-stage framework involving image preprocessing, which consists of noise removal and baseline detection, followed by simultaneous character segmentation and recognition with the aid of a generalized hidden Markov model (gHMM). For the latter stage, we train a gHMM and run the generalized Viterbi algorithm on the image to decode observations. There are two major motivations for using a gHMM. First, it incorporates a language model into the recognition system, which enforces grammar and disambiguates classification errors caused by printing defects and image noise. Second, the gHMM solves the segmentation challenge: simply put, a gHMM is an HMM whose emission model allows multiple consecutive observations to be mapped to the same state. For the features of our emission model we apply line and circle Hough transforms for stroke detection, and use class-specific scaling for feature weighting. With the gHMM, we find KMQDF to be the most effective distance metric for discriminating character classes. The accuracy of our system is 90.03%.
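A minimal sketch of the generalized Viterbi idea described above, under illustrative assumptions: each state (character class) may emit a segment of 1 to `max_seg` consecutive observation columns, segment scores come from a caller-supplied `seg_log_prob` function, and transitions encode a bigram language model. All names are hypothetical; in the actual system the emission score would be backed by Hough-transform stroke features and the KMQDF metric.

```python
import numpy as np

def generalized_viterbi(obs, n_states, log_trans, log_init,
                        seg_log_prob, max_seg=4):
    """Decode observations where each state may emit a variable-length
    segment of 1..max_seg consecutive observations (a gHMM).

    obs          : sequence of T observation vectors
    log_trans    : (n_states, n_states) log transition matrix (language model)
    log_init     : (n_states,) log initial state probabilities
    seg_log_prob : function(state, obs_segment) -> log emission score
    """
    T = len(obs)
    # best[t][s]: best log score of explaining obs[:t] and ending in state s
    best = np.full((T + 1, n_states), -np.inf)
    back = {}                    # (t, s) -> (prev_t, prev_s) for traceback
    best[0] = 0.0                # empty prefix

    for t in range(1, T + 1):
        for s in range(n_states):
            # Try every segment length k ending at position t.
            for k in range(1, min(max_seg, t) + 1):
                emit = seg_log_prob(s, obs[t - k:t])
                if t - k == 0:
                    cand, prev = log_init[s] + emit, (0, -1)
                else:
                    p = int(np.argmax(best[t - k] + log_trans[:, s]))
                    cand = best[t - k][p] + log_trans[p, s] + emit
                    prev = (t - k, p)
                if cand > best[t][s]:
                    best[t][s] = cand
                    back[(t, s)] = prev

    # Traceback: recover (state, segment_length) pairs.
    s, t, path = int(np.argmax(best[T])), T, []
    while t > 0:
        prev_t, prev_s = back[(t, s)]
        path.append((s, t - prev_t))
        t, s = prev_t, prev_s
    return path[::-1]
```

Compared with standard Viterbi, the only change is the inner loop over segment lengths, which is what lets decoding choose character boundaries and labels jointly.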
We study online learning under logarithmic loss with regular parametric models. In this setting, each strategy corresponds to a joint distribution on sequences. The minimax optimal strategy is the normalized maximum likelihood (NML) strategy. We show that the sequential normalized maximum likelihood (SNML) strategy predicts minimax optimally (i.e., as NML) if and only if the joint distribution on sequences defined by SNML is exchangeable. This property also characterizes the optimality of a Bayesian prediction strategy, in which case the optimal prior is Jeffreys prior, for a broad class of parametric models whose maximum likelihood estimator is asymptotically normal. In general, the optimal strategy, normalized maximum likelihood, depends on the number n of rounds of the game; when a Bayesian strategy is optimal, however, normalized maximum likelihood becomes independent of n. Our proof exploits this horizon-independence together with the asymptotics of normalized maximum likelihood. The asymptotic normality of the maximum likelihood estimator is what makes Jeffreys prior necessary.
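For concreteness, the two strategies can be written out as follows. These are the standard definitions (with sums replaced by integrals for continuous outcomes), where $x^t$ denotes the sequence $x_1, \dots, x_t$ and $\hat\theta(x^t)$ its maximum likelihood estimate:

```latex
% NML (Shtarkov) distribution for a known horizon n:
p_{\mathrm{NML}}(x^{n}) \;=\;
  \frac{p_{\hat\theta(x^{n})}(x^{n})}
       {\sum_{y^{n}} p_{\hat\theta(y^{n})}(y^{n})},
\qquad
% SNML prediction at round t, normalizing only over the next outcome:
p_{\mathrm{SNML}}(x_{t}\mid x^{t-1}) \;=\;
  \frac{p_{\hat\theta(x^{t})}(x^{t})}
       {\sum_{x_{t}'} p_{\hat\theta(x^{t-1}x_{t}')}(x^{t-1}x_{t}')}.
```

Exchangeability of the SNML joint distribution means its value is invariant under permutations of $x_1, \dots, x_n$; NML, by contrast, normalizes over whole length-$n$ sequences and therefore depends on the horizon.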