In this work, new achievable rates are derived for the uplink channel of a cellular network with joint multicell processing, where, unlike in previous results, the backhaul network has finite per-cell capacity. Namely, the cell sites are linked to the central joint processor via lossless links of finite capacity. The cellular network is abstracted by symmetric models, which render the analysis tractable. For this idealized family of models, achievable rates are presented for cell sites that employ compress-and-forward schemes combined with local decoding, for both Gaussian and fading channels. The rates are given in closed form for the classical Wyner model and the soft-handover model. These rates are then shown to be rather close to the optimal unlimited-backhaul joint-processing rates already for modest backhaul capacities, supporting the potential gain offered by the joint multicell processing approach. Particular attention is also given to the low-SNR characterization of these rates, through which the effect of the limited backhaul network is explicitly revealed. In addition, the rate at which the backhaul capacity should scale in order to maintain the high-SNR characterization of the unlimited-backhaul system is determined.
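For concreteness, the classical linear Wyner model mentioned above can be sketched as follows; the notation (inter-cell gain \(\alpha\), per-cell backhaul capacity \(C\)) is generic and illustrative rather than taken from the paper:

```latex
% Uplink Wyner model: cell-site m observes its own user's signal plus
% attenuated signals from the two adjacent cells, with inter-cell gain \alpha,
% and forwards a compressed observation over a backhaul link of capacity C.
\[
  y_m \;=\; x_m \;+\; \alpha\,(x_{m-1} + x_{m+1}) \;+\; z_m,
  \qquad z_m \sim \mathcal{CN}(0,1).
\]
```

In the limited-backhaul setting described in the abstract, each cell site can convey only a rate-\(C\) description of its observation \(y_m\) to the central processor, which is what caps the achievable per-cell rate relative to the unlimited-backhaul case.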
Multicell processing in the form of joint encoding for the downlink of a cellular system is studied under the assumption that the base stations (BSs) are connected to a central processor (CP) via finite-capacity links (finite-capacity backhaul). To obtain analytical insight into the impact of finite-capacity backhaul on the downlink throughput, the investigation focuses on a simple linear cellular system (as for a highway or a long avenue) based on the Wyner model. Several transmission schemes are proposed that require varying degrees of knowledge of the system codebooks at the BSs. Achievable rates are derived in closed form and compared with an upper bound. Performance is also evaluated in asymptotic regimes of interest (high backhaul capacity and extreme signal-to-noise ratio, SNR) and further corroborated by numerical results. The major finding of this work is that, even in the presence of oblivious BSs (that is, BSs with no information about the codebooks), multicell processing is able to provide ideal performance with relatively small backhaul capacities, unless the application of interest requires a high data rate (i.e., high SNR) and the backhaul capacity is not allowed to increase with the SNR. In these latter cases, some form of codebook information at the BSs becomes necessary.
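As a generic sanity check (not a result specific to this paper), the symmetric per-cell downlink rate of any such scheme is capped both by the per-BS backhaul capacity and by the unlimited-backhaul joint-processing rate, which is one way to read the asymptotic comparisons above:

```latex
% Cut-set-style bound: all user data must cross the CP-to-BS backhaul links,
% so the symmetric per-cell rate can exceed neither the per-BS backhaul
% capacity C nor the unlimited-backhaul rate R_\infty.
\[
  R_{\mathrm{per\text{-}cell}} \;\le\; \min\{\, C,\; R_{\infty} \,\}.
\]
```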
Online social networks have become very popular in recent years, and their numbers of users are already measured in many hundreds of millions. For various commercial and sociological purposes, an independent estimate of their sizes is important. In this work, algorithms for estimating the number of users in such networks are considered. The proposed schemes are also applicable to estimating the sizes of network sub-populations. The suggested algorithms interact with the social networks via their public APIs only and rely on no other external information. Due to obvious traffic and privacy concerns, the number of such interactions is severely limited. We therefore focus on minimizing the number of API interactions needed to produce good size estimates. We adopt the abstraction of social networks as undirected graphs and use random node sampling. By counting the number of collisions, or non-unique nodes, in the sample, we produce a size estimate. We then show analytically that the estimation error vanishes with high probability for a smaller number of samples than required by prior-art algorithms. Moreover, although our algorithms are provably correct for any graph, they excel when applied to social-network-like graphs. The proposed algorithms were evaluated on synthetic as well as real social networks such as Facebook, IMDB, and DBLP. Our experiments corroborated the theoretical results and demonstrated the effectiveness of the algorithms.
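A minimal sketch of the collision-counting idea, assuming uniform node samples are already available (e.g., obtained through a network's public API); the function name and toy node ids are illustrative and not taken from the paper:

```python
from collections import Counter
from math import comb

def estimate_size(samples):
    """Birthday-paradox style estimate of the number of nodes,
    given a list of (approximately) uniform node samples."""
    counts = Counter(samples)
    # A node drawn m times contributes C(m, 2) colliding pairs.
    colliding_pairs = sum(comb(m, 2) for m in counts.values())
    if colliding_pairs == 0:
        return None  # no collisions yet; more samples are needed
    r = len(samples)
    # For uniform sampling from n nodes, E[colliding pairs] = C(r, 2) / n,
    # so inverting that relation yields the size estimate.
    return comb(r, 2) / colliding_pairs

# Toy usage with hypothetical node ids:
print(estimate_size(["u3", "u7", "u3", "u9", "u1", "u7", "u4"]))
```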
In the Internet music scene, where recommendation technology is key for navigating huge collections, large market players enjoy a considerable advantage. Access to a wider pool of user feedback leads to an increasingly accurate analysis of user tastes, effectively creating a "rich get richer" effect. This work aims at significantly lowering the entry barrier for creating music recommenders through a paradigm coupling a public data source with a new collaborative filtering (CF) model. We claim that Internet radio stations form a readily available resource of abundant, fresh human signals on music through their playlists, which are essentially cohesive sets of related tracks. In a way, our models rely on the knowledge of a diverse group of experts in lieu of the commonly used wisdom of crowds. Over several weeks, we aggregated the publicly available playlists of thousands of Internet radio stations, resulting in a dataset encompassing millions of plays and hundreds of thousands of tracks and artists. This provides the large-scale ground data necessary to mitigate the cold-start problem of new items at both mature and emerging services. Furthermore, we developed a new probabilistic CF model tailored to the Internet radio resource. The success of the model was empirically validated on the collected dataset. Moreover, we tested the model in a cross-source transfer-learning manner: the same model trained on the Internet radio data was used to predict the behavior of Yahoo! Music users. This demonstrates the ability to tap Internet radio signals in other music recommendation setups. Based on the encouraging empirical results, our hope is that the proposed paradigm will make quality music recommendation accessible to all interested parties in the community.
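The paper's probabilistic CF model is not reproduced here; the following minimal sketch only illustrates the underlying intuition of treating each station's playlist as a cohesive set of related tracks and mining co-play signals from it. All function names and the toy playlists are hypothetical:

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(playlists):
    """Count how often each pair of tracks is played by the same station."""
    co = defaultdict(Counter)
    for tracks in playlists:
        for a, b in combinations(set(tracks), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(seed_track, co, k=5):
    """Return the k tracks most frequently co-played with the seed track."""
    return [track for track, _ in co[seed_track].most_common(k)]

# Hypothetical playlists aggregated from three stations:
playlists = [
    ["artistA - song1", "artistB - song2", "artistC - song3"],
    ["artistB - song2", "artistC - song3", "artistD - song4"],
    ["artistA - song1", "artistC - song3"],
]
co = build_cooccurrence(playlists)
print(recommend("artistC - song3", co, k=2))
```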