In this paper, we cast the image-ranking problem as the task of identifying "authority" nodes on an inferred visual similarity graph, and propose an algorithm to analyze the visual link structure that can be created among a group of images. Through an iterative procedure based on the PageRank computation, a numerical weight is assigned to each image; this measures its relative importance to the other images being considered. The incorporation of visual signals in this process differs from the majority of large-scale commercial search engines in use today. Commercial search engines often rely solely on textual cues from the pages in which images are embedded to rank images, and often entirely ignore the content of the images themselves as a ranking signal. To quantify the performance of our approach in a real-world system, we conducted a series of experiments based on the task of retrieving images for 2000 of the most popular product queries. Our experimental results show significant improvement, in terms of user satisfaction and relevance, in comparison to the most recent Google Image Search results.
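The iterative procedure described above can be sketched as a standard PageRank power iteration over a column-normalized similarity matrix. This is a minimal illustration, not the paper's implementation: the similarity scores, the damping factor value, and the function name `visual_rank` are all assumptions.

```python
import numpy as np

def visual_rank(S, d=0.85, iters=100):
    """Power iteration over a visual similarity graph.

    S[i, j] >= 0 is a (hypothetical) visual similarity score between
    images i and j; d is the damping factor as in standard PageRank.
    Returns a weight per image that sums to 1.
    """
    n = S.shape[0]
    # Column-normalize so each column's outgoing "link" weights sum to 1.
    col = S.sum(axis=0)
    col[col == 0] = 1.0          # avoid division by zero for isolated nodes
    P = S / col
    r = np.full(n, 1.0 / n)      # uniform initial weights
    for _ in range(iters):
        r = d * P @ r + (1 - d) / n
    return r
```

Images that many other images resemble accumulate weight, which is the "authority" intuition from the text.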
We demonstrate that, with the availability of distributed computation platforms such as Amazon Web Services and open-source tools, it is possible for a small engineering team to build, launch and maintain a cost-effective, large-scale visual search system. We also demonstrate, through a comprehensive set of live experiments at Pinterest, that content recommendation powered by visual search improves user engagement. By sharing our implementation details and the lessons learned from launching a commercial visual search engine from scratch, we hope visual search will be more widely incorporated into today's commercial applications.
Background: Yunnan Province is located in southwestern China and borders Southeast Asian countries, all of which are dengue-endemic areas. In 2000–2013, sporadic imported cases of dengue fever (DF) were reported almost annually in Yunnan Province. During 2013–2015, we confirmed that a large-scale indigenous DF outbreak emerged in cities of Yunnan Province near the China-Myanmar-Laos border.
Methods: Epidemiological characteristics of DF in Yunnan Province during 2013–2015 were evaluated by retrospective analysis. A total of 232 dengue virus (DENV)-positive sera were randomly collected from patients with DF in Yunnan Province for sequence analysis of the capsid/premembrane region of DENV. The envelope gene of DENV isolates was also amplified and sequenced. Phylogenetic analyses were performed using the neighbor-joining method with the Tajima-Nei model.
Results: Phylogenetically, all DENV-positive samples could be classified into DENV-1 genotype I and DENV-2 Asian I genotype during 2013–2015 and DENV-4 genotype I in 2015 from Ruili City; and DENV-3 genotype II in 2013 and DENV-2 Cosmopolitan genotype in 2015 from Xishuangbanna Prefecture.
Conclusions: Our results indicated that DF imported by patients from Laos and Myanmar was the primary cause of the DF epidemic in Yunnan Province. Additionally, DENV strains of all four serotypes were identified in indigenous cases in Yunnan Province during the same period, and the dengue epidemic pattern observed in southwestern Yunnan showed characteristics of a hypoendemic nature: circulation of DENV-1 and DENV-2 over consecutive years.
Electronic supplementary material: The online version of this article (doi:10.1186/s12879-017-2401-1) contains supplementary material, which is available to authorized users.
Over the past three years Pinterest has experimented with several visual search and recommendation services, including Related Pins (2014), Similar Looks (2015), Flashlight (2016) and Lens (2017). This paper presents an overview of our visual discovery engine powering these services, and shares the rationales behind our technical and product decisions such as the use of object detection and interactive user interfaces. We conclude that this visual discovery engine significantly improves engagement in both search and recommendation tasks.
We address the problem of large-scale annotation of web images. Our approach is based on the concept of a visual synset: a group of images that are visually similar and semantically related. Each visual synset represents a single prototypical visual concept and has an associated set of weighted annotations. Linear SVMs are used to predict visual synset membership for unseen images, and a weighted voting rule is used to construct a ranked list of predicted annotations from the set of visual synsets. We demonstrate that visual synsets lead to better performance than standard methods on a new annotation database containing more than 200 million images and 300 thousand annotations, the largest reported to date.
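The weighted voting step can be sketched as follows. This is a toy illustration under stated assumptions: the synset IDs, annotation weights, and the convention that only positive SVM decision values cast votes are all made up for the example, not taken from the paper.

```python
def rank_annotations(scores, synset_annotations):
    """Weighted voting over visual synsets for one image.

    scores: dict mapping synset id -> SVM decision value for the image
            (hypothetical; positive means predicted membership).
    synset_annotations: dict mapping synset id -> {annotation: weight}.
    Returns annotations ranked by accumulated, score-weighted votes.
    """
    votes = {}
    for sid, s in scores.items():
        if s <= 0:  # skip synsets the SVM rejects
            continue
        for ann, w in synset_annotations[sid].items():
            votes[ann] = votes.get(ann, 0.0) + s * w
    return sorted(votes, key=votes.get, reverse=True)
```

Each accepted synset contributes its weighted annotations, so annotations shared by several matching synsets rise in the ranking.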
The use of Bayesian networks for classification problems has received a significant amount of recent attention. Although computationally efficient, the standard maximum likelihood learning method tends to be suboptimal due to the mismatch between its optimization criterion (data likelihood) and the actual goal of classification (label prediction accuracy). Recent approaches to optimizing classification performance during parameter or structure learning show promise, but lack the favorable computational properties of maximum likelihood learning. In this paper we present boosted Bayesian network classifiers, a framework that combines discriminative data-weighting with generative training of intermediate models. We show that boosted Bayesian network classifiers include the basic generative models as a special case, but improve their classification performance when the model structure is suboptimal. We also demonstrate that structure learning is beneficial in the construction of boosted Bayesian network classifiers. On a large suite of benchmark datasets, this approach outperforms generative graphical models such as naive Bayes and TAN in classification accuracy, and has comparable or better performance than other discriminatively trained graphical models including ELR and BNC. Furthermore, boosted Bayesian networks require significantly less training time than the ELR and BNC algorithms.
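The combination of discriminative data-weighting with generative training can be sketched with AdaBoost over a weight-aware naive Bayes base model. This is a minimal sketch under assumptions (binary features and labels, Bernoulli naive Bayes, AdaBoost.M1 reweighting), not the paper's algorithm or its structure-learning component.

```python
import numpy as np

def weighted_nb(X, y, w):
    """Generative step: fit Bernoulli naive Bayes with instance weights w.
    X is a binary feature matrix, y a binary label vector."""
    params = {}
    for c in (0, 1):
        wc = w[y == c]
        prior = wc.sum() / w.sum()
        # Weighted, Laplace-smoothed estimate of P(x_j = 1 | c)
        theta = ((X[y == c] * wc[:, None]).sum(0) + 1.0) / (wc.sum() + 2.0)
        params[c] = (np.log(prior), np.log(theta), np.log(1 - theta))
    def predict(X):
        scores = [params[c][0] + X @ params[c][1] + (1 - X) @ params[c][2]
                  for c in (0, 1)]
        return (scores[1] > scores[0]).astype(int)
    return predict

def boosted_nb(X, y, rounds=5):
    """Discriminative step: AdaBoost.M1 reweights the data between
    generatively trained naive Bayes rounds."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    models, alphas = [], []
    for _ in range(rounds):
        m = weighted_nb(X, y, w)
        pred = m(X)
        err = w[pred != y].sum()
        if err == 0 or err >= 0.5:     # perfect or useless base model
            models.append(m); alphas.append(1.0)
            break
        a = 0.5 * np.log((1 - err) / err)
        w *= np.exp(np.where(pred == y, -a, a))  # upweight mistakes
        w /= w.sum()
        models.append(m); alphas.append(a)
    def predict(X):
        agg = sum(a * (2 * m(X) - 1) for m, a in zip(models, alphas))
        return (agg > 0).astype(int)
    return predict
```

With a single round and uniform weights this reduces to plain naive Bayes, which mirrors the claim that the framework includes the basic generative models as a special case.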
The traffic congestion index reflects the state of traffic flow. Detecting and analyzing it makes it possible to estimate the operating status of roads, helps traffic managers plan and organize road traffic, and supports travelers in making reasonable travel decisions. Several evaluation indexes of traffic conditions were analyzed. Based on the theory of fuzzy mathematics, membership functions for these evaluation indexes were designed, and three methods for calculating the traffic congestion index were proposed and their results compared. The comparison showed that the second method, which derives the index from saturation via the corresponding level of service, does not reflect the traffic situation well. In contrast, the first method, which calculates the index from travel speed, and the third method, which uses comprehensive parameters, produce roughly similar results that are consistent with actual traffic conditions.
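A fuzzy membership function of the kind described can be sketched as follows. This is an illustrative toy, not the paper's design: the trapezoidal shape, the speed breakpoints, and the defuzzification weights are all made-up assumptions.

```python
def membership(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside (a, d),
    1 on [b, c], linear on the two ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def congestion_index(speed_kmh):
    """Toy congestion index from travel speed alone (in the spirit of
    the first method above, with illustrative breakpoints).
    Returns a value in [0, 1], where 1 means fully congested."""
    congested = membership(speed_kmh, -1, 0, 10, 25)
    slow = membership(speed_kmh, 10, 25, 35, 50)
    free = membership(speed_kmh, 35, 50, 200, 201)
    total = congested + slow + free
    if total == 0:
        return 0.0
    # Weighted defuzzification: each fuzzy state maps to an index level.
    return (1.0 * congested + 0.5 * slow + 0.0 * free) / total
```

Overlapping trapezoids let intermediate speeds belong partly to two states, so the index varies smoothly instead of jumping between discrete congestion levels.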