Most consumers do not want to edit their images, either because they do not have the time or the know-how. They do want to be able to press a button that will magically make the objects captured in their photo look better. At the heart of enabling such functionality lie image analysis models: the more we know about the objects in the photo, the better we can enhance and modify it according to human preferences. We present a necessary piece of this puzzle, breaking the image into significant segments and finding important perceptual objects, including skin, sky, snow, and foliage.
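The abstract does not specify the segmentation model, so the following is only a hedged toy sketch of the general idea: assigning pixels to perceptual classes such as skin, sky, snow, and foliage. The hand-picked color thresholds below are illustrative assumptions, not the paper's method.

```python
# Toy sketch only: label each RGB pixel with a coarse perceptual class
# using simple, hand-picked color heuristics (assumed, not from the paper).
def classify_pixel(r, g, b):
    """Return a coarse perceptual label for one RGB pixel (values 0-255)."""
    if r > 200 and g > 200 and b > 200:
        return "snow"     # bright, near-neutral pixels
    if b > r and b > g and b > 120:
        return "sky"      # blue-dominant pixels
    if g > r and g > b:
        return "foliage"  # green-dominant pixels
    if r > 95 and g > 40 and b > 20 and r > g > b:
        return "skin"     # warm, red-dominant pixels
    return "other"

def segment(image):
    """Label every pixel of a nested-list RGB image."""
    return [[classify_pixel(*px) for px in row] for row in image]

# A 2x2 toy "image": one pixel per class.
image = [[(230, 230, 240), (60, 80, 200)],
         [(40, 160, 50), (180, 120, 90)]]
labels = segment(image)
```

A real system would of course learn these class boundaries from data rather than hard-code them; the sketch only shows the input/output shape of per-pixel perceptual labeling.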
Our business users are often frustrated with clustering results that do not suit their purpose: when trying to discover clusters of product complaints, the algorithm may return clusters of product models instead. The fundamental issue is that complex text data can be clustered in many different ways, and it is optimistic to expect relevant clusters from an unsupervised process, even with parameter tinkering. We studied this problem in an interactive context and developed an effective solution that recasts the problem formulation, radically differently from traditional or semi-supervised clustering. Given training labels for some known classes, our method incrementally proposes complementary clusters. In tests on various business datasets, we consistently obtain relevant results at interactive time scales. This paper describes the method and demonstrates its superior ability on publicly available datasets. For automated evaluation, we devised a cluster evaluation framework that matches the business user's utility.
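The abstract does not detail the algorithm, so the following is a minimal, assumed sketch of the core idea of complementary clustering: given labels for the classes the user already knows, propose a cluster from the portion of the data those classes do not explain. The 1-D data, the centroid-distance criterion, and the `radius` parameter are all illustrative assumptions.

```python
# Hedged sketch of complementary clustering on toy 1-D data: points far
# from every known-class centroid form the proposed complementary cluster.
def centroid(points):
    return sum(points) / len(points)

def complementary_cluster(data, known_classes, radius):
    """Return points farther than `radius` from every known-class centroid."""
    centroids = [centroid(pts) for pts in known_classes.values()]
    return [x for x in data if all(abs(x - c) > radius for c in centroids)]

# Known classes (e.g. already-labeled product models) and unlabeled data.
known = {"model A": [1.0, 1.2, 0.9], "model B": [5.0, 5.1, 4.8]}
data = [1.1, 5.0, 9.8, 10.1, 9.9, 1.05]
proposal = complementary_cluster(data, known, radius=2.0)
```

Here the points near 10 are not explained by either known class, so they are proposed as a new, complementary cluster; in an interactive loop the user would label this proposal and the process would repeat.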
Observing and evaluating print defects represents a major challenge in print quality research. Visual identification and quantification of print defects is a key step toward improving print quality. However, the page content may confound the visual evaluation of print defects in actual printouts. Our research focuses on banding in the presence of print content in the context of commercial printing. In this paper, we describe a psychophysical experiment to evaluate the perception of bands in the presence of print content. Banding defects are added by simulation to a selected set of commercial print contents to form our set of stimuli. The participants in the experiment mark these stimuli, based on their observations, via a graphical user interface (GUI). From the collection of marked stimuli, we observed general consistency among participants. Moreover, the results showed that an observer is much more likely to perceive a banding defect in a smooth area than in a high-frequency area. Furthermore, our results indicate that the luminance of the image may locally affect the visibility of print defects to some degree.
Print quality (PQ) is a composite attribute defined by human perception. As such, the ultimate way to determine and quantify PQ is by human survey. However, repeated surveys are time-consuming and often burden processes that involve repeated evaluations. A desirable alternative is an automatic quality rating tool. Once such a quality evaluation measure is proposed, it should be qualified; that is, it should be shown to reflect human assessment. If two human opinions conflict, the tool cannot possibly agree with both. Conflicts between human opinions are common, which complicates evaluating the tool's success in reflecting human judgment. There are many possible ways to measure the agreement between human assessment and tool evaluation, but different methods may yield conflicting results. It is therefore important to pre-establish an appropriate method for evaluating quality evaluation tools, one that takes the disagreement among survey participants into account. In this paper, we model human quality preference and derive the most appropriate method to qualify quality evaluation tools. We demonstrate the resulting qualification method in a real-life scenario: the qualification of the mechanical band meter.
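The abstract does not give the derived qualification method, so the following is only an assumed illustration of the underlying concern: scoring a tool's agreement with human judgment while accounting for human disagreement. One simple scheme (not the paper's) is pairwise agreement weighted by human consensus, so pairs the survey participants themselves split on contribute less to the score.

```python
# Hedged sketch: pairwise tool-vs-human agreement, weighted by how strongly
# the survey participants agree on each pair (0.5 = full conflict, 1.0 = consensus).
from itertools import combinations

def weighted_agreement(tool_scores, human_votes):
    """tool_scores: {item: tool rating}; human_votes: {item: per-rater ratings}.
    For each item pair, the tool agrees if it orders the pair the same way
    as the mean human rating; the pair's weight is the fraction of raters
    backing the majority ordering."""
    total_w = agree_w = 0.0
    for a, b in combinations(tool_scores, 2):
        diffs = [ra - rb for ra, rb in zip(human_votes[a], human_votes[b])]
        majority = sum(d > 0 for d in diffs) / len(diffs)
        weight = max(majority, 1 - majority)  # strength of human consensus
        human_prefers_a = sum(diffs) > 0
        tool_prefers_a = tool_scores[a] > tool_scores[b]
        total_w += weight
        agree_w += weight * (human_prefers_a == tool_prefers_a)
    return agree_w / total_w

# Hypothetical tool scores and per-rater survey ratings for three prints.
tool = {"print1": 0.9, "print2": 0.4, "print3": 0.7}
votes = {"print1": [5, 4, 5], "print2": [2, 3, 2], "print3": [4, 4, 3]}
score = weighted_agreement(tool, votes)
```

The point of the weighting is exactly the issue the abstract raises: when humans conflict, no tool can agree with everyone, so such pairs should count less against (or for) the tool being qualified.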