2014
DOI: 10.14778/2732269.2732275

Computing k-regret minimizing sets

Abstract: Regret minimizing sets are a recent approach to representing a dataset D by a small subset R of size r of representative data points. The set R is chosen such that executing any top-1 query on R rather than D is minimally perceptible to any user. However, such a subset R may not exist, even for modest sizes r. In this paper, we introduce the relaxation to k-regret minimizing sets, whereby a top-1 query on R returns a result imperceptibly close to the top-k on D. We show that, in general, with or without the re…
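For intuition, the following is a minimal sketch (not taken from the paper) of how the maximum k-regret ratio of a candidate subset R could be estimated by sampling linear utility functions; the function names, the sampling-based evaluation, and the restriction to non-negative linear utilities are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def max_k_regret_ratio(D, R, k, n_utils=10000, seed=0):
    """Estimate the maximum k-regret ratio of subset R w.r.t. dataset D
    over randomly sampled non-negative linear utility functions.
    D, R: (n, d) and (r, d) arrays of points with non-negative attributes.
    A ratio of 0 means every sampled user finds a point in R at least as
    good as her k-th favourite point in D."""
    rng = np.random.default_rng(seed)
    d = D.shape[1]
    # Sample utility vectors uniformly from the non-negative part of the unit sphere.
    utils = np.abs(rng.normal(size=(n_utils, d)))
    utils /= np.linalg.norm(utils, axis=1, keepdims=True)

    scores_D = utils @ D.T                            # (n_utils, |D|)
    scores_R = utils @ R.T                            # (n_utils, |R|)
    kth_best_D = np.sort(scores_D, axis=1)[:, -k]     # k-th highest score in D
    best_R = scores_R.max(axis=1)                     # best score achievable in R

    regret = np.maximum(0.0, kth_best_D - best_R) / kth_best_D
    return regret.max()

# Example: 1000 random points in 4 dimensions, a random subset of size 10.
D = np.random.rand(1000, 4)
R = D[np.random.choice(len(D), 10, replace=False)]
print(max_k_regret_ratio(D, R, k=5))
```

In this sketch, a regret ratio of x corresponds to the x% tolerance discussed in the citation statements below.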

Cited by 56 publications (105 citation statements). References 26 publications.
“…Nanongkai et al. [17] suggested the 1-regret minimizing set approach: users can have different preference functions, but, for a given tolerance of x%, it is desired to compute a subset of objects such that, for every user, at least one object in the subset is within x% of her top choice. This was later generalized to the k-regret minimizing set [11], where the preference score of the user's top-k choice is compared with the best in the subset. Following the definition of the k-regret minimizing set in [11], we propose faster approximation algorithms for finding small representative sets.…”
Section: Introduction
confidence: 99%
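The x% tolerance mentioned in this excerpt corresponds to the (k-)regret ratio; the following is a sketch of the definition as it is commonly stated in this line of work, written here for a utility function f (the notation is assumed, not quoted from the citing paper):

```latex
% k-regret ratio of a subset R of D for a utility function f:
% f_{(k)}(D) denotes the k-th highest score f(p) over points p in D,
% and f_{(1)}(R) the highest score over points p in R.
\[
  \mathrm{regret}_k(R, f) \;=\;
  \frac{\max\bigl(0,\; f_{(k)}(D) - f_{(1)}(R)\bigr)}{f_{(k)}(D)},
  \qquad
  \mathrm{regret}_k(R) \;=\; \sup_{f} \mathrm{regret}_k(R, f),
\]
% where the supremum ranges over the class of (typically non-negative linear)
% utility functions under consideration.
```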
“…The dimensionality of our synthetic dataset varies in the range [5, 22], with the default being 22-dimensional data. The values in each dimension are generated according to a Zipf distribution, for which the skewness parameter is set in the range 0.0 (i.e., uniform) to 0.9 (skewed).…”
Section: Experimental Testbed
confidence: 99%
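A minimal sketch of how such Zipf-skewed synthetic data could be generated, assuming values in each dimension are drawn from a discrete distribution with probability proportional to 1/rank^theta (theta = 0 yields the uniform case); the generator below is illustrative and not the citing paper's code:

```python
import numpy as np

def zipf_column(n_rows, n_values=1000, theta=0.9, rng=None):
    """Draw n_rows values from {1, ..., n_values} with P(rank i) ~ 1 / i**theta.
    theta = 0.0 gives a uniform distribution; larger theta gives more skew."""
    rng = rng or np.random.default_rng()
    ranks = np.arange(1, n_values + 1)
    probs = 1.0 / ranks**theta
    probs /= probs.sum()
    return rng.choice(ranks, size=n_rows, p=probs)

def synthetic_dataset(n_rows=10000, dims=22, theta=0.9, seed=0):
    """Generate an n_rows x dims dataset with Zipf-skewed values per dimension."""
    rng = np.random.default_rng(seed)
    return np.column_stack([zipf_column(n_rows, theta=theta, rng=rng)
                            for _ in range(dims)])

data = synthetic_dataset(n_rows=1000, dims=22, theta=0.9)
print(data.shape)  # (1000, 22)
```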
“…Generating representative data is one such technique that aims to provide a meaningful summary of a potentially large query answer (e.g., [20,31,66,87,97]). The regret minimizing set was proposed as a practical alternative for both queries [66].…”
Section: Chapter 6 Diversity With Few Regrets
confidence: 99%