2008
DOI: 10.1016/j.ipl.2008.03.007
On the random generation of monotone data sets

Cited by 10 publications (5 citation statements)
References 17 publications (20 reference statements)
“…As an alternative to restricting oneself to algorithms that are able to process partially non-monotone data sets [10,11] when faced with noise, monotone relabeled real-life data sets allow the use of any algorithm. Another area of application for monotone data sets is the formulation of monotone benchmark data sets, where monotone (relabeled if needed) data sets could be of more interest than truly random monotone data sets [8].…”
Section: Problem Setting
confidence: 99%
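The statements above hinge on what it means for a labeled data set to be monotone: whenever one instance dominates another componentwise, its (ordinal) label may not be smaller. A minimal sketch of that check, with illustrative function names of my own choosing:

```python
import numpy as np

def dominates(x, y):
    """Componentwise order: x <= y in every feature."""
    return np.all(x <= y)

def is_monotone(X, labels):
    """True iff x_i <= x_j componentwise implies labels[i] <= labels[j]."""
    n = len(X)
    for i in range(n):
        for j in range(n):
            if dominates(X[i], X[j]) and labels[i] > labels[j]:
                return False
    return True

# Example: a 2-feature data set with ordinal labels.
X = np.array([[1, 1], [1, 2], [2, 2]])
print(is_monotone(X, [0, 1, 1]))   # consistent labeling
print(is_monotone(X, [1, 0, 1]))   # [1,1] <= [1,2] but the label drops
```

Relabeling, as discussed in the quote, means changing as few labels as possible so that this predicate becomes true.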
“…[16] Our algorithm begins with the generation of either a purely monotone or an almost purely monotone dataset (depending on the version, to be explained shortly). In other words, only the first phase of our algorithm has a goal similar to that defined in De Loof et al. [15] and Potharst et al. [16] Like Potharst et al., [16] our algorithm does generate a monotone pattern as the basis of the computation. However, our algorithm differs significantly from these two approaches in that: (a) the resulting dataset in our approach contains a user-defined level of noise, and (b) our algorithm does not use class relabeling.…”
Section: Related Work
confidence: 99%
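The approach the citing authors describe — label points with a fixed monotone function and then inject a user-defined level of noise — can be sketched as follows. This is my own illustrative reconstruction, not their published algorithm; the function name, the choice of the feature sum as the monotone function, and the noise mechanism are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_monotone_dataset(n, n_features, n_classes, noise_level):
    """Sketch: sample points, label them with a fixed monotone function
    (here: the binned feature sum, an illustrative choice), then flip a
    user-defined fraction of labels to inject controlled noise."""
    X = rng.random((n, n_features))
    # Any non-decreasing function of the features is monotone; we use the sum.
    scores = X.sum(axis=1)
    bins = np.quantile(scores, np.linspace(0, 1, n_classes + 1)[1:-1])
    y = np.digitize(scores, bins)  # classes 0 .. n_classes-1
    # Inject noise: relabel a random subset with random classes.
    flip = rng.random(n) < noise_level
    y[flip] = rng.integers(0, n_classes, flip.sum())
    return X, y

X, y = noisy_monotone_dataset(200, 3, 4, noise_level=0.1)
```

The resulting data set is monotone up to the injected noise, and the underlying pattern is known, which is the contrast the quote draws with truly random monotone data sets.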
“…The paper by De Loof et al. [15] is about generating completely random monotone datasets. It uses the computationally intensive Markov chain Monte Carlo method. The datasets generated by this method have no underlying pattern, whereas in our approach the user can embed any monotone function of his or her choice.…”
Section: Related Work
confidence: 99%
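To make the MCMC remark concrete, here is a minimal single-flip Metropolis-style chain over monotone 0/1 labelings of a fixed point set: repeatedly pick a point, propose flipping its label, and accept only if monotonicity is preserved. The proposal is symmetric, so the stationary distribution is uniform over monotone labelings. This is a sketch of the general idea only, not the specific chain of De Loof et al.:

```python
import numpy as np

rng = np.random.default_rng(42)

def mcmc_monotone_labels(X, steps=5000):
    """Run a single-flip Markov chain over monotone 0/1 labelings of X.
    The invariant (the labeling stays monotone after every accepted step)
    holds because each proposal is checked before acceptance."""
    n = len(X)
    # Precompute the componentwise dominance relation once.
    dom = np.array([[np.all(X[i] <= X[j]) for j in range(n)] for i in range(n)])
    y = np.zeros(n, dtype=int)  # the all-zero labeling is monotone
    for _ in range(steps):
        i = rng.integers(n)
        new = 1 - y[i]
        # The flip is valid iff monotonicity holds on all pairs involving i.
        ok = all(
            (not dom[j, i] or y[j] <= new) and (not dom[i, j] or new <= y[j])
            for j in range(n) if j != i
        )
        if ok:
            y[i] = new
    return y
```

After a sufficiently long run the sampled labeling is (approximately) uniform over all monotone labelings, which is what "completely random monotone datasets" with "no underlying pattern" refers to; the price is the long mixing time, i.e. the computational cost the quote mentions.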
“…in machine learning [12,28,43,90] and fuzzy modelling [128,155,156,160,161]. However, real-life data is often imperfect and does not fully comply with the monotonicity hypothesis.…”
Section: Monotonicity of a Representation of Votes
confidence: 99%