Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion 2016
DOI: 10.1145/2818052.2869098
Optil.io: Cloud Based Platform For Solving Optimization Problems Using Crowdsourcing Approach

Abstract: UPDATED January 2, 2016. The main objective of the presented research is to design a platform for the continuous evaluation of optimization algorithms using a crowdsourcing technique. The resulting platform, called Optil.io, runs in the cloud using a platform-as-a-service model and allows researchers from all over the world to collaboratively solve computational problems. This approach has already proved very successful for data mining problems through web services such as Kaggle. During our project we…

Cited by 14 publications (11 citation statements) | References 6 publications

“…), (2) strongly focus on constructive heuristics, which are assumed to have access to the instance data (in contrast to black-box optimization heuristics, which implicitly learn about the problem instance only through the evaluation of potential solutions), or (3) aim to bundle efforts on solving specific real-world problem instances, without the attempt to generate a set of scalable or otherwise generalizable optimization problems. Benchmark competitions and crowd-sourcing platforms such as [41] fall into this latter category.…”
Section: Introduction (mentioning)
confidence: 99%
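
To make the distinction drawn in this excerpt concrete, the following is a minimal sketch (not taken from the cited work) of a black-box heuristic: a (1+1) evolutionary algorithm on bitstrings that learns about the problem instance only through calls to the objective function. The function names, the OneMax objective, and the budget are illustrative assumptions.

    import random

    def one_plus_one_ea(evaluate, n, budget=1000):
        # Black-box (1+1) EA: the optimizer never inspects instance data;
        # it learns about the problem only through calls to evaluate().
        parent = [random.randint(0, 1) for _ in range(n)]
        best = evaluate(parent)
        for _ in range(budget - 1):
            # Standard bit mutation with rate 1/n.
            child = [1 - b if random.random() < 1.0 / n else b for b in parent]
            value = evaluate(child)
            if value >= best:  # maximization; accept ties
                parent, best = child, value
        return parent, best

    # Illustrative use: OneMax, where only fitness values are visible to the optimizer.
    solution, fitness = one_plus_one_ea(lambda x: sum(x), n=20, budget=2000)
    print(fitness)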
“…If at some point during the race some candidate configurations are identified as performing inferiorly to others, they are dropped from the race. The core principle of each race holds some similarities with the principles of horse race algorithm comparisons or more modern judgemental systems for programming competitions (for instance OPTIL.io by Wasik et al, 2016). A race stops once a maximum number of experiments is reached or the number of surviving candidates drops below a pre-specified bound.…”
Section: Automated Generation Of Hybrid Metaheuristics (mentioning)
confidence: 99%
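
As an illustration of the racing principle described in this excerpt, here is a simplified Python sketch. Actual racing procedures such as irace drop candidates based on statistical tests rather than the fixed mean-gap rule used here, and all names, parameters, and the toy cost function are hypothetical.

    import random
    from statistics import mean

    def race(candidates, run_experiment, max_experiments=200, min_survivors=2, drop_gap=0.1):
        # Evaluate all surviving candidates on successive instances and drop those
        # whose mean cost trails the current leader by more than drop_gap.
        results = {c: [] for c in candidates}
        experiments = 0
        while experiments < max_experiments and len(results) > min_survivors:
            instance = random.random()  # stand-in for drawing a benchmark instance
            for c in list(results):
                results[c].append(run_experiment(c, instance))
                experiments += 1
            means = {c: mean(v) for c, v in results.items()}
            leader = min(means.values())  # lower cost is better
            results = {c: v for c, v in results.items() if means[c] - leader <= drop_gap}
        # The race stops once the experiment budget is spent or too few candidates survive.
        return list(results)

    # Hypothetical usage: candidates are parameter values, cost is a noisy distance to 0.5.
    survivors = race([0.1, 0.5, 0.9], lambda c, i: abs(c - 0.5) + random.gauss(0, 0.05))
    print(survivors)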
“…While the Black-box Optimization Benchmarking suite (BBOB, [17]) constitutes an established testing framework for evaluating performance of continuous optimizers, the discrete domain, on the other hand, has not had the benefit of an equivalent suite. Attempts to establish such an environment have lately become prominent [12, 23-25]. Wherever performance comparisons are sought, based on empirical data, they call for statistical assessments, to evaluate whether the observed performance gaps can be supported by an appropriate estimator for the true, underlying performance distribution, i.e., a distribution which assigns a probability to each possible result of the algorithms.…”
Section: Introduction (mentioning)
confidence: 99%
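
As a purely illustrative example of the kind of statistical assessment mentioned in this excerpt, one can compare the best-found values of two optimizers over repeated runs with a non-parametric test. The run data below are made-up placeholders, and the choice of the Mann-Whitney U test is an assumption rather than something prescribed by the cited suite.

    from scipy.stats import mannwhitneyu

    # Hypothetical best-found objective values (lower is better) from repeated
    # independent runs of two optimizers on the same benchmark problem.
    runs_a = [12.1, 11.8, 12.5, 11.9, 12.3, 12.0, 12.2, 11.7]
    runs_b = [12.6, 12.4, 12.9, 12.8, 12.3, 12.7, 13.0, 12.5]

    # Two-sided non-parametric test: is the observed performance gap consistent
    # with a genuine difference in the underlying performance distributions?
    statistic, p_value = mannwhitneyu(runs_a, runs_b, alternative="two-sided")
    print(f"U = {statistic:.1f}, p = {p_value:.4f}")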