Eleventh Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2003. Proceedings. 2003
DOI: 10.1109/empdp.2003.1183620
Algon: a framework for supporting comparison of distributed algorithm performance

Cited by 2 publications (3 citation statements); references 32 publications.
“…The Algon concept and its associated design pattern was first proposed in [3]. It has since been successfully implemented, and a performance comparison tool has been developed [23]. In this paper we explain how the Algon framework operates, how the programmer can use it to interchange and compare algorithms, and how this concept can be extended so that the existing Java-RMI middleware can be dynamically replaced by CORBA.…”
Section: Introduction
confidence: 98%
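The passage above describes Algon's central idea: letting the programmer interchange and compare algorithms behind a stable interface. As a hedged illustration only (this is not Algon's actual API; every name below is hypothetical), pluggable algorithms of this kind can be sketched in Java with a shared interface and a small harness:

```java
// Illustrative sketch of interchangeable distributed algorithms behind one
// interface, in the spirit of the Algon design pattern. All names here are
// hypothetical; Algon's real classes and methods may differ.
interface ElectionAlgorithm {
    String name();
    // Returns the id of the elected leader among the given node ids.
    int elect(int[] nodeIds);
}

// One concrete strategy: a bully-style election where the highest id wins.
class BullyElection implements ElectionAlgorithm {
    public String name() { return "bully"; }
    public int elect(int[] nodeIds) {
        int max = nodeIds[0];
        for (int id : nodeIds) {
            if (id > max) max = id;
        }
        return max;
    }
}

public class AlgorithmHarness {
    // The harness depends only on the interface, so algorithms can be
    // swapped without touching the calling code.
    public static int run(ElectionAlgorithm algo, int[] nodeIds) {
        return algo.elect(nodeIds);
    }

    public static void main(String[] args) {
        int[] nodes = {3, 7, 5};
        System.out.println(run(new BullyElection(), nodes)); // prints 7
    }
}
```

Because the harness is written against `ElectionAlgorithm` rather than a concrete class, a second implementation (say, a ring-based election) could be dropped in and timed under identical conditions, which is the kind of like-for-like comparison the quoted papers attribute to Algon.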
“…Renaud, Bishop, Lo, van Zyl and Worrall (2003) reported on the work of Hsier and Sivakumar (2001) and of Shousha, Petriu, Jalnapurkar and Ngo (1998), stating that the measurement of software performance by and for experts is a well-known task. Drawing on other prior work, Renaud, et al (2003) recalled that various metrics can be used to measure the performance of algorithms in distributed systems, namely: response or waiting time, synchronisation delay, number of messages exchanged, throughput, communication delay, node fairness, CPU cycle usage, and memory usage.…”
Section: Performance Latency and Throughput
confidence: 99%
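The metrics listed above (response time, messages exchanged, and so on) can be captured with lightweight instrumentation around an algorithm run. The following is a minimal sketch under stated assumptions: `MetricsProbe` is a hypothetical helper invented for this example, not a class from Algon or any cited work.

```java
// Hypothetical instrumentation sketch for two of the metrics named in the
// text: response time and number of messages exchanged. Not Algon's API.
public class MetricsProbe {
    private long startNanos;
    private long messageCount;

    // Mark the start of an algorithm run and reset the message counter.
    public void start() {
        startNanos = System.nanoTime();
        messageCount = 0;
    }

    // Call once per message the algorithm sends or receives.
    public void recordMessage() {
        messageCount++;
    }

    // Elapsed wall-clock time since start(), in milliseconds.
    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000;
    }

    public long messages() {
        return messageCount;
    }

    public static void main(String[] args) {
        MetricsProbe probe = new MetricsProbe();
        probe.start();
        for (int i = 0; i < 5; i++) {
            probe.recordMessage(); // stand-in for real message traffic
        }
        System.out.println("messages=" + probe.messages()
                + " elapsedMs=" + probe.elapsedMillis());
    }
}
```

Wrapping each competing algorithm with the same probe is what makes the first four metrics directly comparable across runs, whereas metrics such as communication delay (the fifth) also depend on network load, as the next quotation notes.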
“…The first four metrics were most suited to specifically measuring algorithm performance. The fifth metric is more dependent on network load than on any specific algorithm, the sixth is difficult to quantify, and the seventh and eighth produce measurements of debatable merit in judging algorithm efficacy (Renaud, et al, 2003).…”
Section: Framework for Performance Comparisons
confidence: 99%