It is well known that the performance of quicksort can be improved by selecting the median of a sample of elements as the pivot of each partitioning stage. For large samples the partitions are better, but the number of additional comparisons and exchanges needed to find the median of the sample also increases. We show in this paper that the optimal sample size to minimize the average total cost of quicksort, as a function of the size n of the current subarray, is a·√n + o(√n). We give a closed-form expression for a, which depends on the selection algorithm and the costs of elementary comparisons and exchanges. Moreover, we show that selecting the medians of the samples as pivots is not the best strategy when exchanges are much more expensive than comparisons. We also apply the same ideas and techniques to the analysis of quickselect and obtain similar results.
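The strategy described above can be sketched in code. The following is an illustrative Python implementation of quicksort that pivots on the median of a random sample of size roughly √n; the small-subarray cutoff and the exact sample size are arbitrary choices for the sketch, not the optimal constant a derived in the paper.

```python
import math
import random

def quicksort_sampled(a, lo=0, hi=None):
    """Sort a[lo..hi] in place, pivoting on the median of a ~sqrt(n) sample."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        n = hi - lo + 1
        if n < 16:
            # Plain insertion sort for tiny subarrays.
            for i in range(lo + 1, hi + 1):
                x, j = a[i], i - 1
                while j >= lo and a[j] > x:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = x
            return
        # Odd sample size proportional to sqrt(n); take its median as pivot.
        s = max(3, math.isqrt(n) | 1)
        sample = random.sample(range(lo, hi + 1), s)
        pivot = sorted(a[i] for i in sample)[s // 2]
        # Hoare-style partition around the pivot value.
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        # Recurse on the smaller side, iterate on the larger (bounded stack).
        if j - lo < hi - i:
            quicksort_sampled(a, lo, j)
            lo = i
        else:
            quicksort_sampled(a, i, hi)
            hi = j
```

Larger samples yield better-balanced partitions at the price of the median-selection work, which is exactly the trade-off the abstract quantifies.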
This paper presents new theorems to analyze divide-and-conquer recurrences, which improve on similar existing theorems in several respects. In particular, these theorems provide more information, free us almost completely from technicalities like floors and ceilings, and cover a wider set of toll functions and weight distributions, stochastic recurrences included.
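To illustrate the kind of recurrence such theorems address (this is a generic numerical check, not the paper's theorems), consider T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + n with toll function n, whose solution grows like n log₂ n; note how the floors and ceilings in the exact recurrence do not affect the leading term.

```python
import math

def T(n, memo={1: 0}):
    # Exact divide-and-conquer recurrence with toll function n:
    # T(n) = T(floor(n/2)) + T(ceil(n/2)) + n, with T(1) = 0.
    if n not in memo:
        memo[n] = T(n // 2) + T((n + 1) // 2) + n
    return memo[n]

# The ratio T(n) / (n log2 n) tends to 1; for powers of two it is exact.
for n in (2**10, 2**14, 2**18):
    print(n, T(n) / (n * math.log2(n)))
```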
In this paper, we present randomized algorithms over binary search trees such that: (a) the insertion of a set of keys, in any fixed order, into an initially empty tree always produces a random binary search tree; (b) the deletion of any key from a random binary search tree results in a random binary search tree; (c) the random choices made by the algorithms are based upon the sizes of the subtrees of the tree; this implies that we can support accesses by rank without additional storage requirements or modification of the data structures; and (d) the cost of any elementary operation, measured as the number of visited nodes, is the same as the expected cost of its standard deterministic counterpart; hence, all search and update operations have guaranteed expected cost O(log n), but now irrespective of any assumption on the input distribution.
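The subtree-size-based random choice of point (c) can be sketched as follows: when inserting into a subtree of size n, the new key becomes the root of that subtree with probability 1/(n+1) (via a split), and otherwise insertion recurses into the appropriate child. This is an illustrative sketch of that idea, not a transcription of the paper's algorithms, and it assumes keys are distinct.

```python
import random

class Node:
    __slots__ = ("key", "left", "right", "size")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.size = 1 + size(left) + size(right)

def size(t):
    return t.size if t else 0

def split(t, x):
    """Partition tree t into (keys < x, keys > x); assumes x is not in t."""
    if t is None:
        return None, None
    if x < t.key:
        l, r = split(t.left, x)
        t.left = r
        t.size = 1 + size(t.left) + size(t.right)
        return l, t
    l, r = split(t.right, x)
    t.right = l
    t.size = 1 + size(t.left) + size(t.right)
    return t, r

def insert(t, x):
    # With probability 1/(size(t)+1), x becomes the root of this subtree
    # (via split); otherwise recurse. The choice depends only on subtree
    # sizes, which also enables rank-based access for free.
    n = size(t)
    if random.randrange(n + 1) == 0:
        l, r = split(t, x)
        return Node(x, l, r)
    if x < t.key:
        t.left = insert(t.left, x)
    else:
        t.right = insert(t.right, x)
    t.size += 1
    return t
```

Because every subtree already stores its size for the random choices, selecting the k-th smallest key costs no extra space: compare k against size(left) and descend accordingly.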
We investigate voting systems with two classes of voters, for which there is a hierarchy giving each member of the stronger class more influence or importance than each member of the weaker class. We deduce a counting result that determines how many such voting systems there are for a given number of voters. In fact, the number of these systems follows a Fibonacci sequence with a smooth polynomial variation in the number of voters. On the other hand, we classify, by means of some parameters, which of these systems are weighted. This result allows us to state an asymptotic conjecture that contrasts with what occurs for symmetric games.
We consider the list access problem and show that one questionable assumption in the original cost model presented by Sleator and Tarjan (1985), and in the subsequent literature, underlies several competitiveness results for the move-to-front rule (MTF). We present an off-line algorithm for the list access problem and prove that, under a more realistic cost model, no on-line algorithm can be c-competitive for any constant c, MTF included.
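For reference, the move-to-front rule under the classical cost model reads as follows: accessing the item at position i (counting from 1) costs i, and moving the accessed item to the front afterwards is free. The sketch below implements that rule and model; it is not the paper's off-line algorithm, only the baseline the abstract critiques.

```python
def mtf_cost(requests, universe):
    """Serve requests with move-to-front; access at position i costs i.

    Moving the accessed item to the front is free, as in the classical
    Sleator-Tarjan model. Returns the final list order and total cost.
    """
    lst = list(universe)
    total = 0
    for x in requests:
        i = lst.index(x)           # 0-based position of the requested item
        total += i + 1             # access cost = 1-based position
        lst.insert(0, lst.pop(i))  # move to front at no extra charge
    return lst, total
```

For example, serving the request sequence "a, b, a, b" on the initial list [a, b, c] costs 1 + 2 + 2 + 2 = 7 under this model; charging realistically for the rearrangement itself is where the competitiveness results break down.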