Large surveys using multiobject spectrographs require automated methods for deciding how to efficiently point observations and how to assign targets to each pointing. The Sloan Digital Sky Survey (SDSS) will observe around 10^6 spectra from targets distributed over an area of about 10,000 deg², using a multiobject fiber spectrograph that can simultaneously observe 640 objects in a circular field of view (referred to as a "tile") 1.49° in radius. No two fibers can be placed closer than 55″ during the same observation; multiple targets closer than this distance are said to "collide." We present here a method of allocating fibers to desired targets given a set of tile centers that includes the effects of collisions and that is nearly optimally efficient and uniform. Because of large-scale structure in the distribution of galaxies (which form the bulk of the SDSS targets), a naive covering of the sky with equally spaced tiles does not yield uniform sampling. Thus, we present a heuristic for perturbing the centers of the tiles from the equally spaced distribution that provides more uniform completeness. For the SDSS sample, we can attain a sampling rate of greater than 92% for all targets, and greater than 99% for the set of targets that do not collide with each other, with an efficiency greater than 90% (defined as the fraction of available fibers assigned to targets). The methods used here may prove useful to those planning other large surveys.
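As a rough, purely illustrative sketch of the allocation problem described above, the following Python snippet greedily assigns targets to tiles while enforcing the 55″ fiber-collision limit and the 640-fiber budget per tile. The function names, the greedy strategy, and the use of NumPy are assumptions made here for illustration; this is not the nearly optimal allocation method presented in the paper.

```python
import numpy as np

FIBERS_PER_TILE = 640        # fibers available on one tile
TILE_RADIUS_DEG = 1.49       # tile radius in degrees
COLLISION_DEG = 55.0 / 3600  # 55 arcsec collision limit, in degrees

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    c = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def assign_fibers(targets, tile_centers):
    """Greedy toy allocation: targets and tile_centers are lists of (ra, dec) in degrees.

    Returns a dict mapping tile index -> list of target indices given a fiber.
    """
    taken = set()                                   # targets already assigned a fiber
    allocations = {t: [] for t in range(len(tile_centers))}
    for t, (tra, tdec) in enumerate(tile_centers):
        placed = allocations[t]
        for i, (ra, dec) in enumerate(targets):
            if i in taken or len(placed) >= FIBERS_PER_TILE:
                continue
            if angular_sep_deg(ra, dec, tra, tdec) > TILE_RADIUS_DEG:
                continue                            # target lies outside this tile
            # Enforce the 55" collision limit against targets already on this tile.
            if any(angular_sep_deg(ra, dec, *targets[j]) < COLLISION_DEG for j in placed):
                continue
            placed.append(i)
            taken.add(i)
    return allocations
```

A real tiling code would also revisit collided targets on overlapping tiles; the sketch only shows where the two constraints enter.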
We give a simple algorithm to find a spanning tree that simultaneously approximates a shortest-path tree and a minimum spanning tree. The algorithm provides a continuous tradeoff: given the two trees and a γ > 0, the algorithm returns a spanning tree in which the distance between any vertex and the root of the shortest-path tree is at most 1 + √2·γ times the shortest-path distance, and yet the total weight of the tree is at most 1 + √2/γ times the weight of a minimum spanning tree. Our algorithm runs in linear time and obtains the best-possible tradeoff. It can be implemented on a CREW PRAM to run in logarithmic time using one processor per vertex.
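Written out in symbols, the tradeoff stated in the abstract is the following (this is only a restatement, with r the root, d_G the shortest-path distance in the graph, d_T the distance in the returned tree T, and w(·) the total edge weight):

```latex
\forall v:\quad d_T(r, v) \;\le\; \bigl(1 + \sqrt{2}\,\gamma\bigr)\, d_G(r, v),
\qquad\text{and}\qquad
w(T) \;\le\; \Bigl(1 + \tfrac{\sqrt{2}}{\gamma}\Bigr)\, w(\mathrm{MST}).
```

Small γ favors short root-to-vertex distances; large γ favors a light tree.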
The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the marking algorithm, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of 2H_k of optimum (where H_k is the kth harmonic number, which is roughly ln k). The best such factor that can be achieved is H_k. This is in contrast to deterministic algorithms, which cannot be guaranteed to be within a factor smaller than k of optimum. An alternative to comparing an on-line algorithm with the optimum off-line algorithm is the idea of comparing it to several other on-line algorithms. We have obtained results along these lines for the paging problem. Given a set of on-line algorithms …
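The marking algorithm is simple enough to sketch directly. The Python below follows the standard description (mark every requested page; when a fault finds all cached pages marked, start a new phase by unmarking everything; evict a uniformly random unmarked page). Class and method names are illustrative.

```python
import random

class MarkingCache:
    """Randomized marking algorithm for paging (illustrative sketch)."""

    def __init__(self, k):
        self.k = k            # number of page slots in fast memory
        self.cache = set()    # pages currently in fast memory
        self.marked = set()   # pages marked during the current phase
        self.faults = 0

    def request(self, page):
        if page not in self.cache:
            self.faults += 1
            if len(self.cache) == self.k:
                if self.marked >= self.cache:
                    # Every cached page is marked: a new phase begins.
                    self.marked.clear()
                # Evict a uniformly random unmarked page.
                victim = random.choice(list(self.cache - self.marked))
                self.cache.remove(victim)
            self.cache.add(page)
        # Every requested page stays marked for the rest of the phase.
        self.marked.add(page)
```

Within a phase no marked page is ever evicted, which is the property the 2H_k analysis relies on.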
We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (a.k.a. mixed packing and covering problems). For …

Background. Packing and covering problems are problems that can be formulated as linear programs using only non-negative coefficients and non-negative variables. Special cases include pure packing problems, which are of the form max{c·x : Ax ≤ b}, and pure covering problems, which are of the form min{c·x : Ax ≥ b} (where A, b, c, and x are non-negative).

Lagrangian-relaxation algorithms are based on the following basic idea. Given an optimization problem specified as a collection of constraints, modify the problem by selecting some of the constraints and replacing them by a continuous "penalty" function that, given a partial solution x, measures how close x is to violating the removed constraints. Construct a solution iteratively in small steps, making each choice to maintain the remaining constraints while minimizing the increase in the penalty function.

While Lagrangian-relaxation algorithms have the disadvantage of producing only approximately optimal (or approximately feasible) solutions, the algorithms have the following potential advantages in comparison to the simplex, interior-point, and ellipsoid methods. They can be faster, easier to implement, and easier to parallelize. They can be particularly useful for problems that are sparse, or that have exponentially many variables or constraints (but still have some polynomial-size representation).

Lagrangian relaxation was one of the first methods proposed for solving linear programs: as early as the 1950s, John von Neumann apparently proposed and analyzed an O(m²n log(mn)/ε²)-time Lagrangian-relaxation algorithm for solving two-person zero-sum matrix games (equivalent to pure packing or covering) [18]. The algorithm returned a solution with additive error ε, assuming the matrix was scaled to lie between 0 and 1. In 1950, Brown and von Neumann also proposed a system of differential equations that converged to an optimal solution, with the suggestion that the equations could form the basis of an algorithm [3].

Subsequent examples include a multicommodity flow algorithm by Ford and Fulkerson (1958), Dantzig-Wolfe decomposition (1960), Benders' decomposition (1962), and Held and Karp's lower bound for the traveling salesman problem (1971). In 1990, Shahrokhi and Matula proved polynomial-time convergence rates for a Lagrangian-relaxation algorithm for multicommodity flow. This caught the attention of the theoretical computer science research community, which has since produced a large body of research on the subject. Klein et al. [15] and Leighton et al. [19] (and many others) gave additional multicommodity flow results. Plotkin, Shmoys, and Tardos [21] and Grigoriadis and Khachiyan [10,11,8,9] adapted the techniques to the general class of packing/covering problems, including mixed packing and covering problems. These algorithms' running times depended linearly on the width, an unbounded function of the input instance. Relatively complicated techniques were de…
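To make the penalty-function idea above concrete, here is a toy Python sketch for a pure packing problem (maximize the sum of the variables subject to Ax ≤ 1, x ≥ 0, with A non-negative), using an exponential penalty on the constraint loads. It is a minimal illustration of the general Lagrangian-relaxation recipe, not any of the algorithms from the paper, and it assumes every column of A has at least one positive entry.

```python
import numpy as np

def greedy_pack(A, eps=0.1):
    """Toy Lagrangian-relaxation sketch: approximately maximize sum(x)
    subject to A @ x <= 1 and x >= 0, for a non-negative matrix A.
    Assumes every column of A has at least one positive entry."""
    m, n = A.shape
    step = eps / A.max()            # small enough that loads grow gradually
    x = np.zeros(n)
    load = A @ x                    # current load on each packing constraint
    while load.max() < 1.0:
        # Exponential penalty weights: nearly tight constraints dominate.
        y = np.exp(load / eps)
        # Increment the variable that increases the penalties the least.
        j = int(np.argmin(A.T @ y))
        x[j] += step
        load += step * A[:, j]
    # Scale back so every packing constraint is satisfied.
    return x / load.max()

# Example: two constraints, three variables.
A = np.array([[1.0, 0.5, 0.2],
              [0.2, 0.5, 1.0]])
print(greedy_pack(A))
```

The exponential weights play the role of the continuous penalty function; each small step trades a unit of objective against the current penalties.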
Weighted caching is a generalization of paging in which the cost to evict an item depends on the item. We give two results concerning strategies for these problems that incur a cost within a factor of the minimum possible on each input.

We explore the linear programming structure of the more general k-server problem. We obtain the surprising insight that the well-known "least recently used" and "balance" algorithms are primal-dual algorithms. We generalize them both, obtaining a single k/(k−h+1)-competitive, primal-dual strategy for weighted caching.

We introduce loose competitiveness, motivated by Sleator and Tarjan's complaint [ST85] that the standard competitive ratios for paging strategies are too high. A k-server strategy is loosely c(k)-competitive if, for any sequence, for almost all k, the cost incurred by the strategy with k servers either is no more than c(k) times the minimum cost or is insignificant. We show that k-competitive paging strategies including "least recently used" and "first in first out" are loosely c(k)-competitive provided c(k)/ln k → ∞. We show that the (2 ln k)-competitive, randomized "marking algorithm" of Fiat et al. [FKL+91] is loosely c(k)-competitive provided c(k) − 2 ln ln k → ∞.
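One concrete way to picture a primal-dual style weighted-caching strategy is as a credit scheme: each cached item carries a credit that starts at its cost, a fault drains the same amount of credit from every cached item, and an item whose credit hits zero becomes the eviction victim. The Python below is an illustrative sketch in that spirit; the class name, the on-hit credit reset, and other details are assumptions made here, not code from the paper.

```python
class CreditCache:
    """Illustrative credit-based (greedy-dual style) weighted caching sketch."""

    def __init__(self, k, cost):
        self.k = k              # cache capacity (number of items)
        self.cost = cost        # cost[item] = cost of fetching/evicting the item
        self.credit = {}        # cached item -> remaining credit
        self.total_cost = 0

    def request(self, item):
        if item in self.credit:
            # Hit: restore the item's credit (one reasonable choice).
            self.credit[item] = self.cost[item]
            return
        self.total_cost += self.cost[item]
        if len(self.credit) >= self.k:
            # Drain every cached item's credit by the minimum remaining credit,
            # then evict some item whose credit has reached zero.
            delta = min(self.credit.values())
            for cached in self.credit:
                self.credit[cached] -= delta
            victim = next(i for i, c in self.credit.items() if c == 0)
            del self.credit[victim]
        self.credit[item] = self.cost[item]
```

Different tie-breaking and on-hit reset rules recover different classical strategies; the sketch fixes one arbitrary choice.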