Numerical Analysis: Historical Developments in the 20th Century (2001)
DOI: 10.1016/b978-0-444-50617-7.50004-5

Approximation in normed linear spaces

Cited by 2 publications (4 citation statements)
References 137 publications
“…Now, for uniformity with the literature, let us map every function of the approximating space into the vector whose i-th component is its value at the i-th sample point, so as to work in a subspace of a finite-dimensional coordinate space instead of the original function space. If A is the matrix whose entries are the sampled values of the basis functions, (1) is mapped accordingly. Thus, if we want to find the optimal approximation of a signal s, we have to find the coefficient vector c that minimizes the value ‖Ac − s‖∞ (2). This problem has been studied extensively in the mathematical and mathematical programming literature (an exhaustive overview can be found in [2]; see [8] for a classic result), and the most recent approach consists in converting it to a linear program with one more dimension than the number of coefficients. Accordingly, let 1 be the vector with all its components equal to one; then, for every fixed c, the value in (2) is given by the smallest t that satisfies −t·1 ≤ Ac − s ≤ t·1, where inequalities between vectors are to be understood, here and in what follows, component by component.…”
Section: IEEE Proof
mentioning
confidence: 99%
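The statement above describes the standard linear-programming reformulation of discrete l∞ (Chebyshev) approximation: minimize t subject to −t·1 ≤ Ac − s ≤ t·1. The sketch below sets up that LP directly; the names A, s, c and t follow the quotation, while the use of scipy's general-purpose solver (instead of the specialized methods surveyed in the cited references) and the example basis are illustrative assumptions.

```python
# Minimal sketch of the LP reformulation of l-infinity approximation quoted
# above: given a sampled signal s in R^N and a matrix A whose columns are the
# sampled basis functions, find c minimizing ||A c - s||_inf.
# A general-purpose LP solver is used purely for illustration.
import numpy as np
from scipy.optimize import linprog


def chebyshev_fit(A, s):
    """Minimize ||A c - s||_inf by solving an LP in the variables (c, t)."""
    N, m = A.shape
    cost = np.zeros(m + 1)
    cost[-1] = 1.0                                # objective: minimize t
    ones = np.ones((N, 1))
    # -t*1 <= A c - s <= t*1, written as two blocks of "<=" constraints.
    A_ub = np.vstack([np.hstack([A, -ones]),      #  A c - t*1 <= s
                      np.hstack([-A, -ones])])    # -A c - t*1 <= -s
    b_ub = np.concatenate([s, -s])
    bounds = [(None, None)] * m + [(0, None)]     # c free, t >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:m], res.x[-1]                   # coefficients, max deviation


if __name__ == "__main__":
    # Example: best l-infinity straight-line fit to perturbed ramp samples.
    x = np.linspace(0.0, 1.0, 50)
    A = np.column_stack([np.ones_like(x), x])     # basis {1, x}
    s = 2.0 * x + 0.5 + 0.05 * np.sin(25 * x)
    c, t = chebyshev_fit(A, s)
    print("coefficients:", c, "max deviation:", t)
```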
“…Here we recall only that the main idea is to construct the convex hull by moving from left to right; at every step the polygon is updated by adding a new point and removing the sides of the polygon that are visible from the entering point. The only basic operation that is required for this algorithm is the evaluation of the order of three generic points in the plane, and it is easy to see that this evaluation is nearly equivalent to the evaluation of a vector product. (See Fig.…”
Section: IEEE Proof
mentioning
confidence: 99%
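The left-to-right construction described in this statement needs only one geometric primitive: the order (orientation) of three points, obtainable as the sign of a 2-D cross product. The following sketch is a generic illustration of that scan for the upper hull, not the cited authors' implementation; the point set in the example is invented.

```python
# Left-to-right (monotone-chain style) construction of an upper convex hull,
# using the orientation of three points as the only geometric primitive.

def cross(o, a, b):
    """z-component of (a - o) x (b - o): >0 left turn, <0 right turn, 0 collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def upper_hull(points):
    """Upper convex hull of a set of 2-D points, scanning from left to right."""
    pts = sorted(points)                 # sort by x, then y
    hull = []
    for p in pts:
        # Remove hull edges that are "visible" from the entering point p,
        # i.e. while the last two hull points and p do not make a right turn.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull


if __name__ == "__main__":
    sample = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0), (2, 5)]
    print(upper_hull(sample))            # [(0, 0), (2, 5), (3, 3), (4, 0)]
```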
“…Given a signal s(x, y) defined over a discrete grid in a rectangular domain D = {(x, y) : x0 ≤ x ≤ x1, y0 ≤ y ≤ y1}, we aim to find a function f, of the form f(x, y) = axy + bx + cy + d, that is a good approximation to s under the l∞ norm. We start by observing that, as the parameters a, b, c and d determine a linear space, finding their optimal values (according to an l∞ criterion) can be reduced to solving a linear program in R^5 ([3,4]). By using recent linear programming techniques (see [5]) one can thus solve the problem in O(n) expected operations, where n is the number of image samples.…”
Section: Stating the Problem
mentioning
confidence: 99%
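As a concrete illustration of this statement, the sketch below assembles the design matrix for f(x, y) = axy + bx + cy + d over a sample grid and solves the resulting linear program in R^5 (the four coefficients plus the error bound). A general-purpose LP solver stands in for the O(n) expected-time methods cited in [5], and the grid size and synthetic data are assumptions made for the example.

```python
# Bilinear l-infinity fit on a grid: minimize max |a*x*y + b*x + c*y + d - s|
# over the image samples, via a linear program in R^5 (a, b, c, d, t).
import numpy as np
from scipy.optimize import linprog


def fit_bilinear_linf(X, Y, S):
    """Return (a, b, c, d, t) minimizing the maximum absolute fitting error."""
    x, y, s = X.ravel(), Y.ravel(), S.ravel()
    A = np.column_stack([x * y, x, y, np.ones_like(x)])    # design matrix, n x 4
    n = A.shape[0]
    cost = np.array([0.0, 0.0, 0.0, 0.0, 1.0])             # minimize t
    ones = np.ones((n, 1))
    A_ub = np.vstack([np.hstack([A, -ones]),               #  A p - t*1 <= s
                      np.hstack([-A, -ones])])             # -A p - t*1 <= -s
    b_ub = np.concatenate([s, -s])
    bounds = [(None, None)] * 4 + [(0, None)]              # coefficients free, t >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x


if __name__ == "__main__":
    # Synthetic example on a small grid.
    X, Y = np.meshgrid(np.arange(16), np.arange(12))
    S = 0.02 * X * Y + 0.3 * X - 0.1 * Y + 5.0 + 0.1 * np.cos(X + Y)
    a, b, c, d, t = fit_bilinear_linf(X, Y, S)
    print(f"a={a:.3f} b={b:.3f} c={c:.3f} d={d:.3f} max error={t:.3f}")
```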