“…As shown in the previous section, the crucial element of our approximation method is an Algorithm A that approximates the gradient ∇G_ν(λ) given in (37). In this section we propose two candidates and discuss their corresponding complexity function C_{Φ,A}.…”
Section: B Gradient Approximation
“…The main difficulty in this approach is obtaining samples {X_i}_{i=1}^n according to the density Q given above, and in particular quantifying the computational complexity of doing so. It is well known that if the density Q has a particular structure, these samples can be drawn efficiently, e.g., in polynomial time if Q is a log-concave density [37]. Identifying assumptions on the channel Φ under which sampling according to Q can be done efficiently is a topic for further research.…”
Section: Second Approach: Importance Sampling
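The excerpt above notes that log-concave densities can be sampled in polynomial time. As an illustration only (not the authors' method, and with a toy Gaussian target standing in for Q), a minimal Metropolis random-walk sampler for a one-dimensional log-density might look as follows:

```python
import math
import random

def sample_log_concave(log_q, x0=0.0, n=1000, step=1.0, burn_in=500, seed=0):
    """Metropolis random walk targeting a density proportional to exp(log_q(x)).

    A minimal sketch: for log-concave densities such chains mix well in
    practice; rigorous polynomial-time guarantees use refined schemes
    (e.g. hit-and-run) and careful step-size tuning.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for i in range(n + burn_in):
        y = x + rng.gauss(0.0, step)  # symmetric Gaussian proposal
        # Accept with probability min(1, Q(y)/Q(x)), computed in log space.
        if math.log(rng.random() + 1e-300) < log_q(y) - log_q(x):
            x = y
        if i >= burn_in:
            samples.append(x)
    return samples

# Toy stand-in for Q: standard normal, log Q(x) = -x^2/2 up to normalization.
samples = sample_log_concave(lambda x: -0.5 * x * x, n=5000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain only needs Q up to its normalizing constant, which is what makes this kind of sampler attractive when Q is defined implicitly through the channel Φ.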
“…Our analysis up to now assumes the availability of exact first-order information; namely, we assumed that the gradients ∇G_ν(λ) and ∇F(λ) are exactly available for any λ. In many cases, however, e.g., in the presence of an additional input cost constraint (Remark 3.13), evaluating these gradients requires solving another auxiliary optimization problem or a multi-dimensional integral (37), which can only be done approximately. This motivates the question of how to solve (34) with inexact first-order information, which has been studied in detail in [32].…”
Section: A Inexact First-order Information
“…We will discuss later, in Remark 4.9, how Assumption 4.2 can be removed at the cost of a computational complexity proportional to ε^{-1} log ε^{-1}, where ε is the preassigned approximation error; i.e., treating ε as a constant, Assumption 4.2 can be satisfied automatically. As detailed in the preceding section and summarized in Algorithm 1, approximating the Holevo capacity requires efficiently evaluating the gradient ∇G_ν(λ), given by (37), for an arbitrary λ ∈ Λ, which involves two integrations over R. Definition 4.3 (Gradient oracle complexity). Given a family of channels {Φ}_{N,M}, the computational complexity for Algorithm A to provide an estimate Ĝ_ν(λ) for any λ ∈ Λ of the form…”
Section: A Computational Complexity
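The integrations over R mentioned in the excerpt can in practice be handled by Gaussian quadrature. As a hedged sketch (the function name and the choice of a Gauss-Hermite rule are mine, not taken from the paper), one such coordinate integral could be approximated as:

```python
import math
import numpy as np

def integrate_over_R(g, n_nodes=40):
    """Approximate the improper integral of g over R with Gauss-Hermite
    quadrature, writing g(x) = [g(x) exp(x^2)] exp(-x^2).

    Accurate when g decays at least as fast as a Gaussian; heavier-tailed
    integrands need a change of variables or a different rule.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    return float(np.sum(weights * g(nodes) * np.exp(nodes ** 2)))

# Sanity check on a known integral: the integral of e^{-x^2} over R is sqrt(pi).
approx = integrate_over_R(lambda x: np.exp(-x ** 2))
```

With a fixed number of nodes the cost of each gradient evaluation stays constant in the error tolerance, which is why deterministic quadrature is a natural first candidate before resorting to sampling.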
“…The second approach invokes a non-trivial sampling method known as importance sampling [35]. Define the function f_λ(x) := tr[Φ(E(x))λ] − H(Φ(E(x))), so that the gradient of G_ν(λ), given in (37), can be expressed as…”
Section: Example 4.13 (Family Of Channels With An Arbitrary Lipschitz...
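The importance-sampling identity behind this expression — rewrite an integral of f_λ against a density as an expectation under a proposal density q, then average the reweighted draws — can be sketched as follows. The Gaussian target and proposal are toy stand-ins, not the densities from the paper:

```python
import math
import random

def importance_sampling_estimate(f, p, sample_q, pdf_q, n=20000, seed=1):
    """Estimate the integral of f(x) p(x) over R by drawing X_i ~ q and
    averaging the reweighted values f(X_i) p(X_i) / q(X_i)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = sample_q(rng)
        acc += f(x) * p(x) / pdf_q(x)  # importance weight p(x)/q(x)
    return acc / n

# Toy stand-ins: integrate x^2 against a standard normal density
# (true value 1) using a wider normal N(0, 2^2) as the proposal q.
p = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
pdf_q = lambda x: math.exp(-x * x / 8.0) / math.sqrt(8.0 * math.pi)
sample_q = lambda rng: rng.gauss(0.0, 2.0)
estimate = importance_sampling_estimate(lambda x: x * x, p, sample_q, pdf_q)
```

The key design choice is a proposal q that is easy to sample from and has heavier tails than the target, so the weights p/q stay bounded and the estimator's variance remains finite.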
We propose an iterative method for approximating the capacity of
classical-quantum channels with a discrete input alphabet and a finite
dimensional output, possibly under additional constraints on the input
distribution. Based on duality of convex programming, we derive explicit upper
and lower bounds for the capacity. To provide an $\varepsilon$-close estimate
to the capacity, the presented algorithm requires $O(\tfrac{(N \vee M) M^3
\log(N)^{1/2}}{\varepsilon})$ operations, where $N$ denotes the input alphabet size and
$M$ the output dimension. We then generalize the method for the task of
approximating the capacity of classical-quantum channels with a bounded
continuous input alphabet and a finite dimensional output. For channels with a
finite dimensional quantum mechanical input and output, the idea of a universal
encoder allows us to approximate the Holevo capacity using the same method. In
particular, we show that the problem of approximating the Holevo capacity can
be reduced to a multidimensional integration problem. For families of quantum
channels fulfilling a certain assumption we show that the complexity to derive
an $\varepsilon$-close solution to the Holevo capacity is subexponential or
even polynomial in the problem size. We provide several examples to illustrate
the performance of the approximation scheme in practice. Comment: 36 pages, 1 figure