2018 IEEE Wireless Communications and Networking Conference (WCNC)
DOI: 10.1109/wcnc.2018.8377199

On the equivalence of double maxima and KL-means for information bottleneck-based source coding

Cited by 9 publications (2018–2023); 5 citation statements, all classified as mentioning. References 16 publications.
“…Details on its convergence are discussed in [3]. In addition to the iterative information bottleneck algorithm from [2], many other information bottleneck algorithms have appeared in the literature (e.g., [57, 58]). These algorithms can be understood as the workhorses of the information bottleneck method, as they can determine the compression mapping p(t|y) for a given joint distribution p(x,y) and a desired cardinality |T|.…”
Section: The Information Bottleneck Methods and Coarsely Quantized In…
Citation type: mentioning (confidence: 99%)
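To make the quoted description concrete, the following is a minimal sketch of an iterative information bottleneck algorithm built from the standard self-consistent equations in the style of Tishby et al.; it assumes a discrete joint distribution p(x,y) given as a NumPy array, and the function name, parameters, and convention (relevant X, observed Y, compressed T) are illustrative assumptions rather than details taken from [2] or [57, 58].

import numpy as np

def iterative_ib(p_xy, n_clusters, beta, n_iter=200, seed=0):
    # Sketch only: p_xy is the (|X|, |Y|) joint distribution of the
    # relevant variable X and the observed variable Y. Returns the soft
    # compression mapping p(t|y) as a (|Y|, n_clusters) array.
    rng = np.random.default_rng(seed)
    eps = 1e-12
    p_y = p_xy.sum(axis=0)                       # marginal p(y)
    p_x_given_y = (p_xy / (p_y + eps)).T         # rows hold p(x|y)

    # random soft initialization of p(t|y)
    p_t_given_y = rng.random((p_y.size, n_clusters))
    p_t_given_y /= p_t_given_y.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_t = p_y @ p_t_given_y                  # cluster prior p(t)
        # backward distributions p(x|t) = sum_y p(t|y) p(y) p(x|y) / p(t)
        p_x_given_t = (p_t_given_y * p_y[:, None]).T @ p_x_given_y
        p_x_given_t /= (p_t[:, None] + eps)

        # KL divergence D(p(x|y) || p(x|t)) for every pair (y, t)
        kl = (p_x_given_y[:, None, :] *
              np.log((p_x_given_y[:, None, :] + eps) /
                     (p_x_given_t[None, :, :] + eps))).sum(axis=2)

        # self-consistent update: p(t|y) proportional to p(t) * exp(-beta * KL)
        log_p = np.log(p_t + eps)[None, :] - beta * kl
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stabilization
        p_t_given_y = np.exp(log_p)
        p_t_given_y /= p_t_given_y.sum(axis=1, keepdims=True)

    return p_t_given_y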
“…In the very important special case of aiming to preserve a maximum desired amount of relevant information I(X;T) (i.e., for a given cardinality |T|), the clustering of Y described by the compression mapping p(t|y) becomes a hard clustering [58]. In this case, it is easy to limit the compression information I(Y;T) by a proper choice of the cardinality of the compressed representation.…”
Section: The Information Bottleneck Methods and Coarsely Quantized In…
Citation type: mentioning (confidence: 99%)
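In that special case, the soft update above collapses into a nearest-cluster rule in Kullback-Leibler divergence. A hypothetical helper sketching this hard (KL-means-style) assignment step, reusing the array conventions of the sketch above:

import numpy as np

def hard_assignment(p_x_given_y, p_x_given_t, eps=1e-12):
    # Map every observation y to the cluster t whose backward
    # distribution p(x|t) is closest to p(x|y) in KL divergence.
    kl = (p_x_given_y[:, None, :] *
          np.log((p_x_given_y[:, None, :] + eps) /
                 (p_x_given_t[None, :, :] + eps))).sum(axis=2)
    return kl.argmin(axis=1)        # deterministic cluster index per y

With a deterministic mapping t = f(y), the compression information satisfies I(Y;T) = H(T) ≤ log2 |T|, which is the sense in which choosing the cardinality of the compressed representation limits the compression information.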
“…The Information Bottleneck (IB) method (Tishby, Pereira, and Bialek 2000; Dimitrov and Miller 2001; Samengo 2002) provides a principled approach to this problem: it compresses the source random variable to keep the information relevant for predicting the target random variable while discarding all irrelevant information. The IB method has been applied to various domains such as classification (Hecht, Noor, and Tishby 2009), clustering (Slonim et al. 2005), coding theory (Hassanpour, Wübben, and Dekorsy 2018; Zeitler et al. 2008), and quantization (Strouse and Schwab 2017; Cheng et al. 2019). Recent research also demonstrates that the IB method can produce well-generalized representations (Shamir, Sabato, and Tishby 2010; Vera, Piantanida, and Vega 2018; Amjad and Geiger 2019) and may be promising for explaining the learning behaviors of neural networks (Tishby and Zaslavsky 2015; Shwartz-Ziv and Tishby 2017; Saxe et al. 2019).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
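For reference, the variational problem these works build on is the information bottleneck Lagrangian of Tishby, Pereira, and Bialek (2000); in the quote's notation, with source X, target Y, and compressed representation T,

\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y),

where the multiplier β ≥ 0 trades the compression cost I(X;T) against the preserved relevant information I(T;Y).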
“…We seek a coding scheme that generates a code from the output of the channel X which is as informative as possible about the original source signal Y and can be transmitted at a small rate. This problem is therefore equivalent to the formulation of the information bottleneck [11].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
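In constrained form, and writing T for the code generated from the channel output X, this equivalence can be restated as the rate-limited problem (a standard reformulation, not a quote from [11])

\max_{p(t \mid x)} \; I(T;Y) \quad \text{subject to} \quad I(X;T) \le R,

so that the code stays as informative as possible about Y while its rate, bounded through I(X;T), remains small.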