2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7952457

Feature encoding in band-limited distributed surveillance systems

Abstract: Distributed surveillance systems have become popular in recent years due to security concerns. However, transmitting high dimensional data in bandwidth-limited distributed systems becomes a major challenge. In this paper, we address this issue by proposing a novel probabilistic algorithm based on the divergence between the probability distributions of the visual features in order to reduce their dimensionality and thus save the network bandwidth in distributed wireless smart camera networks. We demonstrate the…

Cited by 11 publications (7 citation statements)
References 27 publications
“…It includes various popular matrix norms as special cases such as the Frobenius norm (p = q = 2), the max norm (p = q = ∞), and the 1-norm (p = 1, q = ∞). This class has had numerous applications in machine learning, statistics, and signal processing (Kowalski, 2009; Ding et al., 2006; Kong et al., 2011; Nie et al., 2010; Zhaoshui and Cichocki, 2008; Rahimpour et al., 2017; Kashlak and Kong, 2021; Cai et al., 2011). Moreover, unlike some other matrix norm classes (like Schatten or induced norm classes) the entrywise L(p, q) class is quite interpretable; for instance, the L(1, 1) loss simply sums up the absolute differences between the overall scores given by reviewers and those given by the function f.…”
Section: Loss Functions
confidence: 99%
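The excerpt above names the entrywise L(p, q) family and its special cases. A minimal NumPy sketch of that definition, assuming the common convention of an inner l_p norm over columns followed by an outer l_q norm (the helper name and axis ordering are assumptions for illustration; conventions vary across papers):

```python
import numpy as np

def entrywise_lpq_norm(X, p, q):
    """Entrywise L(p, q) norm: take the l_p norm of each column,
    then the l_q norm of the resulting vector of column norms."""
    col_norms = np.linalg.norm(X, ord=p, axis=0)
    return np.linalg.norm(col_norms, ord=q)

X = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# p = q = 2 recovers the Frobenius norm
assert np.isclose(entrywise_lpq_norm(X, 2, 2), np.linalg.norm(X, 'fro'))
# p = q = inf recovers the max (entrywise) norm
assert np.isclose(entrywise_lpq_norm(X, np.inf, np.inf), np.abs(X).max())
# p = 1, q = inf recovers the induced 1-norm (max column abs sum)
assert np.isclose(entrywise_lpq_norm(X, 1, np.inf), np.linalg.norm(X, 1))
```

The assertions check the three special cases listed in the excerpt against NumPy's built-in norms.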
“…We consider an "entrywise norm" of the type l_{p,q} for some p, q ≥ 1 defined for a matrix X as (see [37]):…”
Section: Norms and Errors
confidence: 99%
“…Representation learning is key to computer vision tasks. Recently, with the explosion of data availability, it is crucial for the representation to be computationally efficient as well [1,2,3]. Consequently, learning high-quality binary representations is tempting due to their compactness and representation capacity.…”
Section: Introduction
confidence: 99%