2022
DOI: 10.1109/access.2022.3147670

NSVQ: Noise Substitution in Vector Quantization for Machine Learning

Abstract: Machine learning algorithms have been shown to be highly effective in solving optimization problems in a wide range of applications. Such algorithms typically use gradient descent with backpropagation and the chain rule. Backpropagation therefore fails if an intermediate gradient in the computational graph is zero, because multiplying by zero collapses the downstream gradients. Vector quantization is one such challenging function for machine learning algorithms, since it is a pi…
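The failure mode the abstract refers to can be seen directly: hard assignment to the nearest codebook vector is piecewise constant, so no gradient reaches the input. A minimal PyTorch illustration (the tensors and shapes here are invented for demonstration, not taken from the paper):

    import torch

    x = torch.randn(4, 8, requires_grad=True)        # toy encoder outputs
    codebook = torch.randn(16, 8, requires_grad=True)

    idx = torch.cdist(x, codebook).argmin(dim=1)     # integer indices: non-differentiable
    xq = codebook[idx]                               # piecewise constant as a function of x

    xq.sum().backward()
    print(x.grad)          # None: no gradient path through the argmin
    print(codebook.grad)   # nonzero only for the selected rows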

Cited by 8 publications (6 citation statements)
References: 28 publications

“…These methods are covered briefly below. Some studies focused on the learning of the codebook of VQ using machine learning [4] and convolutional neural networks (CNN) [5]. Another approach reported the usage of evolutionary algorithms (EA) [6][7][8] for better codebook design.…”
Section: Literature Review (mentioning)
confidence: 99%
“…The vector quantization (VQ) algorithm [1][2][3][4][5] is a lossy image compression technique. It divides an image into image blocks and each such image block forms a training vector.…”
Section: Introduction (mentioning)
confidence: 99%
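As context for the statement above, a small NumPy sketch of block-based VQ image compression (the block size, function name, and grayscale assumption are illustrative, not from the cited paper): each non-overlapping patch is flattened into a vector and encoded as the index of its nearest codebook entry.

    import numpy as np

    def vq_compress(image, codebook, block=4):
        # image: 2-D grayscale array; codebook: (K, block*block) code vectors.
        # Split the image into non-overlapping block x block patches, flatten
        # each patch into a vector, and store only the index of its nearest
        # codebook vector -- the lossy step of VQ compression.
        h, w = image.shape
        patches = (image[:h - h % block, :w - w % block]
                   .reshape(h // block, block, w // block, block)
                   .swapaxes(1, 2)
                   .reshape(-1, block * block))
        d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)   # one small integer per patch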
“…This simultaneously helped stabilise training and reduced the required loss functions to two. Vali et al [47] then reduced the required losses to one by introducing the NSVQ technique. Here the vector quantization error is approximated by substituting it for a product of the original error and a normalised noise vector.…”
Section: Vector Quantized Models (mentioning)
confidence: 99%
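A minimal PyTorch sketch of the substitution this statement describes, assuming Euclidean nearest-neighbour assignment (function and variable names are illustrative, not the authors' reference code): the quantization error xq − x is swapped for a random unit vector rescaled to the same norm, which keeps the output differentiable with respect to both the input and the codebook.

    import torch

    def nsvq(x, codebook):
        # x: (B, D) continuous inputs; codebook: (K, D) learnable code vectors.
        idx = torch.cdist(x, codebook).argmin(dim=1)   # hard nearest-neighbour assignment
        xq = codebook[idx]                             # (B, D) quantized vectors
        # Noise substitution: replace the quantization error (xq - x) with a
        # random direction of the same norm, so gradients flow to x and, via
        # the norm term, to the selected codebook rows.
        v = torch.randn_like(x)
        v = v / v.norm(dim=1, keepdim=True)
        x_tilde = x + (xq - x).norm(dim=1, keepdim=True) * v
        return x_tilde, idx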
“…In this paper, we propose creating a data-driven representation of sign language that can be used as a replacement for expensive linguistic annotation. Our approach learns a codebook of motions from continuous 3D pose data using a Noise Substitution Vector Quantization (NSVQ) model [47]. The codebook can be considered the lexicon of our new representation and used to tokenise a continuous pose sequence into a sequence of discrete codes.…”
Section: Introduction (mentioning)
confidence: 99%
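In its simplest reading, the tokenisation step described here reduces to a nearest-codebook lookup per frame. A hedged NumPy sketch (per-frame vectors and the inverse lookup are simplifying assumptions; the paper's actual representation may span multiple frames):

    import numpy as np

    def tokenise(poses, codebook):
        # poses: (T, D) continuous per-frame pose features;
        # codebook: (K, D) motion codebook, e.g. learned with NSVQ.
        d = ((poses[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)          # (T,) discrete code ids

    def detokenise(ids, codebook):
        # Inverse lookup: map discrete ids back to codebook motions.
        return codebook[ids]             # (T, D) approximate reconstruction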
“…Since the number of non-zero components in the codebook is relatively small, the performance of VQGAE latent vectors in virtual screening tasks might be reduced. One way to increase codebook utilisation is to improve the vector quantization operation by applying noise substitution as suggested in [27] or codebook initialisation techniques such as k-means [28]. In the second example (Figure 9b), for one codebook vector more than 5K different structures of fragments from 42K molecules were extracted.…”
Section: Analysis Of Fragments Learned By VQGAE (mentioning)
confidence: 99%
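For the k-means initialisation mentioned as a remedy [28], a plain NumPy sketch (the iteration count and empty-cluster handling are arbitrary choices for illustration): starting the codebook at cluster centroids spreads the codes over the data and tends to raise utilisation relative to random initialisation.

    import numpy as np

    def kmeans_codebook(vectors, k, iters=20, seed=0):
        # vectors: (N, D) float training vectors; returns a (k, D) codebook.
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), k, replace=False)]
        for _ in range(iters):
            # Assign each vector to its nearest current centre.
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            assign = d.argmin(axis=1)
            # Move each centre to the mean of its members.
            for j in range(k):
                members = vectors[assign == j]
                if len(members):          # keep the old centre for empty clusters
                    codebook[j] = members.mean(axis=0)
        return codebook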