Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/627

A Quantum-inspired Classical Algorithm for Separable Non-negative Matrix Factorization

Abstract: Non-negative Matrix Factorization (NMF) asks to decompose an (entry-wise) non-negative matrix into the product of two smaller non-negative matrices, a problem that has been shown to be intractable in general. To overcome this, the separability assumption is introduced, which assumes that all data points lie in a conical hull. This assumption makes NMF tractable and is widely used in text analysis and image processing, but it remains impractical for huge-scale datasets. In this paper, inspired by recent development on deq…

Cited by 8 publications (11 citation statements); references 10 publications.
“…This runtime is only polynomially slower than the corresponding quantum algorithm, except in the ε parameter. Theorem 3.7 also dequantizes QSVT for block-encodings derived from (purifications of) density operators [22, Lemma 45] that come from some well-structured classical data. The situation in this case is even nicer, since density operators are already normalized.…”
Section: Results
confidence: 99%
“…Tang's algorithm crucially exploits the structure of the input assumed by the quantum algorithm, which is used for efficiently preparing states. Subsequent work relies on similar techniques to dequantize a wide range of QML algorithms, including those for principal component analysis and supervised clustering [44], low-rank linear system solving [9,21], low-rank semidefinite program solving [8], support vector machines [13], non-negative matrix factorization [7], and minimal conical hull [18]. These results show that the advertised exponential speedups of many QML algorithms disappear if the corresponding classical algorithms can use input assumptions analogous to the state preparation assumptions of the quantum algorithms.…”
Section: Introduction (Motivation)
confidence: 99%
“…In [32], Tang also gives a quantum-inspired classical algorithm for recommendation systems under similar assumptions. As pointed out in [35,36,38,43], there is a low-overhead data structure that satisfies the sampling assumption. We first describe the data structure for a vector, then for a matrix.…”
Section: Sample Model and Data Structure
confidence: 99%
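The low-overhead data structure referred to above is, in the quantum-inspired literature, typically a binary tree over the squared entries of a vector, supporting squared-norm queries and sampling an index with probability proportional to its squared entry. A minimal Python sketch of such a structure (class and method names are ours, for illustration only):

```python
import numpy as np

class SampleTree:
    """Binary tree over the squared entries of a vector, supporting
    O(log n) updates, O(1) squared-norm queries, and O(log n) sampling
    of index i with probability v[i]^2 / ||v||^2."""

    def __init__(self, v):
        n = len(v)
        size = 1
        while size < n:
            size *= 2
        self.size = size
        # Leaves hold squared entries; internal nodes hold subtree sums.
        self.tree = np.zeros(2 * size)
        self.tree[size:size + n] = np.asarray(v, dtype=float) ** 2
        for j in range(size - 1, 0, -1):
            self.tree[j] = self.tree[2 * j] + self.tree[2 * j + 1]

    def sq_norm(self):
        # The root stores the full squared norm.
        return self.tree[1]

    def update(self, i, value):
        j = self.size + i
        self.tree[j] = value ** 2
        j //= 2
        while j >= 1:
            self.tree[j] = self.tree[2 * j] + self.tree[2 * j + 1]
            j //= 2

    def sample(self, rng):
        # Walk down from the root, branching left or right with
        # probability proportional to each subtree's mass.
        j = 1
        while j < self.size:
            left = self.tree[2 * j]
            j = 2 * j if rng.random() * self.tree[j] < left else 2 * j + 1
        return j - self.size
```

For a matrix, the same construction is applied per row, with one additional tree over the squared row norms, so that one can first sample a row and then sample an entry within it.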
“…Recently, Tang [32] proposed a classical algorithm for recommendation systems running in logarithmic time, using the efficient low-rank approximation techniques of Frieze, Kannan and Vempala [33]. Motivated by these dequantizing techniques, other works address further low-rank matrix operations, such as matrix inversion [34,35,36], singular value transformation [37], non-negative matrix factorization [38], support vector machines [39], and general minimum conical hull problems [40]. The dequantizing techniques in those algorithms rest on two ingredients, Monte-Carlo singular value decomposition and sampling techniques, which can efficiently simulate certain operations on low-rank matrices.…”
Section: Introduction
confidence: 99%
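The Monte-Carlo singular value decomposition mentioned above samples a small number of rows with probability proportional to their squared norms and rescales them, so that the SVD of the small sketch approximates the singular values and right singular vectors of the full matrix. A hedged sketch under those assumptions (the function name is ours):

```python
import numpy as np

def fkv_sketch(A, s, rng):
    """Frieze-Kannan-Vempala-style sketch: sample s rows of A with
    probability proportional to their squared norms, rescale so the
    sketch's Gram matrix estimates A^T A in expectation, and return
    the sketch's singular values and right singular vectors as
    approximations for A's."""
    row_sq = np.sum(A ** 2, axis=1)
    p = row_sq / row_sq.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    # Rescale each sampled row by 1 / sqrt(s * p_i).
    S = A[idx] / np.sqrt(s * p[idx])[:, None]
    # SVD of the small s x n sketch instead of the full matrix.
    _, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    return sigma, Vt
```

The point is that `S` has only `s` rows regardless of the number of rows of `A`, and sampling `idx` is exactly what the tree data structure above provides in logarithmic time per draw.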
“…Studies have been conducted to seek unique and exact solutions to NMF, despite the non-convex nature of the problem. Most of these studies are based on the separability assumption (Chen et al 2019; Degleris and Gillis 2019). This condition states that the columns of the basis W should be a subset of the dataset X, i.e., W ⊆ X, and span a convex hull/simplex/conical hull/cone that includes all data points of X (Zhou, Bian, and Tao 2013).…”
Section: Introduction
confidence: 99%
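Under the separability condition described above, the basis columns can be found geometrically by greedily picking extreme data columns. A minimal sketch in the style of the well-known Successive Projection Algorithm, shown here as one standard way to exploit separability rather than as the method of the paper under discussion:

```python
import numpy as np

def spa(X, r):
    """Successive Projection Algorithm: assuming separability
    X = X[:, K] H for some index set K of size r, greedily pick the
    column with the largest residual norm, then project all columns
    onto the orthogonal complement of the picked column."""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax(np.sum(R ** 2, axis=0)))
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        # Remove the component along the selected anchor column.
        R = R - np.outer(u, u @ R)
    return anchors
```

On a separable input, the selected indices recover the basis columns, after which the non-negative weights H can be obtained by non-negative least squares.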