2017 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2017.8258152

A distributed proximal gradient descent method for tensor completion

Cited by 11 publications (8 citation statements). References 21 publications.
“…For calculating the CP decomposition, we exploit the well-known Alternating Least Squares (ALS) method [30] when we deal with a full-value problem, and the two proximal methods proposed in [31] when we deal with missing-value problems. The methods proposed in [31], Gen-ProxSGD (non-distributed) and StrProxSGD (distributed, suitable for big data), tackle the optimization problem in (2) by solving local minimization problems rather than solving the entire problem at once.…”
Section: Complexity
confidence: 99%
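For the full-value case referenced in the excerpt above, here is a minimal NumPy sketch of rank-R CP/PARAFAC via ALS. It is a generic textbook ALS, not the implementation from [30]; `unfold`, `khatri_rao`, `cp_als`, and the fixed iteration count are illustrative choices.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    """Column-wise Kronecker product; the first matrix varies slowest."""
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, out.shape[1])
    return out

def cp_als(T, rank, n_iters=50, seed=0):
    """Rank-R CP/PARAFAC of a fully observed 3-way tensor via ALS."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(n_iters):
        for n in range(3):
            # Least-squares subproblem for factor n, others held fixed:
            # unfold(T, n) ~= factors[n] @ khatri_rao(others).T
            kr = khatri_rao([factors[m] for m in range(3) if m != n])
            factors[n] = np.linalg.lstsq(kr, unfold(T, n).T, rcond=None)[0].T
    return factors
```

Each outer iteration solves one linear least-squares subproblem per mode while the other factors stay fixed, which is why the full-value problem needs no step-size tuning.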
“…In this experiment, we computed the accuracy and the AUC of the proposed method against state-of-the-art MIL algorithms. We report results for each of the algorithms employing the features extracted by Kandemir et al. [4], and features extracted by the proposed method computing the PARAFAC decomposition from full values using the ALS algorithm [30] and from 10% randomly selected observed values using the StrProxSGD algorithm [31]. We should note here that the features extracted by Kandemir et al. [4] are application-specific, in contrast to our extracted features, which are problem-independent and can be obtained directly from any raw multidimensional data with the same procedure.…”
Section: Breast Cancer Diagnosis From Histopathology Images
confidence: 99%
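To make the feature-extraction step described above concrete, a toy sketch follows, reusing the hypothetical `cp_als` from the earlier sketch: with samples stacked along mode 0, the rows of the mode-0 factor matrix give one R-dimensional, problem-independent feature vector per sample. The shapes and the rank below are made up for illustration and are not the experiment's settings.

```python
import numpy as np

# Toy illustration, not the authors' pipeline: stack samples along mode 0,
# factorize, and read features off the mode-0 factor matrix.
rng = np.random.default_rng(1)
T = rng.random((50, 32, 32))   # 50 samples of raw 32x32 multidimensional data
A, B, C = cp_als(T, rank=8)    # cp_als: the ALS sketch shown earlier
features = A                   # features[i] is the 8-dim embedding of sample i
```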
“…Works that employ Stochastic Gradient Descent (SGD) on shared-memory and distributed systems for sparse tensor factorization and completion include [19], [20], [21], [22]. In [19], the authors describe a TC approach which uses the CPD model and employs a proximal SGD algorithm that can be implemented in a distributed environment.…”
Section: Introduction
confidence: 99%
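The exact update rules and the distributed scheduling of the proximal SGD approach cited as [19] above are given in that paper; the sketch below is only a generic stochastic proximal-gradient step for CPD-based completion over observed entries, with a plain L2 proximal operator chosen for illustration. `prox_sgd_completion` and all hyperparameters are hypothetical.

```python
import numpy as np

def prox_sgd_completion(idx, vals, shape, rank, lr=0.01, lam=0.001,
                        n_epochs=20, seed=0):
    """Generic stochastic proximal-gradient sketch for 3-way CPD-based
    tensor completion from observed entries (idx[t] = (i, j, k), vals[t]).
    Not the Gen-ProxSGD/StrProxSGD algorithms themselves; the prox here
    is simple L2 shrinkage, chosen only for illustration."""
    rng = np.random.default_rng(seed)
    factors = [0.1 * rng.standard_normal((d, rank)) for d in shape]
    A, B, C = factors
    for _ in range(n_epochs):
        for t in rng.permutation(len(vals)):
            i, j, k = idx[t]
            # residual of the rank-R model at the sampled observed entry
            err = np.sum(A[i] * B[j] * C[k]) - vals[t]
            gA, gB, gC = err * B[j] * C[k], err * A[i] * C[k], err * A[i] * B[j]
            # gradient step on the three factor rows this entry touches
            A[i] -= lr * gA
            B[j] -= lr * gB
            C[k] -= lr * gC
            # proximal step for the L2 penalty (lam/2) * ||row||^2
            for M, r in ((A, i), (B, j), (C, k)):
                M[r] /= 1.0 + lr * lam
    return factors
```

Because each observed entry touches only one row of each factor matrix, the per-sample updates are cheap and naturally partitionable, which is the property distributed variants such as StrProxSGD exploit for big data.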
“…We report and compare the results, for each of the algorithms, using both the features that Kandemir et al. [50] extracted for this particular dataset and the features we extract with the proposed method, computing the CP decomposition both from full values via ALS and from 10% randomly observed values with the StrProxSGD algorithm [130]. The rank of the CP tensor decomposition was experimentally chosen to be R = 120. Table 2: The…”
unclassified