2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2014.6958896

Study of different strategies for the Canonical Polyadic decomposition of nonnegative third order tensors with application to the separation of spectra in 3D fluorescence spectroscopy

Abstract: In this communication, the problem of blind source separation in chemical analysis, and more precisely in the fluorescence spectroscopy framework, is addressed. Classically, multilinear Canonical Polyadic (CP, or Candecomp/Parafac) decomposition algorithms are used to perform that task. Yet, since the constituent vectors of the loading matrices stand for nonnegative quantities (spectra and concentrations) and should therefore be nonnegative, we focus on NonNegative CP decomposition algorithms (NNCP). In the unconstrai…
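
For readers who want to experiment with the kind of decomposition the abstract describes, here is a minimal sketch of a nonnegative CP decomposition of a synthetic third-order fluorescence-like tensor. It relies on TensorLy's non_negative_parafac; the synthetic spectra, the rank of 3, and the way the result is unpacked are illustrative assumptions, not the paper's actual algorithm or data.

```python
# Minimal sketch (not the paper's algorithm): NNCP of a synthetic
# excitation x emission x sample tensor, assuming TensorLy (numpy backend).
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)

# Hypothetical nonnegative loading matrices: excitation spectra (A),
# emission spectra (B) and concentrations (C) for 3 fluorescent compounds.
A = rng.random((50, 3))   # 50 excitation wavelengths
B = rng.random((60, 3))   # 60 emission wavelengths
C = rng.random((20, 3))   # 20 samples

# Build the rank-3 tensor T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Nonnegative CP decomposition; recent TensorLy versions return a CPTensor
# that unpacks into (weights, factors).
weights, factors = non_negative_parafac(tl.tensor(T), rank=3,
                                        n_iter_max=500, tol=1e-8)

# Reconstruction error, computed directly from the estimated factors.
A_hat, B_hat, C_hat = factors
T_hat = np.einsum('r,ir,jr,kr->ijk', np.asarray(weights), A_hat, B_hat, C_hat)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```

In practice the estimated factors are only determined up to permutation and scaling of the rank-one components, so comparing them to ground-truth spectra requires matching columns first.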

Cited by 4 publications (5 citation statements)
References 22 publications
“…Indeed, from Lemma 1, we can easily conclude that $\|\alpha\|_{2,1}$ is greater than $\|A\|_{*}$. If we replace $\alpha$ and $A$ by $D^{T}F$ and $DD^{T}F$, respectively, then we obtain equation (3).…”
Section: A Problem Formulation (mentioning)
confidence: 99%
“…When simulated data are used, the true tensor rank $\bar{R}$ is known, while the tensor rank that will be used for the model, and thus for the decomposition, is denoted by $\hat{R}$. In the case of simulated data, we can consider the two error indices that have already been used by Vu et al instead of the reconstruction error $\|\mathcal{T}-\hat{\mathcal{T}}\|_F^2$, which is classically used with real experimental data. The first error index, denoted by $E_1$, measures the estimation error while discarding the overfactoring part.…”
Section: Numerical Simulations (mentioning)
confidence: 99%
“…We choose the same $\alpha_A = \alpha_B = \alpha_C = 10^{-3}$ and $\ell_1$-norm regularization terms for each factor during the first half of the iterations; in the second half they are discarded so that they do not limit the performance. We choose $\ell_1$-norm regularization terms because it was shown in that they lead to the best results. The same random initializations are used for all the algorithms.…”
Section: Nonnegative CP Decomposition of Three-way Arrays (mentioning)
confidence: 99%
“…Moreover, regularization terms (which may differ from one loading matrix to another) are also added in order to reinforce certain properties (sparseness or smoothness of the solution). As shown in [43], they improve the robustness of the algorithms against model errors such as a possible overestimation of the tensor rank (which corresponds to the number of fluorescent compounds actually present in the studied samples; this rank is unknown and can only be estimated). Without these regularization terms, spurious compounds might appear.…”
Section: Introduction (mentioning)
confidence: 99%
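
As a concrete illustration of the per-factor regularization terms mentioned in this last excerpt, a regularized nonnegative CP cost function can be written as below. The choice of $\ell_1$ penalties with factor-specific weights is one possible form consistent with the excerpts on this page, not necessarily the exact formulation of [43].

```latex
% One possible regularized NNCP objective (illustrative, assumed form):
\[
  \min_{A, B, C \,\ge\, 0}\;
  \tfrac{1}{2}\bigl\|\mathcal{T} - [\![A, B, C]\!]\bigr\|_F^2
  \;+\; \alpha_A \|A\|_1 \;+\; \alpha_B \|B\|_1 \;+\; \alpha_C \|C\|_1 ,
\]
% where [[A, B, C]] = \sum_{r=1}^{\hat R} a_r \circ b_r \circ c_r is the CP
% reconstruction with rank \hat R, and the weights \alpha_A, \alpha_B, \alpha_C
% may differ across the loading matrices, as noted in the excerpt above.
```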