2010
DOI: 10.1364/ao.49.006591

Nonnegative least-squares truncated singular value decomposition to particle size distribution inversion from dynamic light scattering data

Abstract: The weak symmetry relationship between the relative error and solution norm holds in our developed nonnegative least-squares truncated singular value decomposition method. By using this relationship to specify the optimal regularization parameters, we applied the proposed algorithm to recover particle size distributions from dynamic light scattering (DLS) data. Simulation results and experimental validation demonstrate that the proposed method, which complements the CONTIN algorithm, might serve as a powerful and …

Cited by 34 publications (22 citation statements)
References 24 publications
“…We found that the technique based on VSM works well under the condition that the length of the measured data ‖E‖ is equal (or comparable) to the average length of the column vectors of the matrix, (1/n) Σ_{j=1}^{n} ‖A_j‖ [see Eq. (19)]. This is because the Euclidean distance is very sensitive to the lengths of the vectors.…”
Section: Discussionmentioning
confidence: 99%
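The condition quoted above, that ‖E‖ be comparable to the mean column-vector length of the matrix, is straightforward to check numerically. The sketch below uses an illustrative random matrix A and data vector E (placeholders, not the cited paper's kernel or measurements):

```python
import numpy as np

# Hypothetical kernel matrix A and measured-data vector E; the shapes and
# values are illustrative only, not taken from the cited paper.
rng = np.random.default_rng(0)
A = rng.random((64, 40))
E = rng.random(64)

# Condition from the discussion: ||E|| should be equal (or comparable) to
# the average column-vector length (1/n) * sum_j ||A_j||.
avg_col_norm = np.linalg.norm(A, axis=0).mean()
ratio = np.linalg.norm(E) / avg_col_norm
print(f"||E|| / mean_j ||A_j|| = {ratio:.3f}")  # a ratio near 1 means the lengths are comparable
```

When the ratio is far from 1, one could rescale E or the columns of A before applying a distance-based criterion, since, as the quote notes, the Euclidean distance is sensitive to vector lengths.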
“…To date, a number of inversion algorithms have been proposed, such as constrained least squares [7][8][9], Tikhonov regularization [10][11][12][13][14], singular value decomposition (SVD) [15][16][17][18][19], and iterative methods [20][21][22][23][24]. Although these methods have proven very effective in solving such linear equations, problems and limitations remain to be addressed.…”
Section: Introductionmentioning
confidence: 99%
“…16 According to this condition, the numerator u_i^T ỹ should decay faster than the singular values w_i, so that the overall norm of the SVD components |u_i^T ỹ / w_i| is small.27,28 The truncation parameter is chosen until the Picard condition is satisfied.…”
Section: Review Of Regularization Methodsmentioning
confidence: 99%
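The discrete Picard check described in this snippet can be sketched in a few lines: compute the SVD of the kernel, compare the decay of |u_i^T ỹ| against the singular values, and inspect the component ratios. The kernel H, data ỹ, and the argmin heuristic for the truncation index below are illustrative assumptions, not the cited papers' exact procedure:

```python
import numpy as np

# Minimal discrete-Picard sketch on a smooth, ill-conditioned kernel H
# with noisy data y_tilde (all names and values are illustrative).
rng = np.random.default_rng(1)
n = 30
x = np.linspace(0.0, 1.0, n)
H = np.exp(-10.0 * np.abs(x[:, None] - x[None, :]))     # rapidly decaying spectrum
y_tilde = H @ np.sin(2 * np.pi * x) + 1e-6 * rng.standard_normal(n)

U, w, Vt = np.linalg.svd(H)
coeffs = np.abs(U.T @ y_tilde)       # |u_i^T y~|
picard_ratios = coeffs / w           # |u_i^T y~ / w_i|, the SVD component norms

# Heuristic (an assumption, not a rule from the cited papers): the ratios
# typically decrease while the Picard condition holds and grow once noise
# dominates, so their minimum suggests a truncation index.
k = int(np.argmin(picard_ratios))
print("suggested truncation index:", k)
```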
“…These formulations range from simply setting the negative values in the estimated solutions to zero26 to mathematically more rigorous formulations involving a quadratic programming problem with bounds.27,28 Tikhonov Regularization. A least-squares formulation seeks to minimize the norm of the residual between the estimated and the measured values, given by ‖ỹ − Hu‖₂².…”
Section: Review Of Regularization Methodsmentioning
confidence: 99%
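The more rigorous nonnegativity-constrained formulation mentioned above can be combined with Tikhonov regularization by stacking the regularizer into an ordinary NNLS problem: min ‖ỹ − Hu‖₂² + λ²‖u‖₂² subject to u ≥ 0 is equivalent to NNLS on the augmented system [H; λI], [ỹ; 0]. This is a generic sketch under that standard identity; H, y_tilde, and the value of lam are illustrative placeholders, not the cited papers' data or parameter choice:

```python
import numpy as np
from scipy.optimize import nnls

def tikhonov_nnls(H, y_tilde, lam):
    """Tikhonov-regularized nonnegative least squares via an augmented system."""
    m, n = H.shape
    H_aug = np.vstack([H, lam * np.eye(n)])          # stack lam*I under H
    y_aug = np.concatenate([y_tilde, np.zeros(n)])   # pad data with zeros
    u, _residual = nnls(H_aug, y_aug)                # enforces u >= 0
    return u

# Synthetic demonstration with a nonnegative ground truth.
rng = np.random.default_rng(2)
H = rng.random((50, 20))
u_true = np.abs(rng.standard_normal(20))
y_tilde = H @ u_true + 0.01 * rng.standard_normal(50)
u_est = tikhonov_nnls(H, y_tilde, lam=0.1)
print("estimate is nonnegative:", bool(np.all(u_est >= 0)))
```

Choosing λ is the hard part; the abstract's weak symmetry relationship between relative error and solution norm is one criterion proposed for that choice.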
“…In measurement, because of the presence of noise and rounding errors, the existence, uniqueness, and stability of solutions are difficult to guarantee. To solve this problem, numerous methods have been proposed, such as the cumulant method [5], exponential sampling method [6], CONTIN method [7], double-exponential method [8], nonnegative least-squares (NNLS) method [9], Laplace method [10], the neural network approach [11], the genetic algorithm [12], and NNLS truncated singular value decomposition (TSVD) [13]. However, these methods are carried out in single-scale grid space.…”
Section: Introductionmentioning
confidence: 99%
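For contrast with the paper's NNLS-TSVD, the simplest of the formulations quoted earlier (solve by truncated SVD, then clip negative entries to zero) can be sketched as follows. This is only the naive variant; the paper couples truncation with a proper nonnegative least-squares solve. The matrix A, data b, and truncation level k are illustrative assumptions:

```python
import numpy as np

def tsvd_clip(A, b, k):
    """Truncated-SVD solve keeping the k largest singular components,
    followed by the naive nonnegativity fix of clipping negatives to zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])   # pseudo-inverse on top-k components
    return np.clip(x, 0.0, None)

# Synthetic demonstration with a nonnegative ground truth.
rng = np.random.default_rng(3)
A = rng.random((40, 25))
b = A @ np.abs(rng.standard_normal(25))
x_k = tsvd_clip(A, b, k=10)
print("all entries nonnegative:", bool(np.all(x_k >= 0)))
```

Clipping after the fact discards residual information that a constrained solve would use, which is why the quadratic-programming and NNLS-TSVD formulations are described as more rigorous.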