SUMMARY
Modern instrumentation in chemistry routinely generates two-dimensional (second-order) arrays of data. Considering that most analyses need to compare several samples, the analyst ends up with a three-dimensional (third-order) array which is difficult to visualize or interpret with conventional statistical tools. Some of these data arrays follow the so-called trilinear model. These trilinear arrays of data are known to have unique factor analysis decompositions which correspond to the true physical factors that form the data, i.e. given the array R, a unique solution can be found in many cases for each order X, Y and Z. This is in contrast to the well-known second-order bilinear data factor analysis, where the abstract solutions obtained are not unique and cannot easily be compared with the underlying physical factors owing to rotational ambiguity. Trilinear decompositions have had the disadvantage, however, that a non-linear optimization with many parameters is necessary to reach a least-squares solution. This paper introduces a method for reducing the problem to a rectangular generalized eigenvalue-eigenvector equation where the eigenvectors are the contravariant form (pseudo-inverse) of the actual factors. It is shown that the method works well when the factors are linearly independent in at least two orders (e.g. X and Y are full-rank matrices). Finally, it is shown how trilinear decompositions relate to multicomponent calibration, curve resolution and chemical analysis.
SUMMARY
An improved algorithm for the generalized rank annihilation method (GRAM) is presented. GRAM is a method for multicomponent calibration using two-dimensional instruments, such as GC-MS. In this paper an orthonormal base is first computed and used to project the calibration and unknown sample response matrices into a lower-dimensional subspace. The resulting generalized eigenproblem is then solved using the QZ algorithm. The result of these improvements is that GRAM is computationally more stable, particularly in the case where the calibration sample contains chemical constituents not present in the unknown sample and the unknown contains constituents not present in the calibration (the most general case).
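The projection-plus-QZ route described in this summary can be sketched numerically. In the sketch below, all profiles, dimensions and concentrations are hypothetical simulated data; each sample matrix follows the bilinear model R = X diag(c) Yᵀ, both matrices are projected onto an orthonormal base spanning their joint subspace, and SciPy's generalized eigensolver (which uses the QZ algorithm internally) returns the per-component concentration ratios as eigenvalues:

```python
import numpy as np
from scipy.linalg import eig, svd

rng = np.random.default_rng(0)

# Hypothetical bilinear responses for two components (e.g. GC x MS):
# each sample matrix is R = X @ diag(c) @ Y.T.
X = np.abs(rng.normal(size=(30, 2)))   # profiles in the first order
Y = np.abs(rng.normal(size=(25, 2)))   # profiles in the second order
c_cal = np.array([1.0, 2.0])           # calibration concentrations
c_unk = np.array([0.5, 3.0])           # unknown concentrations

R_cal = X @ np.diag(c_cal) @ Y.T
R_unk = X @ np.diag(c_unk) @ Y.T

# Orthonormal base spanning both samples; project the response
# matrices into a lower-dimensional (here rank-2) subspace.
U, s, Vt = svd(R_cal + R_unk)
k = 2
A = U[:, :k].T @ R_unk @ Vt[:k, :].T
B = U[:, :k].T @ R_cal @ Vt[:k, :].T

# Generalized eigenproblem A z = lambda B z (solved via QZ inside eig);
# the eigenvalues are the per-component ratios c_unk / c_cal.
ratios = np.sort(eig(A, B, right=False).real)
print(ratios)   # [0.5, 1.5]
```

Because X and Y are full rank in this simulation, the projected problem is exactly 2 x 2 and the eigenvalues recover the concentration ratios 0.5/1.0 and 3.0/2.0; with noisy data the truncated SVD additionally acts as a stabilizing rank estimate.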
SUMMARY
Tensorial calibration provides a useful approach to calibration in general. For calibration of instruments that produce two-dimensional (second-order) arrays of data per sample, tensorial concepts are as natural a way of solving the calibration problem as vectorial concepts are for the multivariate problem. Similarly, for third- and higher-order data, the tensorial description of calibration is also useful. This paper introduces second-order calibration from a tensorial point of view. Univariate, multivariate and bilinear approaches to calibration are presented. The generalized rank annihilation method (GRAM) is described from the tensorial perspective, and it is shown that GRAM is equivalent to finding a second-order tensorial base that spans both tensors (calibration and unknown) with respective diagonal component matrices. GRAM uses a single calibration sample for multicomponent analysis even in the presence of interferences. Second-order bilinear calibration is extended to multiple calibration samples where the effect of collinearities is reduced.
Many analytical instruments now produce one-, two- or n-dimensional arrays of data that must be used for the analysis of samples. An integrated approach to linear calibration of such instruments is presented from a tensorial point of view. The data produced by these instruments are seen as the components of a first-, second- or nth-order tensor respectively. In this first paper, concepts of linear multivariate calibration are developed in the framework of first-order tensors, and it is shown that the problem of calibration is equivalent to finding the contravariant vector corresponding to the analyte being calibrated. A model of the subspace spanned by the variance in the calibration must be built to compute the contravariant vectors. It is shown that the only difference between methods such as least squares, principal components regression, latent root regression, ridge regression and partial least squares resides in the choice of the model.
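The contravariant-vector idea in this summary can be illustrated with a minimal noise-free sketch (the component spectra, sample counts and concentrations below are all made up): the pseudo-inverse of the calibration response matrix yields a regression vector that extracts one analyte's concentration from a new mixture, even in the presence of the other (modeled) components.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear mixtures: 3 analytes, 10 calibration samples,
# 50 spectral channels.
S = np.abs(rng.normal(size=(3, 50)))    # pure-component spectra
C = np.abs(rng.uniform(size=(10, 3)))   # calibration concentrations
R = C @ S                               # bilinear, noise-free responses

# Contravariant vector for analyte 1: the minimum-norm regression
# vector b satisfying R @ b = C[:, 0] (here via the pseudo-inverse,
# i.e. plain least squares; PCR, ridge, etc. would differ only in
# the subspace model used to build this inverse).
b = np.linalg.pinv(R) @ C[:, 0]

# Predict analyte 1 in a new mixture containing interferents.
c_new = np.array([0.7, 1.3, 0.2])
pred = c_new @ S @ b
print(pred)   # ~ 0.7
```

With noisy data the choice of subspace model matters: truncating the SVD of R before inverting gives principal components regression, while penalizing the singular values gives ridge regression, which is exactly the "choice of model" distinction the summary draws.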