45 Publications

10,268 Citation Statements Received

2,337 Citation Statements Given

How they've been cited: 14,824 · 298 · 9,937 · 33

How they cite others: 2,034 · 208 · 2,107 · 22

Publications


Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. The quality of the PCA model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife. PCA can be generalized as correspondence analysis (CA) in order to handle qualitative variables and as multiple factor analysis (MFA) in order to handle heterogeneous sets of variables. Mathematically, PCA depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition (SVD) of rectangular matrices.
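The SVD-based computation the abstract refers to can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the function name and the random demo data are our own:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via singular value decomposition of the column-centered data.

    Returns factor scores for the observations, loadings for the
    variables, and the proportion of variance per component.
    """
    Xc = X - X.mean(axis=0)                 # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # principal components
    loadings = Vt[:n_components].T                    # variable loadings
    explained = s ** 2 / np.sum(s ** 2)               # variance proportions
    return scores, loadings, explained

# illustrative random data: 20 observations, 5 variables
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
scores, loadings, explained = pca(X)
```

The factor scores are orthogonal by construction, which is the property the abstract highlights for principal components.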

Partial least squares (PLS) regression (a.k.a. projection to latent structures) is a recent technique that combines features from, and generalizes, principal component analysis (PCA) and multiple linear regression. Its goal is to predict a set of dependent variables from a set of independent variables, or predictors. This prediction is achieved by extracting from the predictors a set of orthogonal factors called latent variables which have the best predictive power. These latent variables can be used to create displays akin to PCA displays. The quality of the prediction obtained from a PLS regression model is evaluated with cross-validation techniques such as the bootstrap and the jackknife. There are two main variants of PLS regression: the most common one separates the roles of dependent and independent variables; the second one, used mostly to analyze brain imaging data, gives the same roles to dependent and independent variables.
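The extraction of orthogonal latent variables from the predictors can be sketched with a NIPALS-style iteration. This is an assumed minimal sketch in NumPy, not the paper's reference implementation; the function name, iteration count, and demo data are our own:

```python
import numpy as np

def pls_regression(X, Y, n_components=2, n_iter=100):
    """Minimal NIPALS-style PLS regression sketch.

    Extracts orthogonal latent variables (X-scores) that have high
    covariance with Y, deflating X and Y after each component.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    T = []                                   # latent variable scores
    for _ in range(n_components):
        u = Y[:, [0]]                        # start from a Y column
        for _ in range(n_iter):
            w = X.T @ u
            w /= np.linalg.norm(w)           # X weights, unit norm
            t = X @ w                        # latent variable (X-score)
            c = Y.T @ t / (t.T @ t)          # Y weights
            u = Y @ c / (c.T @ c)            # Y-score
        p = X.T @ t / (t.T @ t)              # X loadings
        X = X - t @ p.T                      # deflate X
        Y = Y - t @ (Y.T @ t / (t.T @ t)).T  # deflate Y
        T.append(t)
    return np.hstack(T)

# illustrative data: Y built from X plus noise
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))
Y = X @ rng.normal(size=(6, 3)) + 0.1 * rng.normal(size=(30, 3))
T = pls_regression(X, Y)
```

Deflating X against each extracted score is what makes the successive latent variables orthogonal, the property the abstract emphasizes.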

Evidence of category specificity from neuroimaging in the human visual system is generally limited to a few relatively coarse categorical distinctions—e.g., faces versus bodies, or animals versus artifacts—leaving unknown the neural underpinnings of fine-grained category structure within these large domains. Here we use functional magnetic resonance imaging (fMRI) to explore brain activity for a set of categories within the animate domain, including six animal species—two each from three very different biological classes: primates, birds, and insects. Patterns of activity throughout ventral object vision cortex reflected the biological classes of the stimuli. Specifically, the abstract representational space—measured as dissimilarity matrices defined between species-specific multivariate patterns of brain activity—correlated strongly with behavioral judgments of biological similarity of the same stimuli. This biological class structure was uncorrelated with structure measured in retinotopic visual cortex, which correlated instead with a dissimilarity matrix defined by a model of V1 cortex for the same stimuli. Additionally, analysis of the shape of the similarity space in ventral regions provides evidence for a continuum in the abstract representational space—with primates at one end and insects at the other. Further investigation into the cortical topography of activity that contributes to this category structure reveals the partial engagement of brain systems normally active for inanimate objects, in addition to animate regions.
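The core analysis described above — correlating dissimilarity matrices built from multivariate activity patterns with a second dissimilarity structure — can be sketched with a small representational-similarity computation. This is a hedged illustration with random stand-in data; the function names and the correlation measure (Pearson) are our own simplifications of the paper's methods:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between condition-specific activity patterns (one row per condition)."""
    return 1.0 - np.corrcoef(patterns)

def rdm_correlation(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

# six species (conditions) x 100 voxels, illustrative random data
rng = np.random.default_rng(2)
ventral = rng.normal(size=(6, 100))
behavioral = ventral + 0.5 * rng.normal(size=(6, 100))
r = rdm_correlation(rdm(ventral), rdm(behavioral))
```

Only the off-diagonal entries enter the second-order correlation, since the diagonal of an RDM is zero by construction.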

Multiple factor analysis (MFA, also called multiple factorial analysis) is an extension of principal component analysis (PCA) tailored to handle multiple data tables that measure sets of variables collected on the same observations, or, alternatively, (in dual-MFA) multiple data tables where the same variables are measured on different sets of observations. MFA proceeds in two steps: First it computes a PCA of each data table and 'normalizes' each data table by dividing all its elements by the first singular value obtained from its PCA. Second, all the normalized data tables are aggregated into a grand data table that is analyzed via a (non-normalized) PCA that gives a set of factor scores for the observations and loadings for the variables. In addition, MFA provides for each data table a set of partial factor scores for the observations that reflects the specific 'view-point' of this data table. Interestingly, the common factor scores could be obtained by replacing the original normalized data tables by the normalized factor scores obtained from the PCA of each of these tables. In this article, we present MFA, review recent extensions, and illustrate it with a detailed example. WIREs Comput Stat 2013, 5:149–179. doi: 10.1002/wics.1246

This article is categorized under:
Data: Types and Structure > Categorical Data
Statistical Learning and Exploratory Methods of the Data Sciences > Exploratory Data Analysis
Statistical and Graphical Methods of Data Analysis > Multivariate Analysis
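The two-step procedure described above can be sketched directly in NumPy: normalize each table by its first singular value, then run one global PCA on the concatenated grand table. This is a minimal sketch under our own naming and demo data, not the article's implementation:

```python
import numpy as np

def mfa(tables, n_components=2):
    """Two-step MFA sketch.

    Step 1: center each table and divide it by its first singular value.
    Step 2: PCA (via SVD) of the concatenated grand table, giving common
    factor scores plus per-table partial factor scores.
    """
    normalized = []
    for X in tables:
        Xc = X - X.mean(axis=0)                         # center within table
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]     # first singular value
        normalized.append(Xc / s1)
    grand = np.hstack(normalized)        # same observations, stacked variables
    U, s, Vt = np.linalg.svd(grand, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]     # common factor scores
    K = len(tables)
    widths = [X.shape[1] for X in tables]
    offsets = np.cumsum([0] + widths)
    partial = []                         # each table's 'view-point'
    for k, Xn in enumerate(normalized):
        Qk = Vt[:n_components, offsets[k]:offsets[k + 1]].T
        partial.append(K * Xn @ Qk)      # partial factor scores for table k
    return scores, partial

# three tables measured on the same 10 observations, illustrative data
rng = np.random.default_rng(3)
tables = [rng.normal(size=(10, 4)),
          rng.normal(size=(10, 3)),
          rng.normal(size=(10, 5))]
scores, partial = mfa(tables)
```

With the rescaling by the number of tables K, the partial factor scores average exactly to the common factor scores, which is the usual barycentric interpretation of the partial scores.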

For food scientists and industry professionals, descriptive profiling is an essential tool that involves the evaluation of both the qualitative and quantitative sensory characteristics of a product by a panel. Recently, in response to industrial demands to develop faster and more cost-effective methods of descriptive analysis, several methods have been offered as alternatives to conventional profiling. These methods can be classified into three families: (i) verbal-based methods (flash profile and check-all-that-apply), (ii) similarity-based methods (free sorting task and projective mapping, a.k.a. Napping®) and (iii) reference-based methods (polarised sensory positioning and pivot profile). We successively present these three classes of methods in terms of origin, principles, statistical analysis, applications to food products, variations of the methods, and their pros and cons.

Partial least squares (PLS) methods (also sometimes called projection to latent structures) relate the information present in two data tables that collect measurements on the same set of observations. PLS methods proceed by deriving latent variables which are (optimal) linear combinations of the variables of a data table. When the goal is to find the shared information between two tables, the approach is equivalent to a correlation problem and the technique is then called partial least squares correlation (PLSC) (also sometimes called PLS-SVD). In this case there are two sets of latent variables (one set per table), and these latent variables are required to have maximal covariance. When the goal is to predict one data table from the other one, the technique is then called partial least squares regression (PLSR). In this case there is one set of latent variables (derived from the predictor table) and these latent variables are required to give the best possible prediction. In this paper we present and illustrate PLSC and PLSR and show how these descriptive multivariate analysis techniques can be extended to deal with inferential questions by using cross-validation techniques such as the bootstrap and permutation tests.
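The PLSC case, and the permutation-test extension to inferential questions, can both be sketched compactly: PLSC amounts to an SVD of the between-table cross-product, and a permutation test compares the observed first singular value against its null distribution under row shuffling. This is a hedged sketch with our own function names and demo data, not the paper's code:

```python
import numpy as np

def plsc(X, Y, n_components=2):
    """PLS correlation sketch: SVD of the cross-product of z-scored tables.

    The singular vectors (saliences) define one set of latent variables
    per table, with maximal covariance between paired latent variables.
    """
    Zx = (X - X.mean(0)) / X.std(0)
    Zy = (Y - Y.mean(0)) / Y.std(0)
    R = Zx.T @ Zy                          # between-table cross-product
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    Lx = Zx @ U[:, :n_components]          # latent variables for X
    Ly = Zy @ Vt[:n_components].T          # latent variables for Y
    return Lx, Ly, s

def permutation_pvalue(X, Y, n_perm=100, seed=0):
    """Permutation test on the first singular value (inference sketch)."""
    rng = np.random.default_rng(seed)
    _, _, s = plsc(X, Y)
    obs = s[0]
    count = 0
    for _ in range(n_perm):
        Yp = Y[rng.permutation(len(Y))]    # break the X-Y row pairing
        _, _, sp = plsc(X, Yp)
        count += sp[0] >= obs
    return (count + 1) / (n_perm + 1)      # conservative p-value estimate

# illustrative data with genuine shared structure
rng = np.random.default_rng(4)
X = rng.normal(size=(25, 5))
Y = X[:, :2] + 0.5 * rng.normal(size=(25, 2))
Lx, Ly, s = plsc(X, Y)
p = permutation_pvalue(X, Y)
```

Because the demo Y is built from X plus noise, the observed first singular value sits far above the permutation null and the p-value is small.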

scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
