In this article, we address the problem of measuring and analyzing sensation, the subjective magnitude of one's experience. We do this in the context of the method of triads: the sensation of a stimulus is evaluated via relative judgments of the form "Is stimulus S_i more similar to stimulus S_j or to stimulus S_k?" We propose to use ordinal embedding methods from machine learning to estimate the scaling function from the relative judgments. We review two relevant and well-known methods in psychophysics that are partially applicable in our setting: nonmetric multidimensional scaling (NMDS) and maximum likelihood difference scaling (MLDS). Considering various scaling functions, we perform an extensive set of simulations to demonstrate the performance of the ordinal embedding methods. We show that, in contrast to existing approaches, our ordinal embedding approach makes it possible, first, to obtain reasonable scaling functions from comparatively few relative judgments and, second, to estimate multidimensional perceptual scales. In addition to the simulations, we analyze data from two real psychophysics experiments using ordinal embedding methods. Our results show that for a one-dimensional perceptual scale our ordinal embedding approach works as well as MLDS, while in higher dimensions only the ordinal embedding methods can produce a desirable scaling function. To make our methods widely accessible, we provide an R implementation and general rules of thumb on how to use ordinal embedding in the context of psychophysics.
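The triplet setup above can be illustrated in code. The following is a minimal sketch, not the authors' R implementation: it recovers a one-dimensional perceptual scale from simulated, noiseless triplet answers ("is S_i closer to S_j than to S_k?") by gradient descent on a hinge loss. The specific loss, margin, and step size are illustrative assumptions.

```python
import numpy as np

def make_triplets(x, n_triplets, rng):
    """Simulate triplet answers: (i, j, k) records that stimulus i was
    judged more similar to j than to k (here decided by true distance)."""
    n = len(x)
    triplets = []
    for _ in range(n_triplets):
        i, j, k = rng.choice(n, size=3, replace=False)
        if abs(x[i] - x[j]) > abs(x[i] - x[k]):
            j, k = k, j                      # reorder so j is the closer one
        triplets.append((i, j, k))
    return triplets

def ordinal_embedding_1d(n, triplets, steps=1000, lr=1.0, margin=0.05, seed=0):
    """Estimate a 1-D embedding y from triplets by minimizing a hinge loss:
    each triplet (i, j, k) asks for d(i, j)^2 + margin < d(i, k)^2."""
    y = np.random.default_rng(seed).normal(size=n)
    for _ in range(steps):
        grad = np.zeros(n)
        for i, j, k in triplets:
            if (y[i] - y[j])**2 + margin > (y[i] - y[k])**2:  # violated
                grad[i] += 2 * (y[k] - y[j])
                grad[j] += 2 * (y[j] - y[i])
                grad[k] += 2 * (y[i] - y[k])
        y -= lr * grad / len(triplets)
    return y
```

The recovered scale is identified only up to shift, scale, and reflection, so a natural evaluation is the fraction of triplet judgments the embedding reproduces rather than a direct comparison of coordinates.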
Highlights

- A novel thresholding method for brain networks based on k-nearest neighbors (kNN)
- kNN applied to resting-state fMRI from a large cohort of healthy subjects (BASE-II)
- kNN-built networks present greater small-world properties than density-thresholded networks
- kNN-built networks present scale-free properties, whereas density-thresholded networks do not
Abstract

In recent years, there has been a massive effort to analyze the topological properties of brain networks. Yet one of the challenging questions in the field is how to construct brain networks from the connectivity values derived from neuroimaging methods. From a theoretical point of view, it is plausible that the brain evolved to minimize the energetic costs of information processing and therefore to maximize efficiency, as well as to redirect its function in an adaptive fashion, that is, to be resilient. A brain network with such features, when characterized using graph analysis, would present small-world and scale-free properties.

In this paper, we focused on how the brain network is constructed by introducing and testing an alternative method: k-nearest neighbors (kNN). In addition, we compared the kNN method with one of the most common methods in neuroscience: the density threshold. We performed our analyses on functional connectivity matrices derived from resting-state fMRI of a large imaging cohort (N = 434) of young and older healthy participants. The topology of the networks was characterized by the graph measures degree, characteristic path length, clustering coefficient, and small-worldness. In addition, we verified whether kNN produces scale-free networks. We showed that networks built by kNN presented advantages over traditional thresholding methods: they had greater small-worldness values (linked to network efficiency) than those derived by means of density thresholds and, moreover, also presented scale-free properties (linked to network resilience), whereas density-thresholded networks did not. A brain network with such properties would have advantages in terms of efficiency, rapid adaptive reconfiguration, and resilience, features of brain networks that are relevant for plasticity and cognition as well as for neurological diseases such as stroke and dementia.
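The two construction schemes compared above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; details such as ranking neighbors by raw correlation value and symmetrizing the kNN graph with a logical OR are assumptions made for the sketch.

```python
import numpy as np

def knn_threshold(C, k):
    """Binarize a symmetric connectivity matrix by connecting each node to
    its k most strongly connected neighbors (largest entries in its row),
    then symmetrizing with a logical OR: an edge survives if either
    endpoint selected it."""
    n = C.shape[0]
    W = C.astype(float).copy()
    np.fill_diagonal(W, -np.inf)            # exclude self-connections
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        A[i, np.argsort(W[i])[-k:]] = True  # k largest entries in row i
    return A | A.T

def density_threshold(C, density):
    """Binarize by keeping the strongest edges until the requested fraction
    of all possible edges is reached (the standard density threshold)."""
    n = C.shape[0]
    iu = np.triu_indices(n, k=1)
    m = int(round(density * len(iu[0])))
    cut = np.sort(C[iu])[-m]                # weight of the m-th strongest edge
    A = C >= cut
    np.fill_diagonal(A, False)
    return A
```

One intuition for why the two methods yield different topologies: kNN guarantees every node a degree of at least k, whereas a global density threshold can leave weakly connected nodes nearly or entirely isolated.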
We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points. Instead, we can only ask an oracle whether the distance between points i and j is smaller than the distance between points i and k. We are concerned with data structures and algorithms to find nearest neighbors based on such comparisons. We focus on a simple yet effective algorithm that recursively splits the space by first selecting two random pivot points and then assigning all other points to the closer of the two (comparison tree). We prove that if the metric space satisfies certain expansion conditions, then with high probability the height of the comparison tree is logarithmic in the number of points, leading to efficient search performance. We also provide an upper bound on the probability of failing to return the true nearest neighbor. Experiments show that the comparison tree is competitive with algorithms that have access to the actual distance values, and requires fewer triplet comparisons than other competitors.
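A minimal sketch of the comparison tree described above, assuming an oracle closer(x, a, b) that answers whether x is nearer to a than to b and never reveals distances. The leaf size and the dict-based tree representation are illustrative choices, not part of the paper.

```python
import random

def build_tree(items, closer, leaf_size=4, rng=None):
    """Recursively split `items` (opaque point ids) by picking two random
    pivots and sending every other item to the closer pivot, as judged by
    the oracle closer(x, a, b) -> True iff d(x, a) < d(x, b)."""
    rng = rng or random.Random(0)
    if len(items) <= leaf_size:
        return {"leaf": items}
    p1, p2 = rng.sample(items, 2)
    rest = [x for x in items if x not in (p1, p2)]
    left = [x for x in rest if closer(x, p1, p2)] + [p1]
    right = [x for x in rest if not closer(x, p1, p2)] + [p2]
    return {"pivots": (p1, p2),
            "left": build_tree(left, closer, leaf_size, rng),
            "right": build_tree(right, closer, leaf_size, rng)}

def query(tree, q, closer):
    """Route the query to the subtree of the closer pivot, then scan the
    leaf with pairwise comparisons to pick the (approximate) nearest item."""
    while "leaf" not in tree:
        p1, p2 = tree["pivots"]
        tree = tree["left"] if closer(q, p1, p2) else tree["right"]
    best = tree["leaf"][0]
    for x in tree["leaf"][1:]:
        if closer(q, x, best):
            best = x
    return best
```

A query costs only one comparison per level plus a linear scan of the leaf, so the logarithmic-height guarantee under the expansion conditions mentioned above translates directly into a logarithmic number of triplet comparisons per search.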