The development of large-scale functional brain networks is a complex, lifelong process that can be investigated using resting-state functional connectivity MRI (rs-fcMRI). In this study, we aimed to decode the developmental dynamics of the whole-brain functional network across seven decades (8–79 years) of the human lifespan. We first used parametric curve fitting to examine linear and nonlinear age effects on the resting human brain, and then combined manifold learning and support vector machine methods to predict individuals' “brain ages” from rs-fcMRI data. We found that age-related changes in interregional functional connectivity exhibited spatially and temporally specific patterns. During brain development from childhood to senescence, functional connections tended to increase linearly in the emotion system and decrease in the sensorimotor system, while quadratic trajectories were observed in functional connections related to higher-order cognitive functions. The complex patterns of age effects on the whole-brain functional network could be effectively represented by a low-dimensional, nonlinear manifold embedded in the functional connectivity space, which uncovered the inherent structure of brain maturation and aging. Regression of manifold coordinates against age further showed that the manifold representation extracted sufficient information from rs-fcMRI data to make predictions about individual brains' functional development levels. Our study not only offers insight into the neural substrates that underlie behavioral and cognitive changes with age, but also provides a possible way to quantitatively describe the typical and atypical developmental progression of human brain function using rs-fcMRI.
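The “manifold learning plus regression” pipeline described above can be sketched with off-the-shelf tools. This is a minimal illustration on synthetic data, not the study's actual pipeline: the subject count, ROI count, Isomap/SVR choices, and the simulated age-dependent coupling are all assumptions made for the example.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins: 60 subjects, 20 ROIs, 100 time points each.
n_subjects, n_rois, n_tp = 60, 20, 100
ages = rng.uniform(8, 79, n_subjects)
iu = np.triu_indices(n_rois, k=1)          # upper triangle of the FC matrix
common = rng.standard_normal(n_tp)         # shared signal across ROIs

def connectivity_features(ts):
    """Vectorize the upper triangle of an ROI-wise correlation matrix."""
    return np.corrcoef(ts)[iu]

def simulate_subject(age):
    """Fake resting-state data whose inter-ROI coupling drifts with age."""
    noise = rng.standard_normal((n_rois, n_tp))
    return connectivity_features(noise + (age / 79.0) * common)

X = np.array([simulate_subject(a) for a in ages])

# Step 1: embed the high-dimensional connectivity patterns into a
# low-dimensional nonlinear manifold.
embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(X)

# Step 2: regress the manifold coordinates against chronological age,
# giving a "brain age" prediction for each subject.
model = SVR(kernel="rbf").fit(embedding, ages)
predicted_brain_age = model.predict(embedding)
```

Predicting on held-out subjects (rather than the training set, as done here for brevity) is what would correspond to estimating an individual's functional development level.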
We consider the distributed statistical learning problem over decentralized systems that are prone to adversarial attacks. This setup arises in many practical applications, including Google's Federated Learning. Formally, we focus on a decentralized system that consists of a parameter server and m working machines; each working machine keeps N/m data samples, where N is the total number of samples. In each iteration, up to q of the m working machines suffer Byzantine faults -- a faulty machine in a given iteration behaves arbitrarily badly against the system and has complete knowledge of the system. Additionally, the set of faulty machines may differ across iterations. Our goal is to design robust algorithms such that the system can learn the underlying true parameter, which is of dimension d, despite the interruption of the Byzantine attacks. In this paper, based on the geometric median of means of the gradients, we propose a simple variant of the classical gradient descent method. We show that our method can tolerate q Byzantine failures provided that 2(1+ε)q ≤ m for an arbitrarily small but fixed constant ε > 0. The parameter estimate converges in O(log N) rounds with an estimation error on the order of max{√(dq/N), √(d/N)}, which is larger than the minimax-optimal error rate √(d/N) of the centralized, failure-free setting by at most a factor of √q. The total computational complexity of our algorithm is O((Nd/m) log N) at each working machine and O(md + qd log³ N) at the central server, and the total communication cost is O(md log N). We further provide an application of our general results to the linear regression problem. A key challenge in the above problem is that Byzantine failures create arbitrary and unspecified dependency among the iterations and the aggregated gradients.
To handle this issue in the analysis, we prove that the aggregated gradient, as a function of model parameter, converges uniformly to the true gradient function.
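The robust aggregation step at the heart of the method above, the geometric median of means of the workers' gradients, can be sketched as follows. This is an illustrative implementation under assumptions of the example's own making (Weiszfeld iterations for the geometric median, a fixed group count, and toy gradient values), not the paper's reference code.

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-8):
    """Weiszfeld's algorithm: the point minimizing the sum of Euclidean
    distances to the given points (robust to a minority of outliers)."""
    y = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

def robust_aggregate(gradients, n_groups):
    """Geometric median of group means of the reported gradients."""
    groups = np.array_split(gradients, n_groups)
    means = np.stack([g.mean(axis=0) for g in groups])
    return geometric_median(means)

# Toy example: 9 honest workers report gradients near (1, 1); 3 Byzantine
# workers report wildly corrupted values. A plain average is ruined, while
# the geometric median of means stays close to the honest gradient.
rng = np.random.default_rng(1)
honest = 1.0 + 0.1 * rng.standard_normal((9, 2))
byzantine = 1e3 * np.ones((3, 2))
grads = np.vstack([honest, byzantine])
g_robust = robust_aggregate(grads, n_groups=4)
```

The parameter server would then take an ordinary gradient step using `g_robust` in place of the naive average.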
Background: Dysfunctional integration of distributed brain networks is believed to underlie schizophrenia, and resting-state functional connectivity analyses of schizophrenia have attracted considerable attention in recent years. Unfortunately, existing functional connectivity analyses of schizophrenia have been mostly limited to linear associations. Objective: The objective of the present study is to evaluate the discriminative power of non-linear functional connectivity and identify its changes in schizophrenia. Method: A novel measure based on the extended maximal information coefficient was introduced to construct non-linear functional connectivity. In conjunction with multivariate pattern analysis, the new functional connectivity successfully discriminated schizophrenic patients from healthy controls with a higher accuracy rate than the linear measure. Result: We found that the strength of the identified non-linear functional connections involved in the classification increased in patients with schizophrenia, the opposite of the trend observed for their linear counterparts. Further functional network analysis revealed that the changes in non-linear and linear connectivity have similar, but not identical, spatial distributions in the human brain. Conclusion: The classification results suggest that non-linear functional connectivity provides useful discriminative power for the diagnosis of schizophrenia, and the inverse but spatially similar changes between the non-linear and linear measures may reflect compensatory mechanisms and the complex neuronal synchronization underlying the symptoms of schizophrenia.
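The workflow above, building a non-linear connectivity matrix and feeding it to a multivariate pattern classifier, can be sketched on synthetic data. Note the hedges: mutual information is used here as a generic stand-in for the study's extended maximal information coefficient (eMIC), and the cohort sizes, ROI count, and the injected quadratic coupling are all assumptions made for the example.

```python
import numpy as np
from itertools import combinations
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def nonlinear_connectivity(ts):
    """Pairwise non-linear dependence between ROI time series.

    Mutual information serves as a stand-in for eMIC: both detect
    dependencies (e.g. quadratic) that Pearson correlation misses.
    """
    n = ts.shape[0]
    return np.array([
        mutual_info_regression(ts[i][:, None], ts[j], random_state=0)[0]
        for i, j in combinations(range(n), 2)
    ])

# Synthetic cohort: 5 ROIs, 150 time points per subject. "Patients" carry
# an extra quadratic (hence non-linear) coupling between ROI 0 and ROI 1.
def simulate_subject(patient):
    ts = rng.standard_normal((5, 150))
    if patient:
        ts[1] += 0.8 * ts[0] ** 2
    return nonlinear_connectivity(ts)

labels = np.array([0] * 12 + [1] * 12)            # 12 controls, 12 patients
X = np.array([simulate_subject(p) for p in labels])

# Multivariate pattern analysis: cross-validated linear SVM classification.
acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=4).mean()
```

The same scaffold with Pearson correlations in place of `nonlinear_connectivity` gives the linear baseline that the study compares against.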
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, where non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults; agents suffering Byzantine faults behave arbitrarily. We propose two learning rules.
- In our first update rule, each agent updates its local beliefs as (up to normalization) the product of (1) the likelihood of the cumulative private signals and (2) the weighted geometric average of the beliefs of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all the non-faulty agents asymptotically agree on the true state almost surely. For the case when every agent is failure-free, we show that (with high probability) each agent's beliefs on the wrong hypotheses decrease at rate O(exp(−Ct²)), where t is the number of iterations and C is a constant.
- In general, when agents may be adversarial, the network identifiability condition specified for the above learning rule scales poorly in the number of state candidates m. In addition, the computational complexity per agent per iteration of this learning rule is forbiddingly high. Thus, we propose a modification of our first learning rule, whose complexity per iteration per agent is O(m²n log n), where n is the number of agents in the network. We show that this modified learning rule works under a much weaker network identifiability condition. In addition, this new condition is independent of m.
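One round of the first update rule (likelihood times weighted geometric average of neighbor beliefs, then normalization) can be sketched in the failure-free case. This is a toy illustration under assumptions of the example's own choosing: a fully connected three-agent network, uniform weights, and fixed signal likelihoods favoring the true hypothesis.

```python
import numpy as np

def update_beliefs(beliefs, likelihoods, weights):
    """One round of the first learning rule (failure-free sketch).

    beliefs:     (n, m) current belief vectors over m hypotheses
    likelihoods: (n, m) likelihood of each agent's latest private signal
    weights:     (n, n) row-stochastic; weights[i, j] > 0 iff agent j is
                 an incoming neighbor of agent i (or j == i)
    """
    # Weighted geometric average = exp of weighted average of log-beliefs.
    log_geo = weights @ np.log(beliefs)
    unnorm = likelihoods * np.exp(log_geo)
    return unnorm / unnorm.sum(axis=1, keepdims=True)

# Toy run: 3 agents, 2 hypotheses; hypothesis 1 is true, so each agent's
# signals are more likely under hypothesis 1 than hypothesis 0.
n, m = 3, 2
W = np.full((n, n), 1.0 / n)          # fully connected, uniform weights
beliefs = np.full((n, m), 1.0 / m)    # uniform priors
L = np.tile([0.4, 0.6], (n, 1))       # per-round signal likelihoods

for _ in range(50):
    beliefs = update_beliefs(beliefs, L, W)
```

With these likelihoods, each agent's belief on the wrong hypothesis shrinks geometrically by a factor of 0.4/0.6 per round, so after 50 rounds all agents place essentially all mass on the true state.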