Practical statistical analysis of diffusion tensor images is considered, focusing primarily on methods that use metrics based on Euclidean distances between powers of diffusion tensors. First, we describe a family of anisotropy measures based on a scale-invariant power-Euclidean metric, which are useful for visualisation. Some properties of the measures are derived and practical considerations are discussed, with some examples. Second, we discuss weighted Procrustes methods for diffusion tensor interpolation and smoothing, and we compare methods based on different metrics on a set of examples as well as analytically. We establish a key relationship between the principal-square-root-Euclidean metric and the size-and-shape Procrustes metric on the space of symmetric positive semi-definite tensors. We explain, both analytically and by experiment, why the size-and-shape Procrustes metric may be preferred in practical tasks of interpolation, extrapolation, and smoothing, especially when observed tensors are degenerate or when a moderate degree of tensor swelling is desirable. Third, we introduce regularisation methodology, which is demonstrated to be useful for highlighting features of prior interest and potentially for segmentation. Finally, we compare several metrics on a dataset of human-brain diffusion-weighted MRI, and point out similarities between several of the non-Euclidean metrics but important differences with the commonly used Euclidean metric.
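The power-Euclidean metric mentioned above has a simple concrete form: raise each symmetric positive semi-definite tensor to a power α through its eigendecomposition, then compare the powered tensors in the Frobenius norm, d_α(S1, S2) = (1/α)·||S1^α − S2^α||_F. The Python sketch below illustrates this under stated assumptions (the function names and the NumPy dependency are illustrative, not from the original work); α = 1/2 gives the principal-square-root-Euclidean metric discussed in the abstract, and α = 1 recovers the ordinary Euclidean metric.

```python
import numpy as np

def spd_power(S, alpha):
    # Matrix power via eigendecomposition; valid for symmetric
    # positive semi-definite S (alpha > 0 tolerates zero eigenvalues).
    w, V = np.linalg.eigh(S)
    w = np.clip(w, 0.0, None)          # guard tiny negative round-off
    return (V * w**alpha) @ V.T        # V diag(w^alpha) V^T

def power_euclidean_distance(S1, S2, alpha=0.5):
    # d_alpha(S1, S2) = (1/alpha) * ||S1^alpha - S2^alpha||_F
    return np.linalg.norm(spd_power(S1, alpha) - spd_power(S2, alpha)) / alpha

# Two toy 3x3 diffusion tensors:
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
B = np.eye(3)
d = power_euclidean_distance(A, B, alpha=0.5)
```

Note that α = 1 reduces the formula to the plain Frobenius distance ||S1 − S2||_F, which is one way to see that the power-Euclidean family interpolates away from the Euclidean metric as α decreases.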
A novel analog computational network is presented for solving NP-complete constraint-satisfaction problems, e.g. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on quadratic energy cost functions, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function, for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.
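The linear-programming-network idea referenced above can be illustrated with a small simulation. The sketch below is a minimal illustration, not the authors' network: the toy LP, step size, and penalty weight are all assumptions. It performs gradient descent on a linear cost plus a quadratic penalty on violated constraints, mimicking the continuous dynamics of a Tank–Hopfield-style analog circuit; the cost itself stays linear, which is why no quadratic interconnection matrix between cost terms is needed.

```python
import numpy as np

def lp_network(c, A, b, steps=20000, dt=1e-3, penalty=50.0):
    # Gradient flow on E(x) = c.x + (penalty/2) * sum(max(0, A x - b)^2),
    # a penalty-function analogue of an analog LP network's dynamics.
    x = np.zeros(len(c))
    for _ in range(steps):
        viol = np.clip(A @ x - b, 0.0, None)   # constraint violations
        grad = c + penalty * (A.T @ viol)
        x -= dt * grad
    return x

# Toy LP: maximise x1 + x2 subject to x1 + x2 <= 1, x >= 0
c = np.array([-1.0, -1.0])                     # minimise -x1 - x2
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = lp_network(c, A, b)                        # settles near [0.5, 0.5]
```

The penalty formulation only enforces constraints approximately (the fixed point slightly overshoots the boundary), which mirrors the behaviour of analog constraint amplifiers rather than an exact simplex-style solver.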
In the one-way trading problem, a seller has some product to be sold to a sequence σ of buyers u_1, u_2, ..., u_σ arriving online, and the seller must decide, for each u_i, the amount of product to be sold to u_i at the then-prevailing market price p_i. The objective is to maximize the seller's revenue. We note that most previous algorithms for the problem need to impose some artificial upper bound M and lower bound m on the market prices, and the seller needs to know either the values of M and m, or their ratio M/m, at the outset. Moreover, the performance guarantees provided by these algorithms depend only on M and m, and are often too loose; for example, given a one-way trading algorithm with competitive ratio Θ(log(M/m)), its actual performance can be significantly better when the ratio of the actual highest price to the actual lowest price is significantly smaller than M/m. This paper gives a one-way trading algorithm that does not impose any bounds on market prices and whose performance guarantee depends directly on the input. In particular, we give a class of one-way trading algorithms such that, for any positive integer h and any positive number ϵ, we have an algorithm A_{h,ϵ} with competitive ratio O(log r* (log^(2) r*) ... (log^(h−1) r*)(log^(h) r*)^(1+ϵ)) when r*, the ratio of the highest market price p* = max_i p_i to the first price p_1, is large and satisfies log^(h) r* > 1, where log^(i) x denotes the application of the logarithm function i times to x; otherwise, A_{h,ϵ} has a constant competitive ratio Γ_h. We also show that our algorithms are near-optimal by showing that, given any positive integer h and any one-way trading algorithm A, we can construct a sequence of buyers σ with log^(h) r* > 1 such that the ratio between the optimal revenue and the revenue obtained by A is at least Ω(log r* (log^(2) r*) ... (log^(h−1) r*)(log^(h) r*)).
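For contrast with the bound-free guarantee above, the classical bounded setting can be sketched. When m and M are known in advance, a standard threshold scheme sells an equal share of the product the first time the price crosses each power-of-two level between m and M, which yields an O(log(M/m))-competitive guarantee of exactly the kind the abstract notes can be loose. The Python sketch below is illustrative only (the function name, banding by powers of two, and the final dump at the last price are assumptions, not the paper's algorithm).

```python
import math

def threshold_trade(prices, m, M):
    # Partition [m, M] into k price bands [m*2^i, m*2^(i+1)); sell 1/k
    # of the (unit) holding the first time the price reaches each
    # threshold m*2^i, i = 1..k; dump leftovers at the final price.
    k = max(1, math.ceil(math.log2(M / m)))
    revenue, sold_bands = 0.0, set()
    for p in prices:
        band = int(math.floor(math.log2(p / m))) if p > m else 0
        band = min(band, k)
        for i in range(1, band + 1):
            if i not in sold_bands:
                sold_bands.add(i)
                revenue += p / k          # sell 1/k units at price p
    revenue += (k - len(sold_bands)) / k * prices[-1]
    return revenue

# With prices 1, 4, 2, 8 and m=1, M=8 (k=3): bands 1 and 2 are sold
# at price 4, band 3 at price 8, giving revenue 16/3, versus an
# optimal revenue of 8 (sell everything at the peak).
r = threshold_trade([1.0, 4.0, 2.0, 8.0], 1.0, 8.0)
```

Note how the guarantee of this scheme depends only on M and m: if prices never rise, revenue collapses to the final price, illustrating why a bound depending directly on the realised ratio r* (as in the paper) can be much sharper.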