2016
DOI: 10.1109/jstsp.2016.2578878
An Informed Multitask Diffusion Adaptation Approach to Study Tremor in Parkinson's Disease

Cited by 24 publications (17 citation statements) · References 37 publications
“…In the incremental update step, first the nodes exchange local data {y_{k,i}, H_{k,i}, R_{k,i}} with their neighbors and at every time instant i compute ψ_{k,i} ← x̂_{k,i|i−1} and P_{k,i} ← P_{k,i|i−1}. Then, each node performs KF with the available data to obtain the intermediate estimates ψ_{k,i} as given by (2).…”
Section: B. Diffusion Kalman Filtering Algorithm
confidence: 99%
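The incremental step described in this excerpt can be sketched as follows: each node starts from its local prediction and folds in the {y, H, R} data shared by its neighbors through standard Kalman measurement updates. The function name, data layout, and loop structure here are illustrative assumptions, not the cited paper's implementation.

```python
import numpy as np

def incremental_update(x_prior, P_prior, neighbor_data):
    """One incremental (measurement-update) step at a single node of a
    diffusion Kalman filter. Starts from the local prediction
    psi <- x_{k,i|i-1}, P <- P_{k,i|i-1}, then applies a standard KF
    update for each neighbor's (y, H, R) triple. Illustrative sketch."""
    psi, P = x_prior.copy(), P_prior.copy()
    for y, H, R in neighbor_data:
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        psi = psi + K @ (y - H @ psi)          # intermediate estimate
        P = (np.eye(len(psi)) - K @ H) @ P     # covariance update
    return psi, P
```

With a single noisy identity observation of a zero prior, the intermediate estimate is pulled most of the way toward the measurement, as expected for a small measurement covariance R.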
“…To evaluate the relative variance combination rule (52), the nodes need to know the variance products, µ²_{lk}, of their neighbors, which are often not available beforehand. Therefore, an adaptive combination rule is desirable, where individual nodes learn their combination coefficients (52) using the available data.…”
Section: Adaptive Solution
confidence: 99%
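One common way to realize the adaptive rule this excerpt calls for is to track a smoothed estimate of each neighbor's error variance from the data and weight neighbors inversely to it. The function names, the smoothing factor `nu`, and the exponential-averaging estimator are assumptions for illustration, not the exact recursion (52) of the cited paper.

```python
import numpy as np

def update_variance_estimates(gamma2, psi, w_prev, nu=0.2):
    """Exponentially smoothed estimate of each neighbor's squared
    estimation error -- a data-driven stand-in for the unavailable
    variance products mu^2_{lk}. Illustrative notation."""
    for l, psi_l in psi.items():
        gamma2[l] = (1 - nu) * gamma2[l] + nu * np.sum((psi_l - w_prev) ** 2)
    return gamma2

def combination_weights(gamma2):
    """Relative-variance rule: weight each neighbor inversely to its
    estimated error variance, normalized so the weights sum to one."""
    inv = {l: 1.0 / g for l, g in gamma2.items()}
    total = sum(inv.values())
    return {l: v / total for l, v in inv.items()}
```

A neighbor whose intermediate estimates are four times noisier receives a quarter of the weight, so the combined estimate leans on the more reliable neighbors.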
“…The cost at node i, i.e., f_i(x_i), may be some squared error or the negative log-likelihood (the former can be regarded as a special case of the latter when the noise is Gaussian) with respect to the local data observed by node i. The link cost g_{ij}(x_i, x_j) for a link (i, j) can be used to enforce similarity between neighbor nodes, e.g., ‖x_i − x_j‖²₂ in multitask adaptive networks in [19], [21].…”
Section: A. The Statement of the Problem
confidence: 99%
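A minimal sketch of this cost structure, assuming squared-error local costs and the quadratic link cost ‖x_i − x_j‖²₂ named in the excerpt. The function `network_cost`, the regularization weight `eta`, and the data layout are hypothetical, introduced only to make the objective concrete.

```python
import numpy as np

def network_cost(x, data, links, eta=0.5):
    """Multitask network objective: a per-node squared-error cost
    f_i(x_i) plus a link cost eta * ||x_i - x_j||^2 that promotes
    similarity between neighboring nodes' task vectors. Sketch only."""
    cost = 0.0
    for i, (H, y) in data.items():
        cost += np.sum((y - H @ x[i]) ** 2)        # local cost f_i(x_i)
    for i, j in links:
        cost += eta * np.sum((x[i] - x[j]) ** 2)   # link cost g_ij(x_i, x_j)
    return cost
```

With two nodes that each fit their own data exactly but hold different task vectors, all of the residual cost comes from the similarity penalty on the link.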
“…Note that the objective function in (14) is a convex quadratic function. Hence, the necessary and sufficient condition for optimality of problem (14) is that the gradient of the objective function vanishes. The gradient of the objective function, which is denoted as J_k^n(T), with respect to x_n and v_{n,i} can be computed as follows:…”
Section: Updating x and v
confidence: 99%
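The optimality argument in this excerpt, that a convex quadratic is minimized exactly where its gradient vanishes, can be illustrated on a generic quadratic J(x) = ½ xᵀAx − bᵀx with A positive definite. This is a stand-in for J_k^n(T), whose actual form is not given here.

```python
import numpy as np

def minimize_quadratic(A, b):
    """For J(x) = 0.5 x^T A x - b^T x with A positive definite, the
    gradient A x - b vanishing is necessary and sufficient for
    optimality, so the minimizer is the solution of A x = b."""
    x_star = np.linalg.solve(A, b)     # stationary point
    grad = A @ x_star - b              # should be (numerically) zero
    return x_star, grad
```

Solving the linear system recovers the unique minimizer, and the residual gradient at that point is zero up to floating-point precision, which is what makes the closed-form updates for x_n and v_{n,i} possible.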