2016
DOI: 10.1007/978-3-319-46475-6_46

Normalized Cut Meets MRF

Cited by 19 publications (44 citation statements). References 44 publications.

“…The first regularization term leverages the tendency of neighboring nodes in a graph to share the same label. This regularization was realized using a Markov Random Field (MRF) (Tang et al., 2016; Kohli et al., 2009). The combination of Ncut and MRF has so far been used only to process static images (Tang et al., 2016), not to parcellate the brain by processing BOLD signal time courses in rs-fMRI data.…”
Section: Discussion
confidence: 99%
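A rough sketch of the kind of joint objective these statements describe, combining the normalized-cut grouping cost with a Potts-style MRF regularizer (the notation $S^k$, $\ell_p$, $\gamma$, and the neighborhood system $\mathcal{N}$ is illustrative, not taken verbatim from the cited papers):

$$
E(\ell) \;=\; \sum_{k=1}^{K} \frac{\operatorname{cut}\!\left(S^k,\, V \setminus S^k\right)}{\operatorname{assoc}\!\left(S^k,\, V\right)}
\;+\; \gamma \sum_{(p,q) \in \mathcal{N}} w_{pq}\,[\ell_p \neq \ell_q],
$$

where $S^k = \{p : \ell_p = k\}$ is the set of nodes with label $k$ and $w_{pq}$ is the affinity between neighbors $p$ and $q$. The second term lowers the energy whenever neighboring nodes share a label, which is exactly the "neighbors tend to share the same label" pattern the quote refers to.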
“…Distance as a strict constraint may not be suitable for brain network parcellation, since some networks (e.g., the default mode network) consist of several disjoint regions. A Markov Random Field (MRF) defined over a graph (Tang et al., 2016; Kohli et al., 2009) and balanced by a tuning parameter r was used to cluster spatially contiguous voxels into a network, and r was adjusted so that disjoint regions could be assigned to the same network. The weighted MRF was one of the regularizations adopted in RNcut.…”
Section: Regularized-Ncut (RNcut)
confidence: 99%
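As a concrete illustration of the energy sketched above, here is a minimal Python/NumPy sketch of an Ncut cost plus a weighted MRF (Potts) penalty balanced by a tuning parameter r. The function names, the toy graph, and the neighbor encoding are assumptions for illustration, not the RNcut authors' implementation:

import numpy as np

def ncut_cost(W, labels):
    # Normalized cut: sum over clusters of cut(S, V\S) / assoc(S, V).
    degree = W.sum(axis=1)
    cost = 0.0
    for k in np.unique(labels):
        mask = labels == k
        assoc = degree[mask].sum()            # edge weight touching cluster k
        within = W[np.ix_(mask, mask)].sum()  # edge weight inside cluster k
        if assoc > 0:
            cost += (assoc - within) / assoc  # cut(S, V\S) / assoc(S, V)
    return cost

def mrf_penalty(W, labels, neighbors):
    # Weighted Potts term: sum of w_pq over neighbor pairs with different labels.
    p, q = neighbors  # paired index arrays, one entry per neighboring pair
    return float(np.sum(W[p, q] * (labels[p] != labels[q])))

def rncut_energy(W, labels, neighbors, r):
    # Ncut grouping cost plus MRF smoothness, balanced by the tuning parameter r.
    return ncut_cost(W, labels) + r * mrf_penalty(W, labels, neighbors)

# Toy example: a 4-node chain split into two clusters.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
neighbors = (np.array([0, 1, 2]), np.array([1, 2, 3]))
labels = np.array([0, 0, 1, 1])
print(rncut_energy(W, labels, neighbors, r=0.5))  # 2/3 + 0.5 * 1.0

Increasing r strengthens the pressure toward spatially contiguous clusters; as the quote notes, r can be relaxed so that disjoint regions (e.g., the default mode network) may still receive the same label when their affinities support it.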
“…On the other hand, we also present the performance gap between weak and full mask training to provide a fairer comparison in Table 2. In Table 2, the results for full mask training (64.1%), GrabCut [22], NormalizedCut [23], and KernelCut [24] are taken from [23].…”
Section: Methods
confidence: 99%
“…mIoU and gap between full and weak supervision:

Method               | mIoU | Gap
With Full Masks [23] | 64.1 | -
GrabCut [22]         | 55.5 | 8.6
NormalizedCut [23]   | 58.7 | 5.4
KernelCut [24]       | 59…  | …

…training (61.6%) by 2.4% and 1.38% at σ_FH = 0.8 and σ_FHBest, respectively. Two example images from the generated set, i.e.…”
Section: Methods
confidence: 99%