2016
DOI: 10.3389/fninf.2016.00035
MINC 2.0: A Flexible Format for Multi-Modal Images

Abstract: It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the …
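The abstract's "strong inbuilt mechanisms for data provenance" refers to MINC's convention of carrying a global history attribute in which each processing tool appends one line recording when and how it was invoked. A minimal pure-Python sketch of that convention (the exact line format and the helper name `append_history` are illustrative assumptions, not the official specification):

```python
import datetime


def append_history(history: str, command: str) -> str:
    """Append a timestamped entry to a MINC-style history string.

    Each processing step adds one line of the form
    '<timestamp>>>> <command line>', mimicking (not reproducing exactly)
    the provenance records MINC tools write into a file's history
    attribute.
    """
    stamp = datetime.datetime.now().strftime("%a %b %d %H:%M:%S %Y")
    entry = f"{stamp}>>> {command}"
    if not history:
        return entry
    return history.rstrip("\n") + "\n" + entry


# Hypothetical usage: record two processing steps in sequence.
h = append_history("", "mincresample in.mnc out.mnc -like template.mnc")
h = append_history(h, "nu_correct out.mnc corrected.mnc")
```

Reading such a log back gives a per-file audit trail of every command applied, which is the provenance property the abstract highlights.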

Cited by 66 publications (48 citation statements)
References 16 publications
“…An average template was created from all images in the study using iterative linear and nonlinear image registration as previously described. Pydpiper, MINC tools, and ANTs were used for nonlinear registration. The centroid of each tumor was calculated and rendered using ITK‐SNAP's “Convert3D” tool, which contains a list of functions for 3D image manipulation and format conversion.…”
Section: Methods
confidence: 99%
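The centroid computation this excerpt mentions is, in essence, the mean voxel coordinate of a binary tumor mask mapped through the image's voxel-to-world affine. A minimal NumPy sketch of that idea (the function name and signature are illustrative, not Convert3D's actual API):

```python
import numpy as np


def mask_centroid_world(mask: np.ndarray, affine: np.ndarray) -> np.ndarray:
    """Centroid of a binary 3D mask in world coordinates.

    `mask` is a 3D array where nonzero voxels belong to the structure;
    `affine` is a 4x4 voxel-to-world matrix, as used by NIfTI/MINC-style
    tooling. Returns the world-space (x, y, z) centroid.
    """
    voxels = np.argwhere(mask > 0)       # (N, 3) array of voxel indices
    centroid_vox = voxels.mean(axis=0)   # mean index along each axis
    # Apply the affine: rotate/scale the voxel centroid, then translate.
    return affine[:3, :3] @ centroid_vox + affine[:3, 3]
```

With an identity affine the world centroid equals the mean voxel index; a diagonal affine of 2 mm spacing simply doubles each coordinate.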
“…All raw images were then preprocessed using the minc-bpipe-library (https://github.com/CobraLab/minc-bpipe-library; Sadedin, Pope, & Oshlack, 2012; Vincent et al., 2016).…”
Section: Image Analysis
confidence: 99%
“…All acquired T1-weighted images underwent visual quality assessment and were excluded if excessive motion or scanner artifacts were observed. All raw images were then preprocessed using the minc-bpipe-library (https://github.com/CobraLab/minc-bpipe-library; Sadedin, Pope, & Oshlack, 2012; Vincent et al., 2016). This pipeline uses a "clean_and_center" stage to uniformize the direction cosines and set the zero-point of the scan to the center of the image, followed by a bias field correction for contrast inhomogeneity using the N4ITK algorithm (Tustison et al., 2010), and a brain extraction step to isolate the brain from non-brain tissues based on a nonlocal segmentation technique (BEaST; Eskildsen et al., 2012).…”
Section: Image Analysis
confidence: 99%
“…(https://github.com/thomshaw92/nifti_normalise), a NIfTI implementation of 'mincnorm' from the Medical Imaging NetCDF (MINC) toolkit (Vincent et al., 2016) that normalises image intensities between two percent-critical thresholds, removing outlying intensity values. If multiple repetitions of the dedicated T2w scan are available, these scans are non-linearly realigned to reduce motion artefacts and increase their sharpness, as in Shaw et al. (2019).…”
Section: Preprocessing and Cross-Sectional Processing
confidence: 99%
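The normalization this excerpt describes — clamping intensities to percentile thresholds and rescaling, so outlying values cannot dominate the intensity range — can be sketched in NumPy as below. The function name, the default 1st/99th percentiles, and the [0, 100] output range are assumptions for illustration; they are not taken from `mincnorm` or `nifti_normalise` themselves.

```python
import numpy as np


def percentile_normalise(
    img: np.ndarray,
    lower_pct: float = 1.0,
    upper_pct: float = 99.0,
    out_max: float = 100.0,
) -> np.ndarray:
    """Clamp intensities to two percentile thresholds, then rescale.

    Values below the lower percentile or above the upper percentile are
    clipped, so outliers no longer stretch the intensity range; the
    clipped values are then mapped linearly onto [0, out_max].
    """
    lo, hi = np.percentile(img, [lower_pct, upper_pct])
    clipped = np.clip(img, lo, hi)
    return (clipped - lo) / (hi - lo) * out_max
```

Because the extremes are clipped before rescaling, the output always spans exactly [0, out_max] regardless of how extreme the original outliers were.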