Markov networks are frequently used in the sciences to represent conditional independence relationships underlying observed variables arising from a complex system. It is often of interest to understand how an underlying network differs between two conditions. In this paper, we develop a methodology for performing valid statistical inference for the difference between parameters of Markov networks in a high-dimensional setting where the number of observed variables is allowed to be larger than the sample size. Our proposal is based on the regularized Kullback-Leibler Importance Estimation Procedure, which allows us to learn the parameters of the differential network directly, without requiring separate or joint estimation of the individual Markov network parameters. This enables applications in which the individual networks are not sparse, such as networks that contain hub nodes, but the differential network is sparse. We prove that our estimator is regular and that its distribution can be well approximated by a normal distribution under a wide range of data-generating processes; in particular, it is not sensitive to model selection mistakes. Furthermore, we develop a new testing procedure for equality of Markov networks based on a max-type statistic, together with a valid bootstrap procedure that approximates the quantiles of the test statistic. The performance of the methodology is illustrated through extensive simulations and real data examples.
In patients with dense breasts or at high risk of breast cancer, dynamic contrast enhanced MRI (DCE-MRI) is a highly sensitive diagnostic tool. However, its specificity is highly variable and sometimes low; quantitative measurements of contrast uptake parameters may improve specificity and mitigate this issue. To improve diagnostic accuracy, data need to be captured at high spatial and temporal resolution. While many methods exist to accelerate MRI temporal resolution, not all are optimized to capture breast DCE-MRI dynamics. We propose a novel, flexible, and powerful framework for the reconstruction of highly undersampled DCE-MRI data: enhancement-constrained acceleration (ECA). Enhancement-constrained acceleration assumes that enhancement is smooth over small time scales, and uses this assumption to estimate points on smooth enhancement curves over short time intervals at each voxel. This method is tested in silico with physiologically realistic virtual phantoms, simulating state-of-the-art ultrafast acquisitions at 3.5s temporal resolution reconstructed at 0.25s temporal resolution (demo code available here). Virtual phantoms were developed from real patient data and parametrized in continuous time with arterial input function (AIF) models and lesion enhancement functions. Enhancement-constrained acceleration was compared to standard ultrafast reconstruction in estimating the bolus arrival time and initial slope of enhancement from reconstructed images. We found that the ECA method reconstructed images at 0.25s temporal resolution with no significant loss in image fidelity, a 4x reduction in the error of bolus arrival time estimation in lesions (p < 0.01), and an 11x error reduction in blood vessels (p < 0.01). Our results suggest that ECA is a powerful and versatile tool for breast DCE-MRI.
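The core idea of a smoothness constraint can be illustrated with a simplified toy model. The sketch below, which is not the authors' ECA implementation (ECA enforces smoothness during k-space reconstruction), estimates a densely sampled curve from sparse samples by penalizing the discrete curvature of the fine-grid curve; the function name `smooth_upsample` and all parameters are illustrative assumptions.

```python
import numpy as np

def smooth_upsample(t_obs, y_obs, t_fine, lam=1.0):
    """Estimate a densely sampled enhancement curve from sparse samples by
    penalizing the second difference (discrete curvature) of the fine-grid
    curve. Toy illustration of a temporal smoothness constraint only; it is
    not the ECA reconstruction, which operates on undersampled k-space."""
    n = len(t_fine)
    # Sampling matrix: each observation picks the nearest fine-grid point.
    A = np.zeros((len(t_obs), n))
    for i, t in enumerate(t_obs):
        A[i, np.argmin(np.abs(t_fine - t))] = 1.0
    # Second-difference operator: rows apply the [1, -2, 1] stencil.
    D2 = np.diff(np.eye(n), n=2, axis=0)
    # Ridge-type normal equations: (A'A + lam * D2'D2) f = A'y.
    f = np.linalg.solve(A.T @ A + lam * (D2.T @ D2), A.T @ y_obs)
    return f
```

For example, coarse samples taken every 1s of a smooth uptake curve can be upsampled to a 0.25s grid; the curvature penalty fills the unobserved points with the smoothest curve consistent with the data.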
Markov networks are frequently used in the sciences to represent conditional independence relationships underlying observed variables arising from a complex system. It is often of interest to understand how an underlying network differs between two conditions. In this paper, we develop methods for comparing a pair of high-dimensional Markov networks where we allow the number of observed variables to increase with the sample sizes. By taking the density ratio approach, we are able to learn the network difference directly and avoid estimating the individual graphs. Our methods are thus applicable even when the individual networks are dense, as long as their difference is sparse. We prove finite-sample Gaussian approximation error bounds for the estimator we construct under significantly weaker assumptions than are typically required for model selection consistency. Furthermore, we propose bootstrap procedures for estimating quantiles of a max-type statistic based on our estimator, and show how they can be used to test the equality of two Markov networks or construct simultaneous confidence intervals. The performance of our methods is demonstrated through extensive simulations. The scientific usefulness is illustrated with an analysis of a new fMRI data set.
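To make the density-ratio idea concrete: for two Gaussian Markov networks with precision matrices Theta1 and Theta2, the ratio of densities depends only on the difference Delta = Theta1 - Theta2, so Delta can be fit by minimizing an empirical KLIEP-type loss with an L1 penalty. The sketch below is a minimal illustration under that Gaussian assumption, using plain proximal gradient descent; the function names and step/penalty parameters are assumptions, not the paper's algorithm or tuning.

```python
import numpy as np

def kliep_loss(delta, X1, X2):
    """Negative KLIEP objective for the density-ratio model
    r(x) ∝ exp(-x' delta x / 2), where delta = Theta1 - Theta2."""
    q1 = 0.5 * np.einsum("ij,jk,ik->i", X1, delta, X1)  # x' delta x / 2, sample 1
    q2 = 0.5 * np.einsum("ij,jk,ik->i", X2, delta, X2)
    # Log-normalizer estimated on sample 2; model term averaged on sample 1.
    return np.log(np.mean(np.exp(-q2))) + np.mean(q1)

def kliep_grad(delta, X1, X2):
    q2 = 0.5 * np.einsum("ij,jk,ik->i", X2, delta, X2)
    w = np.exp(-q2)
    w /= w.sum()                      # self-normalized weights on sample 2
    S1 = X1.T @ X1 / X1.shape[0]      # empirical second moment, sample 1
    S2w = X2.T @ (w[:, None] * X2)    # weighted second moment, sample 2
    return 0.5 * (S1 - S2w)

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def fit_difference(X1, X2, lam=0.05, step=0.05, iters=300):
    """Proximal gradient (ISTA) for: kliep_loss(delta) + lam * ||delta||_1.
    Learns the difference of the two precision matrices directly, without
    estimating either individual network."""
    p = X1.shape[1]
    delta = np.zeros((p, p))
    for _ in range(iters):
        delta = soft_threshold(delta - step * kliep_grad(delta, X1, X2), step * lam)
        delta = 0.5 * (delta + delta.T)  # keep the estimate symmetric
    return delta
```

Because only the difference is penalized, the individual precision matrices never need to be sparse or even estimated, which is the point emphasized in the abstract.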
Algorithmic stability is a concept from learning theory that expresses the degree to which changes to the input data (e.g., removal of a single data point) may affect the outputs of a regression algorithm. Knowing an algorithm's stability properties is often useful for many downstream applications; for example, stability is known to lead to desirable generalization properties and predictive inference guarantees. However, many modern algorithms currently used in practice are too complex for a theoretical analysis of their stability properties, and thus we can only attempt to establish these properties through an empirical exploration of the algorithm's behavior on various data sets. In this work, we lay out a formal statistical framework for this kind of black-box testing without any assumptions on the algorithm or the data distribution, and establish fundamental bounds on the ability of any black-box test to identify algorithmic stability.
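The kind of perturbation experiment such a black-box test builds on can be sketched in a few lines. The helper below (a hypothetical name, not the paper's procedure) measures how often deleting a single training point moves a fitted algorithm's prediction by more than a tolerance eps; it estimates, rather than certifies, stability, which is precisely the gap the paper's bounds formalize.

```python
import numpy as np

def loo_instability(fit, X, y, x_test, eps):
    """Fraction of leave-one-out deletions that change the prediction at
    x_test by more than eps. `fit` is any black-box regression algorithm:
    it takes (X, y) and returns a predict function. Purely empirical: a
    low value suggests, but does not prove, stability."""
    base = fit(X, y)(x_test)
    n = len(y)
    flips = 0
    for i in range(n):
        keep = np.arange(n) != i          # drop the i-th data point
        pred = fit(X[keep], y[keep])(x_test)
        if abs(pred - base) > eps:
            flips += 1
    return flips / n
```

For instance, with the trivial mean predictor `fit = lambda X, y: (lambda x: y.mean())`, the returned fraction counts how many points individually shift the sample mean by more than eps.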