Although conventional coronary angiography (CCA) has been the reference standard for diagnosing coronary artery disease over the past decades, computed tomography angiography (CTA) has rapidly emerged and is now widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen, in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenoses on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotations. A database consisting of 48 multicenter, multivendor cardiac CTA datasets with corresponding reference standards is described and made available. Algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website at http://coronary.bigr.nl/stenoses/.
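As an illustration only (the abstract does not state which metrics the framework uses), a common way to compare an automatic lumen segmentation against an expert's manual annotation is a voxel-overlap score such as the Dice coefficient, sketched here on binary masks:

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (1 = lumen voxel)."""
    auto = auto_mask.astype(bool)
    ref = reference_mask.astype(bool)
    intersection = np.logical_and(auto, ref).sum()
    total = auto.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: each mask marks 3 voxels as lumen, agreeing on 2 of them.
a = np.array([1, 1, 1, 0])
b = np.array([1, 1, 0, 1])
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) -> 0.667
```

A score of 1.0 means perfect overlap with the reference; inter-observer Dice between two experts gives a natural ceiling against which "precision similar to that obtained by experts" can be judged.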
The demand for high-performance computing (HPC) has been increasing since the invention of computing technology, leading to sophisticated multi-core and many-core processors. Graphics processing units (GPUs) have emerged as an important innovation of the many-core era, featuring a large number of processing cores. The GPU acts as a computational accelerator that can significantly reduce computation time, as it offers massive parallelism for high-end applications such as graphics rendering. However, increasing the resources results in higher power consumption and heat dissipation, which has become a challenging problem for modern HPC units. On the other hand, because of the dynamic nature of workloads, much of the parallelism offered by these many-core processors is often underutilized. An ideal system would be smart enough to utilize resources efficiently and save power when the workload is light. Reducing resources dynamically has direct implications for system performance; however, when the workload is light, reducing resources does not harm performance but rather saves power with little to no trade-off in the overall throughput of the system. In this paper, a smart, power- and performance-efficient resource management controller for general-purpose GPU (GPGPU) architectures is presented. The proposed controller, based on a feedback mechanism, continuously monitors the current frequencies of the central processing unit (CPU) and GPU, the number of active CPU cores, and the utilization of the CPU and GPU. On the basis of the collected data, the controller, which uses a type-2 fuzzy system as its optimizing mechanism, balances performance against power consumption. The results are evaluated against various benchmarks on an NVIDIA TK1 GPU kit; by using dynamic voltage and frequency scaling (DVFS) and core gating, up to a 47% reduction in power consumption is achieved.
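The feedback loop described above can be sketched in simplified form. This is not the authors' implementation: a plain rule base with triangular-style memberships stands in for the paper's type-2 fuzzy optimizer, and all names, thresholds, and frequency steps are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SystemState:
    cpu_util: float    # 0..1, measured CPU utilization
    gpu_util: float    # 0..1, measured GPU utilization
    cpu_freq: int      # current CPU frequency step (0 = lowest)
    gpu_freq: int      # current GPU frequency step
    active_cores: int  # number of ungated CPU cores

# Fuzzy-style membership functions for "low" and "high" load (illustrative).
def low(x: float) -> float:
    return max(0.0, 1.0 - x / 0.5)

def high(x: float) -> float:
    return max(0.0, (x - 0.5) / 0.5)

def control_step(s: SystemState, max_cores: int = 4, max_freq: int = 10) -> SystemState:
    """One feedback iteration: apply DVFS and core gating based on fuzzy load."""
    # Rule 1: high CPU load -> raise CPU frequency and wake a core;
    #         low CPU load  -> lower frequency and gate a core (keep one on).
    if high(s.cpu_util) > low(s.cpu_util):
        s.cpu_freq = min(max_freq, s.cpu_freq + 1)
        s.active_cores = min(max_cores, s.active_cores + 1)
    else:
        s.cpu_freq = max(0, s.cpu_freq - 1)
        s.active_cores = max(1, s.active_cores - 1)
    # Rule 2: same DVFS policy for the GPU (no core gating modeled here).
    if high(s.gpu_util) > low(s.gpu_util):
        s.gpu_freq = min(max_freq, s.gpu_freq + 1)
    else:
        s.gpu_freq = max(0, s.gpu_freq - 1)
    return s

# Light load: the controller throttles down and gates a core to save power.
s = control_step(SystemState(cpu_util=0.2, gpu_util=0.1,
                             cpu_freq=5, gpu_freq=5, active_cores=4))
print(s.cpu_freq, s.gpu_freq, s.active_cores)  # 4 4 3
```

Running such a step periodically yields the behavior the paper targets: resources shrink under light load (saving power) and grow back under heavy load (preserving throughput), with the fuzzy memberships smoothing the decision boundary.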