Artificial Intelligence (AI) is one of the biggest megatrends driving the fourth industrial revolution. Although these technologies promise business sustainability as well as product and process quality, ever-changing market demands, the complexity of the technologies, and fair concerns about privacy impede the broad application and reuse of AI models across industry. To lower the entry barriers for these technologies and unleash their full potential, the knowlEdge project will develop a new generation of AI methods, systems, and data management infrastructure. Accordingly, as part of the knowlEdge project we propose several major innovations in the areas of data management, data analytics, and knowledge management, including (i) a set of AI services that allows edge deployments to be used as computational and live-data infrastructure, together with a continuous-learning execution pipeline on the edge, (ii) a digital twin of the shop floor able to test AI models, (iii) a data management framework deployed along the edge-to-cloud continuum that ensures data quality, privacy, and confidentiality, (iv) Human-AI Collaboration and Domain Knowledge Fusion tools that let domain experts inject their experience into the system, (v) a set of standardisation mechanisms for the exchange of trained AI models from one context to another, and (vi) a knowledge
One of the purposes of HPC benchmarks is to identify limitations and bottlenecks in hardware. This functionality is particularly influential when assessing performance on emerging tasks, whose nature and requirements may not yet be fully understood. In this setting, a proper benchmark can steer the design of next-generation hardware by identifying those requirements, and can quicken the deployment of novel solutions. With the increasing popularity of deep learning workloads, benchmarks for this family of tasks have been gaining popularity, particularly for image-based tasks, which rely on the most well-established family of deep learning models: Convolutional Neural Networks (CNNs). Significantly, most benchmarks for CNNs use low-resolution and fixed-shape (LR&FS) images. While this sort of input has been very successful for certain purposes, it is insufficient for some domains of special interest (e.g., medical image diagnosis or autonomous driving), where higher-resolution and variable-shape (HR&VS) images are required to avoid loss of information and deformation. As of today, it is still unclear how image resolution and shape variability affect the nature of the problem from a computational perspective. In this paper we assess the differences between training with LR&FS and HR&VS images, as a means to justify the importance of building benchmarks specific to the latter. Our results on three different HPC clusters show significant variations in time, resources, and memory management, highlighting the differences between LR&FS and HR&VS image deep learning.
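To make the LR&FS versus HR&VS distinction concrete, the sketch below contrasts the two batching regimes. It is a minimal illustration under an assumption the abstract does not state (a PyTorch-style data pipeline; the function names are hypothetical): fixed-shape images stack directly into uniform batches, whereas variable-shape images must be padded to the largest image in each batch, so tensor shapes, and therefore memory footprints, change from batch to batch. This is one plausible mechanism behind the memory-management differences the abstract reports.

```python
# Minimal sketch, assuming a PyTorch-style pipeline (the paper names no
# specific framework). Function names are illustrative, not the authors'.
import torch

def collate_fixed(images):
    # LR&FS: every image shares one shape, so a plain stack suffices and
    # every batch tensor has identical dimensions across the whole run.
    return torch.stack(images)

def collate_variable(images):
    # HR&VS: zero-pad each (C, H, W) image to the largest height/width in
    # the batch, so batch shapes (and memory footprints) vary per batch.
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    batch = torch.zeros(len(images), images[0].shape[0], max_h, max_w)
    for i, img in enumerate(images):
        batch[i, :, :img.shape[1], :img.shape[2]] = img
    return batch

if __name__ == "__main__":
    fixed = [torch.randn(3, 224, 224) for _ in range(4)]
    varied = [torch.randn(3, h, w)
              for h, w in [(512, 768), (1024, 640), (900, 900), (768, 1280)]]
    print(collate_fixed(fixed).shape)      # torch.Size([4, 3, 224, 224])
    print(collate_variable(varied).shape)  # torch.Size([4, 3, 1024, 1280])
```

A design note on why this matters computationally: with fixed shapes, every training step allocates identically sized buffers and kernel choices can be tuned once, while per-batch shape variation forces repeated reallocation and re-tuning, which is consistent with the time and memory-management variations the paper measures.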