Background: Current multi-petaflop supercomputers are powerful systems, but they present challenges for problems requiring large machine learning workflows. Complex algorithms running at system scale, often with different patterns that require disparate software packages and complex data flows, cause difficulties in assembling and managing large experiments on these machines. Results: This paper presents a workflow system that makes progress on scaling machine learning ensembles; this first release specifically targets ensembles of deep neural networks that address problems in cancer research across the atomistic, molecular, and population scales. The initial release of the application framework, which we call CANDLE/Supervisor, addresses the problem of hyper-parameter exploration of deep neural networks. Conclusions: Initial results on DOE systems at ORNL, ANL, and NERSC (Titan, Theta, and Cori, respectively) demonstrate both scaling and multi-platform execution.
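The hyper-parameter exploration pattern named above can be sketched as a simple grid sweep. This is a hypothetical minimal illustration, not CANDLE/Supervisor's actual API: the function names and the toy objective are stand-ins for training a deep neural network on a supercomputer.

```python
import itertools

# Minimal sketch of a hyper-parameter sweep: enumerate a Cartesian grid of
# settings, score each one, and keep the best. In a real ensemble each
# evaluation would be a full neural-network training run dispatched to a node.

def sweep(space, objective):
    """Evaluate `objective` on every point of the hyper-parameter grid."""
    keys = sorted(space)
    results = []
    for values in itertools.product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        results.append((objective(params), params))
    return min(results, key=lambda r: r[0])  # lowest score wins

space = {"learning_rate": [1e-2, 1e-3], "batch_size": [32, 64]}
toy_loss = lambda p: p["learning_rate"] + 1.0 / p["batch_size"]  # stand-in
best_score, best_params = sweep(space, toy_loss)
```

At scale, the point of a workflow system is precisely that these independent evaluations can be farmed out concurrently across thousands of nodes rather than run in the sequential loop shown here.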
An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated under the SHARP framework, tightly coupling neutron transport and thermal-hydraulics physics. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Built on a unified component-based architecture, these existing codes can be coupled through a mesh-data backplane and a flexible, coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled-physics analysis of a reactor on heterogeneous geometry, reducing the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative sodium-cooled fast reactor demonstration problems that demonstrate the usability of the SHARP framework.
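The coupling strategy described above can be illustrated with a fixed-point (Picard) iteration between two single-physics solvers exchanging fields until the coupled state stops changing. This is a hedged sketch: the scalar "solvers" below are toy stand-ins, not SHARP's neutronics or thermal-hydraulics codes, and a real backplane exchanges full mesh fields rather than scalars.

```python
# Operator-split coupling sketch: alternate single-physics solves, passing
# each solver the other's latest field, until successive iterates agree.

def couple(power0, temp0, solve_neutronics, solve_th, tol=1e-10, max_iter=100):
    power, temp = power0, temp0
    for _ in range(max_iter):
        new_power = solve_neutronics(temp)  # transport solve given temperature
        new_temp = solve_th(new_power)      # thermal-hydraulics given power
        if abs(new_power - power) < tol and abs(new_temp - temp) < tol:
            return new_power, new_temp
        power, temp = new_power, new_temp
    raise RuntimeError("coupling did not converge")

# Toy feedback: power falls slightly as temperature rises, and vice versa.
power, temp = couple(1.0, 300.0,
                     solve_neutronics=lambda T: 1.0 - 1e-4 * (T - 300.0),
                     solve_th=lambda P: 300.0 + 50.0 * P)
```

The iteration converges here because the combined feedback loop is a contraction; weakly coupled physics behaves similarly, while strongly coupled problems may need relaxation or a tighter (Newton-style) coupling strategy.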
The quality of wireless links suffers from time-varying channel degradations such as interference, flat fading, and frequency-selective fading. Current radios are limited in their ability to adapt to these channel variations because they are designed with fixed values for most system parameters, such as frame length, error control, and processing gain. The values for these parameters are usually a compromise between the requirements of worst-case channel conditions and the need for low implementation cost. Therefore, in benign channel conditions these commercial radios can consume more battery energy than needed to maintain a desired link quality, while in a severely degraded channel they can consume energy without providing any quality of service (QoS). While techniques for adapting radio parameters to channel variations have been studied to improve link performance, in this paper they are applied to minimize battery energy. Specifically, an adaptive radio is designed that adapts the frame length, error control, processing gain, and equalization to different channel conditions while minimizing battery energy consumption. Experimental measurements and simulation results are presented to illustrate the adaptive radio's energy savings.
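The frame-length adaptation mentioned above can be made concrete with a standard back-of-the-envelope model, which is an illustration rather than the paper's measured radio: with bit-error rate p and H header bits per frame, a frame of L total bits arrives intact with probability (1 - p)^L, so the expected transmit energy per delivered payload bit scales as L / ((L - H) * (1 - p)^L). The header size and candidate range below are assumed values.

```python
# Long frames amortize the header but fail (and must be resent) more often;
# short frames survive errors but waste energy on overhead. Adapting L to the
# current channel picks the best point on that trade-off.

def energy_per_payload_bit(L, p, H=48):
    """Relative energy cost per successfully delivered payload bit."""
    return L / ((L - H) * (1.0 - p) ** L)

def best_frame_length(p, H=48, candidates=range(64, 4097, 8)):
    """Frame length minimizing energy per payload bit at bit-error rate p."""
    return min(candidates, key=lambda L: energy_per_payload_bit(L, p, H))
```

In this model a clean channel (p = 1e-5) favors frames in the low thousands of bits, while a noisy one (p = 1e-3) pushes the optimum down to a few hundred bits, which is the qualitative behavior an adaptive radio exploits.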
LAGER is an integrated computer-aided design (CAD) system for algorithm-specific integrated circuit (IC) design, targeted at applications such as speech processing, image processing, telecommunications, and robot control. LAGER provides user interfaces at the behavioral, structural, and physical levels and allows easy integration of new CAD tools. LAGER consists of a behavioral mapper and a silicon assembler. The behavioral mapper maps the behavior onto a parameterized structure to produce microcode and parameter values. The silicon assembler then translates the filled-out structural description into a physical layout; with the aid of simulation tools, the user can fine-tune the data path by iterating this process. The silicon assembler can also be used without the behavioral mapper for high-sample-rate applications. A number of algorithm-specific ICs designed with LAGER have been fabricated and tested; as examples, a robot arm controller chip and a real-time image segmentation chip are described.
The lack of high-level design tools hampers the widespread adoption of adaptive computing systems. Application developers have to master a wide range of functions, from high-level architecture design down to the timing of actual control and data signals. The process is extremely cumbersome and error-prone, making it difficult for adaptive computing to enter mainstream computing. In this paper we describe DEFACTO, an end-to-end design environment aimed at bridging the gap in tools for adaptive computing by bringing together parallelizing compiler technology and synthesis techniques. Introduction: Adaptive computing systems consisting of configurable computing logic can offer significant performance advantages over conventional processors because they can be tailored to the particular computational needs of a given application (e.g., template-based matching, Monte Carlo simulation, and string-matching algorithms). Unfortunately, developing programs that incorporate configurable computing units (CCUs) is extremely cumbersome, demanding that software developers also assume the role of hardware designers. At present, developing applications on most such systems requires low-level VHDL coding and complex management of communication and control. While a few application development tools are being designed, these have been narrowly focused on a single application or a specific configurable architecture [1]. The absence of general-purpose, high-level programming tools for adaptive computing applications has hampered the widespread adoption of this technology; currently, this area is accessible only to a very small collection of specially trained individuals. This paper describes DEFACTO, an end-to-end design environment for developing applications mapped to adaptive computing architectures. A user of DEFACTO develops an application in a high-level programming language such as C, possibly augmented by pragmas that specify variable arithmetic precision and timing requirements.
The system maps this application to an adaptive computing architecture that consists of multiple FPGAs acting as coprocessors to a conventional general-purpose processor. Other inputs to the system include a description of the architecture (e.g., the number of FPGAs, communication time, and bandwidth) and application-specific information such as representative program inputs. DEFACTO leverages parallelizing compiler technology based on the Stanford SUIF compiler. While much existing compiler technology is directly applicable to this domain, adaptive computing environments present new challenges to a compiler, particularly the requirement of defining or selecting the functionality of the target architecture. Thus, a design environment for adaptive computing must also leverage CAD research to manage the mapping of configurable computations to actual hardware. DEFACTO combines compiler technology, CAD environments, and techniques specially developed for adaptive computing in a single system. The remainder of the paper is organized into four sections and a conclusion. In ...
To design an inherently safe sodium-cooled fast reactor (SFR), it must be demonstrated that the net reactivity coefficient is negative, such that any event that causes the core power to increase initially will be quickly followed by a response that tends to decrease the core power and return the reactor to a safe operating condition. This response in the core reactivity is caused by several mechanisms (which may compete with each other), including coolant density changes, the fuel Doppler effect, and changes in core geometry. Simulating the latter mechanism, changes in core geometry, is the focus of the multi-physics demonstration in this report. In particular, the focus is on radial core expansion caused by the motion of fuel assemblies in response to thermal expansion.
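The net-reactivity argument above can be illustrated by summing the individual feedback contributions for a power-driven perturbation. This is a hedged sketch: the coefficient values and units below are made up for illustration and are not SFR design data.

```python
# Net reactivity as the sum of competing feedback mechanisms named above:
# fuel Doppler, coolant density, and radial core expansion. Inherent safety
# requires the summed response to a power rise to be negative.

def net_reactivity(dT_fuel, dT_coolant, dR_core,
                   alpha_doppler, alpha_coolant, alpha_expansion):
    """Net reactivity change (pcm) from three feedback mechanisms."""
    return (alpha_doppler * dT_fuel
            + alpha_coolant * dT_coolant
            + alpha_expansion * dR_core)

# A power rise heats the fuel and coolant and expands the core radially.
rho = net_reactivity(dT_fuel=100.0, dT_coolant=50.0, dR_core=0.5,
                     alpha_doppler=-0.2,     # pcm/K (illustrative)
                     alpha_coolant=+0.1,     # pcm/K (can be positive in SFRs)
                     alpha_expansion=-20.0)  # pcm/cm (illustrative)
```

Note that the coolant term is deliberately positive here: individual mechanisms may compete, and the safety case rests on the sign of the sum, which is why the geometry-change (radial expansion) contribution simulated in this report matters.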