Dayside transients, such as hot flow anomalies, foreshock bubbles, magnetosheath jets, flux transfer events, and surface waves, are frequently observed upstream of the bow shock, in the magnetosheath, and at the magnetopause. They play a significant role in solar wind-magnetosphere-ionosphere coupling. Foreshock transient phenomena, associated with variations in the solar wind dynamic pressure, deform the magnetopause and in turn generate field-aligned currents (FACs) connected to the auroral ionosphere. Solar wind dynamic pressure variations and transient phenomena at the dayside magnetopause drive magnetospheric ultra-low-frequency (ULF) waves, which can play an important role in the dynamics of Earth's radiation belts. These transient phenomena and their geoeffects have been investigated using coordinated in situ spacecraft observations, spacecraft-borne imagers, ground-based observations, and numerical simulations. Multi-mission observations from Cluster, THEMIS, Geotail, and MMS allow the motion and time evolution of transient phenomena to be tracked in detail across different spatial and temporal scales, whereas ground-based experiments can observe the ionospheric projections of transient magnetopause phenomena, such as magnetopause waves driven by hot flow anomalies or flux transfer events produced by bursty reconnection, across their full longitudinal and latitudinal extent. Magnetohydrodynamic (MHD), hybrid, and particle-in-cell (PIC) simulations are powerful tools for modeling dayside transient phenomena. This paper provides a comprehensive review of the present understanding of dayside transient phenomena at Earth and other planets, their geoeffects, and outstanding questions.
Web browsing is an activity that billions of mobile users perform on a daily basis. Battery life is a primary concern for many mobile users, who often find their phone has died at the most inconvenient time. Heterogeneous multi-core architectures are a solution for energy-efficient processing. However, current mobile web browsers rely on the operating system to exploit the underlying hardware; the operating system has no knowledge of individual web contents, which often leads to poor energy efficiency. This paper describes an automatic approach to render mobile web workloads for performance and energy efficiency. It achieves this by developing a machine-learning-based approach to predict which processor should run the web rendering engine and at what frequencies the processors should operate. Our predictor learns offline from a set of training web workloads. The built predictor is then integrated into the browser to predict the optimal processor configuration at runtime, taking into account the web workload characteristics and the optimisation goal: load time, energy consumption, or a trade-off between the two. We evaluate our approach on a representative ARM big.LITTLE mobile architecture using the 500 most popular webpages. Our approach achieves 80% of the performance delivered by an ideal predictor. Compared to the Linux heterogeneous multi-processing scheduler, we obtain average improvements of 45%, 63.5%, and 81% for load time, energy consumption, and the energy-delay product, respectively.
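The runtime predictor described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the workload features (DOM nodes, style rules, JavaScript size), the training points, and the frequency adjustments are all made-up placeholders, and a simple nearest-neighbour rule stands in for the learned model.

```python
# Hypothetical sketch: predict a processor configuration (core cluster,
# clock frequency) from web-workload features via 1-nearest-neighbour.
# Features, training data, and frequency bounds are illustrative only.

TRAIN = [
    # (dom_nodes, style_rules, js_kbytes) -> (cluster, freq_mhz)
    ((200,   50,  10), ("LITTLE",  800)),  # light page: small core, low clock
    ((1500, 400, 300), ("big",    1800)),  # heavy page: big core, high clock
    ((800,  200, 120), ("big",    1200)),  # medium page: big core, mid clock
]

def predict_config(features, goal="energy"):
    """Return (cluster, freq_mhz) for running the web rendering engine."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, (cluster, freq) = min(TRAIN, key=lambda t: dist(t[0], features))
    if goal == "load_time":    # bias toward performance
        freq = min(freq + 400, 2000)
    elif goal == "energy":     # bias toward saving power
        freq = max(freq - 200, 600)
    return cluster, freq

print(predict_config((1400, 380, 280), goal="load_time"))
```

In the paper's setting the prediction would be made once per page load, before the rendering engine starts, so the cost of the lookup itself is negligible.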
The web has become a ubiquitous application development platform for mobile systems. Yet, web access on mobile devices remains an energy-hungry activity. Prior work in the field mainly focuses on the initial page-loading stage but fails to exploit the opportunities for energy-efficiency optimization while the user is interacting with a loaded page. This paper presents a novel approach for performing energy optimization for interactive mobile web browsing. At the heart of our approach is a set of machine learning models that estimate, at runtime, the frames per second for a given user interaction input when the computation-intensive web render engine runs on a specific processor core at a given clock speed. We use the learned predictive models as a utility function to quickly search for the optimal processor setting, carefully trading response time for reduced energy consumption. We integrate our techniques into the open-source Chromium browser and apply them to two representative mobile user events: scrolling and pinching (i.e., zooming in and out). We evaluate the developed system on the landing pages of the 100 most popular websites and two big.LITTLE heterogeneous mobile platforms. Our extensive experiments show that the proposed approach reduces system-wide energy consumption by over 36% on average and by up to 70%. This translates to an over 17% improvement in energy efficiency over a state-of-the-art event-based web browser scheduler, with significantly fewer quality-of-service violations.
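The search loop that uses the FPS models as a utility function can be sketched like this. It is a minimal illustration under assumed values: the candidate settings, the relative power numbers, and the stand-in FPS predictor are invented here, whereas the paper uses learned regression models and measured platform data.

```python
# Illustrative sketch: given a (stand-in) learned FPS model, pick the
# lowest-power core/frequency setting whose predicted frame rate stays
# above a quality-of-service floor. All numbers are made-up placeholders.

SETTINGS = [  # (cluster, freq_mhz, relative_power)
    ("LITTLE",  600, 0.5),
    ("LITTLE", 1000, 0.9),
    ("big",    1200, 2.0),
    ("big",    1800, 3.5),
]

def predicted_fps(cluster, freq_mhz, workload_cost):
    # Stand-in for the learned regressor: frame rate grows with clock
    # speed, big cores roughly double throughput, heavier interactions
    # (larger workload_cost) render more slowly.
    scale = 2.0 if cluster == "big" else 1.0
    return scale * freq_mhz / workload_cost

def choose_setting(workload_cost, fps_floor=30.0):
    """Least-power setting meeting the FPS floor, or the fastest if none do."""
    feasible = [s for s in SETTINGS
                if predicted_fps(s[0], s[1], workload_cost) >= fps_floor]
    if not feasible:
        return max(SETTINGS, key=lambda s: predicted_fps(s[0], s[1], workload_cost))
    return min(feasible, key=lambda s: s[2])

print(choose_setting(workload_cost=40.0))
```

Because the candidate space (a handful of clusters times a handful of frequencies) is tiny, an exhaustive scan like this is cheap enough to run per user event.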
Recent advances in deep neural networks (DNNs) make them attractive for embedded systems. However, DNN inference can take a long time on resource-constrained computing devices. Model compression techniques can address the computation cost of deep inference on embedded devices. These techniques are highly attractive because they do not rely on specialized hardware or on computation offloading, which is often infeasible due to privacy concerns or high latency. However, it remains unclear how model compression techniques perform across a wide range of DNNs. To design efficient embedded deep learning solutions, we need to understand their behaviors. This work develops a quantitative approach to characterize model compression techniques on a representative embedded deep learning architecture, the NVIDIA Jetson TX2. We perform extensive experiments on 11 influential neural network architectures from the image classification and natural language processing domains. We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and what the implications of compression are for model storage size, inference time, energy consumption, and performance metrics. We demonstrate that there are opportunities to achieve fast deep inference on embedded systems, but one must carefully choose the compression settings. Our results provide insights into when and how to apply model compression techniques, and guidelines for designing efficient embedded deep learning systems.
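The two techniques studied, pruning and data quantization, can be illustrated in miniature on a flat list of weights. This is a toy sketch for intuition only (real deployments prune structured groups of weights and quantize whole tensors with calibrated scales), and the sample weight values are invented.

```python
# Toy illustration of the two compression techniques characterized in
# the study: magnitude-based pruning and symmetric linear quantization.
# Operates on a plain list of floats; sample values are made up.

def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of the weights."""
    k = int(len(weights) * sparsity)
    smallest = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k]
    out = list(weights)
    for i in smallest:
        out[i] = 0.0
    return out

def quantize(weights, bits=8):
    """Map weights to signed integers of the given width and back.

    The round trip shows the precision loss; storing the integers plus
    one scale factor is what shrinks the model. Assumes a nonzero max.
    """
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in weights]

w = [0.9, -0.05, 0.4, -0.8, 0.02, 0.6]
print(prune(w, sparsity=0.5))
print(quantize(w, bits=8))
```

The trade-off the paper measures falls out directly: pruning shrinks storage only if the zeros are exploited by a sparse format or the hardware, while quantization shrinks every weight uniformly but loses precision as `bits` decreases.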
Web technology underpins many interactive mobile applications. However, energy-efficient mobile web interaction remains an outstanding challenge. Given the increasing diversity and complexity of mobile hardware, any practical optimization scheme must work for a wide range of users, mobile platforms, and web workloads. This paper presents CAMEL, a novel energy optimization system for mobile web interactions. CAMEL leverages machine learning techniques to develop a smart, adaptive scheme that judiciously trades performance for reduced power consumption. Unlike prior work, CAMEL directly models how a given web content affects user expectations and uses this to guide energy optimization. It goes further by employing transfer learning and conformal prediction to tune a previously learned model in the end-user environment and improve it over time. We apply CAMEL to Chromium and evaluate it on four distinct mobile systems with 1,000 testing webpages and 30 users. Compared to four state-of-the-art web-event optimizers, CAMEL delivers 22% more energy savings with 49% fewer violations of the quality of user experience, and exhibits orders of magnitude less overhead when targeting a new computing environment.
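The conformal-prediction ingredient mentioned above can be sketched with the standard split-conformal recipe: held-out calibration errors yield a quantile that turns any point prediction into an interval with a coverage guarantee, which a system like CAMEL can use to flag inputs where the base model is unreliable and needs tuning. This is a generic textbook sketch, not CAMEL's actual implementation; the calibration values in the test are invented.

```python
# Minimal split-conformal sketch (illustrative, not CAMEL's code):
# attach a ~(1 - alpha) confidence interval to a point prediction
# using absolute errors measured on a held-out calibration set.

import math

def conformal_interval(calib_errors, prediction, alpha=0.1):
    """Return (lo, hi) covering the true value with ~1-alpha probability.

    calib_errors: absolute prediction errors on calibration samples.
    """
    errs = sorted(calib_errors)
    # Split-conformal quantile index: ceil((n + 1) * (1 - alpha)) - 1
    k = math.ceil((len(errs) + 1) * (1 - alpha)) - 1
    q = errs[min(k, len(errs) - 1)]
    return prediction - q, prediction + q
```

A wide interval signals low confidence in the base model for that input, which is a natural trigger for falling back to a conservative processor setting or collecting the sample for retraining in the end-user environment.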