The astonishing development of diverse hardware platforms is twofold: on one side, the race toward exascale performance for big-data processing and management; on the other, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system, developed in 2009 and first introduced in 2010, that enables a completely transparent layer between GPUs and virtual machines. This paper presents the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management, and scheduling. Thanks to its new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs, on local workstations, computing clusters, and distributed cloud appliances.
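Architecturally, such a transparent layer is realized by a split design: a front end in the guest intercepts GPU calls and forwards them to a back end running where the physical GPU lives. The sketch below illustrates that remoting idea only; the class name, wire format, and routine names are illustrative placeholders, not the actual GVirtuS implementation (which is written in C++ against the real CUDA runtime API).

```python
import pickle
import socket
import struct

def _recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("back end closed the connection")
        buf += chunk
    return buf

class FrontEnd:
    """Hypothetical guest-side stub: marshals a GPU routine name and its
    arguments, ships them to a back end on the GPU-equipped host, and
    returns the result, so the application sees an apparently local GPU."""

    def __init__(self, host, port=9999):
        self.sock = socket.create_connection((host, port))

    def execute(self, routine, *args):
        payload = pickle.dumps((routine, args))
        # Length-prefixed frame: 4-byte big-endian size, then the payload.
        self.sock.sendall(struct.pack("!I", len(payload)) + payload)
        size, = struct.unpack("!I", _recv_exact(self.sock, 4))
        status, result = pickle.loads(_recv_exact(self.sock, size))
        if status != 0:
            raise RuntimeError(f"{routine} returned error {status}")
        return result

# The application on the VM would call the stub as if the GPU were local:
# fe = FrontEnd("gpu-host.local")
# dev_ptr = fe.execute("cudaMalloc", 1 << 20)  # allocate 1 MiB on the GPU
```

Keeping the stub oblivious to the transport is what lets such a design span workstations, clusters, and cloud appliances: the socket here could be swapped for any other host-guest communication channel without touching the call-forwarding logic.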
Low-power devices are usually highly constrained in terms of the CPU computing power, memory, and GPGPU resources needed to run real-time applications. In this paper, we describe RAPID, a complete framework suite for computation offloading that helps low-power devices overcome these limitations. RAPID supports CPU and GPGPU computation offloading on Linux and Android devices. Moreover, the framework implements lightweight, secure data transmission for the offloading operations. We present the architecture of the framework, showing the integration of the CPU and GPGPU offloading modules. We show through extensive experiments that the overhead introduced by the security layer is negligible. We present the first benchmark results showing that Java/Android GPGPU code offloading is possible. Finally, we show the adoption of GPGPU offloading in BioSurveillance, a commercial real-time face recognition application. The results show that, thanks to RAPID, BioSurveillance is being successfully adapted to run on low-power devices. The proposed framework is highly modular and exposes a rich application programming interface to developers, making it highly versatile while hiding the complexity of the underlying networking layer.
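As a concrete illustration of the offloading pattern, a runtime can pick between local and remote execution by tracking observed execution times. The decision logic and names below are hypothetical, not the RAPID API (which targets Java/Android rather than Python); the point is only to show the shape of such a controller.

```python
import time

class OffloadController:
    """Illustrative offload decision logic: run a task locally or ship it
    to a remote execution service, preferring whichever side the observed
    timings predict to be faster."""

    def __init__(self, remote):
        self.remote = remote     # proxy to a remote execution service
        self.local_ms = {}       # per-task moving average, local runs
        self.remote_ms = {}      # per-task moving average, remote runs

    def run(self, name, func, *args):
        # An unknown remote time defaults to 0, so the remote side is
        # probed on the first call; afterwards the averages decide.
        use_remote = (self.remote is not None and
                      self.remote_ms.get(name, 0.0) <
                      self.local_ms.get(name, float("inf")))
        start = time.monotonic()
        if use_remote:
            result = self.remote.execute(name, *args)
            table = self.remote_ms
        else:
            result = func(*args)
            table = self.local_ms
        elapsed = (time.monotonic() - start) * 1000.0
        # Exponential moving average of observed execution times (ms).
        table[name] = 0.8 * table.get(name, elapsed) + 0.2 * elapsed
        return result
```

A real framework would fold energy budget, connectivity, and data-transfer size into the decision as well; the moving-average timer is the smallest version of that idea.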
While "everything as a sensor" is a typical data-gathering pattern in Internet of Things (IoT) applications, in contexts such as smart cities, smart factories, and precision agriculture, among others, the same technique has yet to be exploited to its full potential in the coastal marine environment. In maritime scenarios, the application of IoT and of networks of distributed sensors and actuators is still limited, even though marine electronics and extreme networking technologies have been developed in this area for decades. In this paper, we first introduce the concept of the Internet of Floating Things (IoFT), which extends the IoT to the maritime scenario. Next, we present our latest implementation of the DYNAMO (Distributed leisure Yachts sensor Network for Atmosphere and Marine Observations) system, a framework for coastal data collection from sensors and devices deployed on marine equipment. To demonstrate the importance of IoFT data collection in a real-world environmental science context, we consider a scientific workflow for coastal water quality. The selected application focuses on predicting the spatial and temporal patterns of sea pollutants and their possible presence and time of persistence near the mussel farm areas in the Bay of Pozzuoli, Italy. The pollutants are modeled as simple Lagrangian particles, so the ocean dynamics play an important role in the simulation. Our results show that integrating crowdsourced bathymetry data into the workflow's numerical model setup improves the accuracy of the final results, allowing for a more detailed spatial distribution pattern of the sea currents driving the Lagrangian tracers.
INDEX TERMS: Internet of Floating Things, marine data crowdsourcing, food quality, mussel farm.
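Since the simulated currents drive the tracers, a worked example of one advection step may clarify why the ocean dynamics, and hence the bathymetry feeding the current model, matter. The forward-Euler scheme, the uniform current, and the coordinates below are illustrative assumptions, not the workflow's actual numerical model.

```python
import numpy as np

def advect(lon, lat, u, v, dt):
    """One forward-Euler step for passive Lagrangian tracers.

    lon, lat : particle positions in degrees
    u, v     : eastward/northward current at each particle (m/s)
    dt       : time step in seconds
    """
    R = 6_371_000.0  # mean Earth radius (m)
    dlat = (v * dt / R) * (180.0 / np.pi)
    # Longitude displacement shrinks with the cosine of the latitude.
    dlon = (u * dt / (R * np.cos(np.radians(lat)))) * (180.0 / np.pi)
    return lon + dlon, lat + dlat

# Example: two particles drifting 24 hours in a uniform 0.2 m/s
# eastward current, starting roughly in the Bay of Pozzuoli.
lon = np.array([14.10, 14.12])
lat = np.array([40.82, 40.80])
u = np.full_like(lon, 0.2)
v = np.zeros_like(lon)
for _ in range(24):                 # 24 one-hour steps
    lon, lat = advect(lon, lat, u, v, dt=3600.0)
```

In the real workflow u and v would be interpolated from the hydrodynamic model output at each particle position and time step, which is exactly where a better-resolved bathymetry changes the resulting tracer pattern.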
The minimisation of the total cost of ownership is hard for the owners of large-scale computing systems to achieve without negatively affecting the quality of service for users. Modern datacenters, often part of distributed environments, appear to be "elastic", i.e., they are able to shrink or enlarge their pool of local physical or virtual resources, also by recruiting them from private/public clouds. This increases the degree of dynamicity, making infrastructure management more and more complex. Here, we report some advances in the realisation of an adaptive scheduling controller (ASC) which, by interacting with the datacenter resource manager, allows effective and efficient usage of resources. In particular, we focus on the mathematical formalisation of the ASC's kernel, which makes it possible to dynamically configure the datacenter resource manager in a suitable way. The described formalisation is based on a probabilistic approach that, starting from both the historical resource usage and the current user requests for datacenter resources, identifies a suitable probability distribution for queue times with the aim of performing short-term forecasting. The case study is the SCoPE datacenter at the University of Naples Federico II.
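As a rough sketch of the probabilistic idea, one can fit a parametric distribution to historical queue waits and read off a quantile as a short-term forecast. The log-normal choice and the synthetic data below are assumptions for illustration, not the ASC's actual formalisation.

```python
import numpy as np
from scipy import stats

# Historical queue wait times in seconds (synthetic placeholder data;
# in practice these would come from the resource manager's logs).
rng = np.random.default_rng(42)
waits = rng.lognormal(mean=5.0, sigma=1.0, size=500)

# Fit a log-normal distribution, a common choice for wait times,
# with the location pinned at zero (waits cannot be negative).
shape, loc, scale = stats.lognorm.fit(waits, floc=0.0)
dist = stats.lognorm(shape, loc=loc, scale=scale)

# Short-term forecast: the wait a new job should exceed only 10% of
# the time, usable as a conservative queue-time estimate.
p90 = dist.ppf(0.90)
print(f"90th-percentile predicted queue time: {p90:.0f} s")
```

Refitting the distribution on a sliding window of recent submissions would let such an estimator track the changing load that an elastic datacenter experiences.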