A suitable planting pattern and irrigation strategy are essential for optimizing winter wheat yield and water use efficiency (WUE). This study aimed to evaluate the impact of planting pattern and irrigation frequency on the grain yield and WUE of winter wheat. During the 2013–2014 and 2014–2015 winter wheat growing seasons in the North China Plain, we determined the effects of planting pattern and irrigation frequency on tiller number, grain yield, and WUE. The two planting patterns tested were wide-precision and conventional cultivation. Each planting pattern was combined with three irrigation regimes: 120 mm applied at the jointing stage; 60 mm applied at each of the jointing and heading stages; and 40 mm applied at each of the jointing, heading, and milking stages. Tiller number was significantly higher in the wide-precision planting pattern than in the conventional-cultivation planting pattern. The highest grain yields and WUE were observed in the wide-precision planting pattern when irrigation was applied at the jointing stage (120 mm) or at the jointing and heading stages (60 mm each). These results could be attributed to higher tiller numbers as well as to reduced water consumption resulting from lower irrigation frequency. In both growing seasons, applying 60 mm of water at the jointing and heading stages produced the highest grain yield among the treatments. Based on our results, for winter wheat production in semi-humid regions we recommend a wide-precision planting pattern with 60 mm of irrigation at both the jointing and heading stages.
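As a minimal sketch of the metric used above: WUE is conventionally computed as grain yield divided by total water consumption (evapotranspiration). The function below follows that definition; the numeric values in the usage lines are illustrative placeholders, not data from the study.

```python
# Water use efficiency (WUE) as conventionally defined in such studies:
# WUE = grain yield / total water consumption.
# All numbers below are illustrative placeholders, not measured data.

def water_use_efficiency(grain_yield_kg_ha: float, water_use_mm: float) -> float:
    """Return WUE in kg per hectare per mm of water consumed."""
    return grain_yield_kg_ha / water_use_mm

# Hypothetical comparison of two irrigation regimes (made-up values):
wue_single = water_use_efficiency(8500, 420)  # e.g., 120 mm at jointing only
wue_split = water_use_efficiency(8800, 410)   # e.g., 60 mm at jointing + heading
```

Under these hypothetical numbers, the split regime yields the higher WUE, mirroring the direction of the study's finding.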
The computation for today's intelligent personal assistants, such as Apple Siri, Google Now, and Microsoft Cortana, is performed in the cloud. This cloud-only approach requires significant amounts of data to be sent to the cloud over the wireless network and puts significant computational pressure on the datacenter. However, as the computational resources in mobile devices become more powerful and energy efficient, questions arise as to whether this cloud-only processing is desirable moving forward, and what the implications are of pushing some or all of this computation to the mobile devices on the edge. In this paper, we examine the status quo approach of cloud-only processing and investigate computation partitioning strategies that effectively leverage both the cycles in the cloud and on the mobile device to achieve low latency, low energy consumption, and high datacenter throughput for this class of intelligent applications. Our study uses 8 intelligent applications spanning the computer vision, speech, and natural language domains, all employing state-of-the-art Deep Neural Networks (DNNs) as the core machine learning technique. We find that, given the characteristics of DNN algorithms, a fine-grained, layer-level computation partitioning strategy based on the data and computation variations of each layer within a DNN has significant latency and energy advantages over the status quo approach. Using this insight, we design Neurosurgeon, a lightweight scheduler that automatically partitions DNN computation between mobile devices and datacenters at the granularity of neural network layers. Neurosurgeon does not require per-application profiling. It adapts to various DNN architectures, hardware platforms, wireless networks, and server load levels, intelligently partitioning computation for low latency and low mobile energy consumption.
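The layer-level partitioning idea described above can be sketched as a simple search over split points: run the first k layers on the mobile device, upload that layer's output, and run the rest in the cloud, choosing the k that minimizes total latency. This is only an illustrative model of the strategy, not Neurosurgeon's actual profiling or prediction machinery; all per-layer numbers in the usage line are made up.

```python
# Illustrative layer-level partitioning: choose how many DNN layers to run
# on the mobile device before handing off to the cloud, minimizing
# mobile compute + upload + cloud compute latency.

def best_partition(mobile_ms, cloud_ms, transfer_kb, uplink_kbps):
    """mobile_ms[i] / cloud_ms[i]: latency of layer i on each side (ms).
    transfer_kb[k]: data uploaded if the split is after k layers
    (transfer_kb[0] is the raw input size).
    Returns (split, total_latency_ms), where `split` is the number of
    layers executed on the mobile device."""
    n = len(mobile_ms)
    best_split, best_latency = 0, float("inf")
    for k in range(n + 1):
        mobile = sum(mobile_ms[:k])
        upload = transfer_kb[k] / uplink_kbps * 1000.0  # kb / (kb/s) -> ms
        cloud = sum(cloud_ms[k:])
        total = mobile + upload + cloud
        if total < best_latency:
            best_split, best_latency = k, total
    return best_split, best_latency

# Hypothetical 3-layer network: large raw input, shrinking intermediates.
split, latency = best_partition([5, 5, 5], [1, 1, 1], [100, 10, 10, 1], 1000)
```

With these made-up numbers, the cheapest plan runs all three layers on the mobile device, because the raw input is expensive to upload while the final output is tiny; faster uplinks or heavier layers shift the optimum toward the cloud.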
The exact transmission route of many respiratory infectious diseases remains a subject of debate. The relative contribution of each transmission route is largely undetermined and is affected by environmental conditions, human behavior, the host, and the microorganism. In this study, a detailed mathematical model is developed to investigate the relative contributions of different transmission routes to a multi-route transmitted respiratory infection. We illustrate that every transmission route can dominate the total transmission risk under different scenarios. Influential parameters include the dose-response rate of each route, the governing droplet size that determines virus content in droplets, the exposure distance, and the virus dose transported to the hand of the infector. Our multi-route transmission model provides a comprehensive yet straightforward method for evaluating the transmission efficiency of the different transmission routes of respiratory diseases, and it provides a basis for predicting the impact of individual-level interventions such as increasing close-contact distance and wearing protective masks.
Keywords: multi-route transmission, short-range airborne route, long-range airborne route, building ventilation, respiratory infection, influenza
Highlights:
1. A multi-route transmission model is developed that considers the evaporation and motion of respiratory droplets within the respiratory jet and the consequent exposure of the susceptible person.
2. We illustrate that each transmission route may dominate during influenza transmission, and the influential factors are revealed.
3. The short-range airborne route and infection caused by direct inhalation of medium-sized droplets are highlighted.
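A minimal sketch of the kind of dose-response step such a multi-route model combines: per-route infection risk is often modeled as an exponential dose-response, P = 1 − exp(−r·d), with a route-specific rate r, and independent routes are combined via the escape probability. This is a generic textbook formulation, not the paper's specific model; all parameter values are illustrative.

```python
import math

# Exponential dose-response for a single route: P = 1 - exp(-r * dose),
# where r is the route-specific dose-response rate (illustrative only).

def infection_risk(dose: float, r: float) -> float:
    """Probability of infection via one transmission route."""
    return 1.0 - math.exp(-r * dose)

def combined_risk(route_doses, route_rates):
    """Combine independent routes via escape probability:
    P_total = 1 - prod_i (1 - P_i)."""
    p_escape = 1.0
    for d, r in zip(route_doses, route_rates):
        p_escape *= 1.0 - infection_risk(d, r)
    return 1.0 - p_escape
```

Note that for independent exponential routes the combination simplifies to 1 − exp(−Σ rᵢ·dᵢ), which is why interventions that cut the dose delivered by the dominant route (e.g., masks against short-range airborne exposure) have the largest effect on total risk.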