We consider an integrated production and delivery scheduling problem with non-stationary demand in a two-stage supply chain, where orders arrive dynamically and demand is time-varying. Orders are first processed on identical machines and then delivered to a single next-stage destination by transporters with fixed departure times. The objective is to minimize order waiting time through joint production-delivery scheduling. We formulate the problem as a Markov decision process and develop an approximate dynamic programming (ADP) method. To shrink the action (decision) space, we propose the shortest-processing-time-first and first-completion-first-delivery (SPTm/FCFD) principle to determine order processing sequences and order delivery, and we establish two constraints that eliminate a fraction of inferior actions. Based on the SPTm/FCFD principle, we propose the SPT/FCFD rule and show its optimality in two scenarios. In addition, we deploy five basis functions to approximate the value function. The superior performance of the ADP policy is validated in numerical experiments against four benchmark policies. We also empirically study the impact of demand features on waiting time; the results show that these features significantly affect the performance of all policies. In practice, when total demand exceeds the available production capacity, it is advisable to postpone peak demand.
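The SPT/FCFD rule described above can be illustrated with a small sketch: assign waiting orders to identical machines in shortest-processing-time order, then ship each completed order on the next fixed-departure transporter in completion order. The data shapes and function name here are illustrative assumptions, not the paper's implementation.

```python
import heapq

def spt_fcfd_schedule(orders, num_machines, departures):
    """Sketch of SPT/FCFD: shortest-processing-time-first assignment on
    identical machines, then first-completion-first delivery on the next
    fixed-departure transporter. All names/shapes are assumptions.
    orders: list of (order_id, arrival_time, processing_time)
    departures: sorted list of fixed transporter departure times."""
    machines = [0.0] * num_machines          # next-free time per machine
    heapq.heapify(machines)
    completions = []
    for oid, arrival, ptime in sorted(orders, key=lambda o: o[2]):  # SPT order
        free = heapq.heappop(machines)
        start = max(free, arrival)           # cannot start before arrival
        done = start + ptime
        heapq.heappush(machines, done)
        completions.append((done, oid))
    schedule = []
    for done, oid in sorted(completions):    # FCFD: first completed, first delivered
        dep = next(d for d in departures if d >= done)  # next departure after completion
        schedule.append((oid, done, dep, dep - done))   # waiting time = dep - done
    return schedule
```

For two orders on one machine with departures at times 0, 5, and 10, the shorter job is sequenced first and both orders leave on the departure at time 5, so their waiting times are the gaps between completion and that departure.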
Efficient management and utilization of edge-server memory buffers are crucial for large-scale concurrent video editing in edge computing. To improve editing efficiency and user satisfaction under limited memory-buffer resources, the allocation of memory buffers on concurrent editing servers is formulated as a bin-packing problem and solved with an ant colony algorithm that packs editing tasks into the fewest, most fully utilized buffer batches. Meanwhile, a new distributed online concurrent editing algorithm for video streams is designed to handle conflicts in large-scale video editing in an edge computing environment. It incorporates double-buffered read/write technology to resolve the inefficiency of concurrent editing and disk writes. Simulation results show that the scheme not only performs well in scheduling concurrent edits but also allocates editing resources efficiently and reasonably. Compared with the benchmark traditional single-exclusive editing scheme, the proposed optimized scheme improves both editing efficiency and user satisfaction given the same memory-buffer computing resources. The proposed model applies broadly to real-time video processing scenarios in edge computing.
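The bin-packing formulation above can be made concrete with a minimal sketch. The paper solves it with an ant colony algorithm; here a simple first-fit-decreasing baseline (a deliberately swapped-in heuristic, not the paper's method) only illustrates how memory requests map to buffer batches of fixed capacity.

```python
def first_fit_decreasing(requests, capacity):
    """Buffer allocation as bin packing: pack editing sessions' memory
    requests into the fewest buffer batches of fixed capacity.
    First-fit-decreasing baseline for illustration only; the paper
    uses an ant colony algorithm instead."""
    bins = []        # remaining capacity of each open batch
    assignment = []  # (request_size, batch_index)
    for req in sorted(requests, reverse=True):  # largest requests first
        for i, remaining in enumerate(bins):
            if req <= remaining:                # fits in an existing batch
                bins[i] -= req
                assignment.append((req, i))
                break
        else:                                   # open a new batch
            bins.append(capacity - req)
            assignment.append((req, len(bins) - 1))
    return len(bins), assignment
```

For requests of sizes 4, 8, 1, 4, 2, 1 against capacity-10 batches, the heuristic packs everything into two batches, each fully utilized.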
Concurrent access to large-scale video data streams in edge computing is an important application scenario that currently suffers from costly network-access equipment and high packet loss rates. To address this, a low-cost link-aggregation method for concurrent transmission of video stream data is proposed. Data Plane Development Kit (DPDK) technology supports concurrent receiving and forwarding across multiple Network Interface Cards (NICs). A Q-learning data-stream scheduling model is proposed to handle load scheduling across the multiple queues of multiple NICs. The Central Processing Unit (CPU) transmission processing unit is dynamically selected through data-stream classification and a reward function, achieving dynamic load balancing of data-stream transmission. Experiments demonstrate that this method expands bandwidth 3.6-fold over the single-network-port benchmark scheme and reduces the average CPU load ratio by 18%. Compared with the UDP and DPDK schemes, it lowers average system latency by 21%, reduces the data-transmission packet loss rate by 0.48%, and improves overall system transmission throughput. This transmission optimization scheme can be applied in data centers and edge computing clusters to improve the communication performance of big data processing.
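The Q-learning dispatch idea can be sketched as a tabular learner: the state is the stream class, the action is the CPU transmission core, and the reward penalizes sending traffic to an already-loaded core. The state/action encoding, reward shape, and all hyperparameters below are assumptions for illustration, not the paper's exact model.

```python
import random

def q_learning_dispatch(streams, num_cores, episodes=200,
                        alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Sketch: Q-learning dispatch of classified data streams to CPU
    transmission cores. State = stream class, action = core index,
    reward = negative load of the chosen core (favors the least-loaded
    core). Reward shape and parameters are illustrative assumptions."""
    rng = random.Random(seed)
    classes = sorted({c for c, _ in streams})
    Q = {(s, a): 0.0 for s in classes for a in range(num_cores)}
    for _ in range(episodes):
        load = [0.0] * num_cores                        # per-episode core load
        for cls, size in streams:
            if rng.random() < eps:                      # explore
                a = rng.randrange(num_cores)
            else:                                       # exploit best-known core
                a = max(range(num_cores), key=lambda x: Q[(cls, x)])
            load[a] += size
            reward = -load[a]                           # penalize hot cores
            best_next = max(Q[(cls, x)] for x in range(num_cores))
            Q[(cls, a)] += alpha * (reward + gamma * best_next - Q[(cls, a)])
    # greedy policy after training: one core per stream class
    return {s: max(range(num_cores), key=lambda a: Q[(s, a)]) for s in classes}
```

After training, the returned policy maps each stream class to a core, which a dispatcher could consult per packet batch; an eviction or re-training trigger would be needed in practice as traffic shifts.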