Proceedings of the 16th Workshop on Hot Topics in Operating Systems 2017
DOI: 10.1145/3102980.3102998

Real-Time Machine Learning

Abstract: Machine learning applications are increasingly deployed not only to serve predictions using static models, but also as tightly-integrated components of feedback loops involving dynamic, real-time decision making. These applications pose a new set of requirements, none of which are difficult to achieve in isolation, but the combination of which creates a challenge for existing distributed execution frameworks: computation with millisecond latency at high throughput, adaptive construction of arbitrary task graph…

Cited by 41 publications (24 citation statements)
References 13 publications (8 reference statements)

“…Efficiently processing stream data is a key part of the framework, as the tracking process needs to be detached from the visual model via a robust communication interface without additional delays. Redis databases are also starting to play an important role in the distributed learning and deployment of model-free Reinforcement Learning (RL) [25] in several studies [26,27]. Due to the computational load and iterations required for deep reinforcement learning based control approaches, Redis lends itself very well to sharing distributed data through asynchronous controller updates [28].…”
Section: Distributed Frame and Data Streaming
confidence: 99%
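The asynchronous-update pattern described in the quote above can be sketched in a few lines against the standard redis-py client; the host/port, key name, and pickle serialization below are illustrative assumptions, not details drawn from the cited works [26-28].

```python
# Minimal sketch: a learner pushes the latest controller/policy weights into
# Redis and actors pull them when convenient (asynchronous updates).
import pickle
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis server

def push_weights(weights):
    # Learner side: overwrite the shared copy of the parameters.
    r.set("policy:weights", pickle.dumps(weights))

def pull_weights():
    # Actor side: read whatever version is currently stored, without
    # blocking on the learner.
    blob = r.get("policy:weights")
    return pickle.loads(blob) if blob is not None else None

push_weights({"w": [0.0] * 8})
print(pull_weights()["w"])  # [0.0, 0.0, ..., 0.0]
```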
“…For example, CIEL represents a program as an unstructured "dynamic task graph" in which tasks can tail-recursively spawn other tasks, and imperative control-flow constructs are transformed into continuation-passing style [29]. Nishihara et al. recently described a system for "real-time machine learning" that builds on these ideas and adds a decentralized and hierarchical scheduler to improve the latency of task dispatch [30]. Programming models based on dynamic task graphs are a direct fit for algorithms that make recursive traversals over dynamic data structures, such as parse trees [15].…”
Section: Related Work
confidence: 99%
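As a rough illustration of the dynamic-task-graph style mentioned above, the following sketch uses Ray's public API (ray.remote / ray.get) to let a task recursively spawn further tasks at runtime; the Fibonacci example itself is hypothetical and not taken from the cited papers.

```python
# Sketch: tasks spawn further tasks, so the task graph unfolds dynamically
# instead of being declared up front.
import ray

ray.init()

@ray.remote
def fib(n):
    if n < 2:
        return n
    # Each task may recursively submit more tasks to the scheduler.
    left = fib.remote(n - 1)
    right = fib.remote(n - 2)
    return ray.get(left) + ray.get(right)

print(ray.get(fib.remote(10)))  # 55
```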
“…Ray [29] optimizes this centralized design by enabling a worker to spawn and execute some tasks locally. If a task accesses data on other workers, however, it must still perform synchronous operations with a controller.…”
Section: Cloud Framework Control Planes
confidence: 99%
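A minimal sketch of the data-access pattern this quote describes, again using Ray's public API: a consumer task receives the object reference produced by another task, and Ray resolves it (fetching the data from a remote worker's object store if necessary) before the task runs. The produce/consume functions are hypothetical.

```python
# Sketch: passing an object reference between tasks; resolution of the
# reference is handled by Ray's object store and control plane.
import ray

ray.init()

@ray.remote
def produce():
    return list(range(1000))  # result is placed in the object store

@ray.remote
def consume(data):
    # The reference passed in has been resolved to the actual list
    # (fetched from a remote worker if needed) before this body executes.
    return sum(data)

ref = produce.remote()
print(ray.get(consume.remote(ref)))  # 499500
```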
“…This tradeoff introduces a fundamental limit on how much these systems can parallelize a job. Increasing parallelization has many benefits, including improved fault tolerance, load balancing, and faster performance [17,27,29,30,43]. In practice, however, increasing parallelism past even a few hundred cores quickly hits the control plane's scaling limit, making jobs run slower [24,32].…”
Section: Introduction
confidence: 99%