2022
DOI: 10.1287/inte.2021.1109
Lenovo Schedules Laptop Manufacturing Using Deep Reinforcement Learning

Abstract: Lenovo Research teamed with members of the factory operations group at Lenovo’s largest laptop manufacturing facility, LCFC, to replace a manual production scheduling system with a decision-making platform built on a deep reinforcement learning architecture. The system schedules production orders across all 43 of LCFC’s assembly manufacturing lines, balancing the relative priorities of production volume, changeover cost, and order fulfillment. The multiobjective optimization scheduling problem is solved using a deep …
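The abstract names three competing objectives (production volume, changeover cost, order fulfillment) that are balanced under relative priorities. As a minimal, hypothetical sketch only, the following Python snippet shows one way such objectives could be scalarized into a single reward and used by a tabular Q-learning loop on a toy line-assignment problem; the weights, state encoding, line count, and toy orders are illustrative assumptions, not the paper's actual deep reinforcement learning architecture.

```python
# Hypothetical sketch: scalarized multiobjective reward for order-to-line
# scheduling, driven by epsilon-greedy tabular Q-learning on a toy problem.
# Weights, state encoding, and data are assumptions, not the paper's design.
import random
from collections import defaultdict

W_VOLUME, W_CHANGEOVER, W_FULFILLMENT = 0.5, 0.3, 0.2  # assumed relative priorities

def reward(units_produced, changeover_cost, on_time_ratio):
    """Combine the three objectives named in the abstract into one scalar."""
    return (W_VOLUME * units_produced
            - W_CHANGEOVER * changeover_cost
            + W_FULFILLMENT * on_time_ratio)

N_LINES = 4                                   # toy stand-in for LCFC's 43 lines
orders = [("modelA", 10), ("modelB", 8), ("modelA", 6), ("modelC", 12)]

Q = defaultdict(float)                        # Q[(state, line)] -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(last_model_on_line, order, line):
    """Assign an order to a line; charge a changeover if the model switches."""
    model, qty = order
    changeover = 1.0 if last_model_on_line[line] not in (None, model) else 0.0
    on_time = 1.0                             # toy assumption: always fulfilled on time
    r = reward(qty, 5.0 * changeover, on_time)
    last_model_on_line[line] = model
    return r

for episode in range(200):
    last_model = {l: None for l in range(N_LINES)}
    for i, order in enumerate(orders):
        state = (i, tuple(last_model.values()))
        if random.random() < eps:
            line = random.randrange(N_LINES)  # explore
        else:
            line = max(range(N_LINES), key=lambda l: Q[(state, l)])  # exploit
        r = step(last_model, order, line)
        next_state = (i + 1, tuple(last_model.values()))
        best_next = max(Q[(next_state, l)] for l in range(N_LINES))
        Q[(state, line)] += alpha * (r + gamma * best_next - Q[(state, line)])
```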

Cited by 5 publications (1 citation statement)
References 6 publications
“…These algorithms, ranked from most to least frequently employed, include Q-learning, temporal difference TD(λ) algorithm, SARSA, ARL, informed Q-learning, dual Q-learning, approximate Q-learning, gradient descent TD(λ) algorithm, revenue sharing, Q-III learning, relational RL, relaxed SMART, and TD(λ)-learning. In the field of DRL, many value-based approaches have been employed, such as DQN (Deep Q-Learning Networks), loosely-coupled DRL, multiclass DQN, and the Q-network algorithm [48][49][50][51][52][53][54][55][56][57][58].…”
Section: Literature Review
confidence: 99%
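The quoted review cites value-based DRL methods such as DQN. For reference only, the sketch below shows a minimal DQN-style value update in PyTorch on synthetic transitions; it illustrates the general technique and is not code from the Lenovo system or from any of the cited works. Network sizes, batch contents, and hyperparameters are illustrative assumptions.

```python
# Minimal, hypothetical DQN-style update (PyTorch) on synthetic data,
# illustrating the "value-based DRL" family mentioned in the review quote.
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4  # assumed toy dimensions
q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net.load_state_dict(q_net.state_dict())   # sync target network
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# One synthetic batch of transitions (s, a, r, s', done) for illustration.
s = torch.randn(16, state_dim)
a = torch.randint(0, n_actions, (16,))
r = torch.randn(16)
s_next = torch.randn(16, state_dim)
done = torch.zeros(16)

# Standard DQN target: r + gamma * max_a' Q_target(s', a') for non-terminal s'.
with torch.no_grad():
    target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values

# Q-values of the actions actually taken, then a mean-squared TD error.
pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(pred, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```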