2020
DOI: 10.1609/aaai.v34i06.6611

Modular Robot Design Synthesis with Deep Reinforcement Learning

Abstract: Modular robots hold the promise of versatility in that their components can be re-arranged to adapt the robot design to a task at deployment time. Even for the simplest designs, determining the optimal design is exponentially complex due to the number of permutations of ways the modules can be connected. Further, when selecting the design for a given task, there is an additional computational burden in evaluating the capability of each robot, e.g., whether it can reach certain points in the workspace. This wor…

Cited by 27 publications (23 citation statements)
References 12 publications
“…One such field is robotics [2,27]. For many of the works which adapt morphology in robotics, designs are first established in simulation and then fabricated and tested in a real-world environment [8,21,40,47]. Learning physiology in conjunction with policy has also been used in the fields of evolutionary algorithms [5], deep learning [16], and deep reinforcement learning [24,35,48], which adapt simulated agents to succeed in entirely simulated tasks.…”
Section: Related Work
confidence: 99%
“…The works in [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] explored different techniques for finding a robot design such that the task (reaching a set of points in 3D) is feasible. While in [3,4,5,6,7,9,10,8] the authors analyzed kinematic requirements, such as dexterity [4,6], manipulability [10], and maximization of reachable space [5], in [11,12,23,13,15,16,17,18,20,19,21,22,14,20] the authors also incorporated torque specifications in their formulation.…”
Section: Related Work
confidence: 99%
“…In [18,16,17,19], the authors included static torque requirements in their approach. The total torque applied to the robot's actuator is composed of static and dynamic components.…”
Section: Related Work
confidence: 99%
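The decomposition mentioned above (total actuator torque = static gravity-load component + dynamic inertial component) can be sketched for a single revolute joint. All names and the simplified single-link model are illustrative assumptions, not taken from the cited papers:

```python
import math

def joint_torque(mass, com_dist, angle, inertia, ang_accel, g=9.81):
    """Total torque at one revolute joint of a single rigid link.

    Static component: gravity acting on the link's center of mass
    (com_dist metres from the joint, at `angle` radians from horizontal).
    Dynamic component: inertial torque from angular acceleration.
    """
    tau_static = mass * g * com_dist * math.cos(angle)  # gravity load
    tau_dynamic = inertia * ang_accel                   # I * alpha
    return tau_static + tau_dynamic
```

Approaches that only check static torque requirements evaluate `tau_static` alone; the cited works that include dynamics also account for the `inertia * ang_accel` term.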
“…There also has been a large body of research on modular robots [24]-[36], where a set of predefined reusable modules is designed to compose versatile robotic systems to solve a wide variety of tasks on the fly. [29], [33] formulate the design search as a Markov Decision Process. [29] learn to perform composing actions such as "link" and "unlink" through deep reinforcement learning.…”
Section: Related Work
confidence: 99%
“…[29] learn to perform composing actions such as "link" and "unlink" through deep reinforcement learning. [33] train an action value network to predict the expected return of different assemblies to plan which module to add next. [35] develop a platform that enables a human-in-the-loop iterative design process for customized assembly.…”
Section: Related Work
confidence: 99%
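The planning scheme described in the statement above — an action-value predictor scoring candidate assemblies to decide which module to add next — can be sketched as a greedy one-step lookahead. The module names and the hand-written scoring heuristic below are invented stand-ins for a trained action-value network; this is not the cited method's actual model:

```python
# Hypothetical module catalogue (illustrative, not from the cited papers).
MODULES = ["link_short", "link_long", "joint_revolute", "gripper"]

def predict_return(assembly):
    """Toy stand-in for a trained action-value network: scores an assembly
    higher if it ends in a single gripper and includes actuated joints."""
    score = 0.0
    if assembly.count("gripper") == 1 and assembly[-1] == "gripper":
        score += 10.0
    score -= 5.0 * max(0, assembly.count("gripper") - 1)       # redundant grippers
    score += 2.0 * min(assembly.count("joint_revolute"), 2)    # diminishing returns
    score -= 0.5 * len(assembly)                               # size/weight penalty
    return score

def plan_assembly(max_modules=4):
    """Greedy one-step lookahead: at each step, append the module whose
    resulting assembly has the highest predicted return."""
    assembly = []
    for _ in range(max_modules):
        best = max(MODULES, key=lambda m: predict_return(assembly + [m]))
        assembly.append(best)
    return assembly
```

In the cited approach the scoring function is learned from experience rather than hand-written, but the planning loop — evaluate each candidate extension, commit to the best — has this greedy shape.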