43 Publications

298 Citation Statements Received

781 Citation Statements Given


Publications


This paper documents near-autonomous negotiation of synthetic and natural climbing terrain by a rugged legged robot, achieved through sequential composition of appropriate perceptually triggered locomotion primitives. The first, simple composition achieves autonomous uphill climbs in unstructured outdoor terrain while avoiding surrounding obstacles such as trees and bushes. The second, slightly more complex composition achieves autonomous stairwell climbing in a variety of different buildings. In both cases, the intrinsic motor competence of the legged platform requires only small amounts of sensory information to yield near-complete autonomy. Both of these behaviors were developed using X-RHex, a new revision of RHex that is a laboratory on legs, allowing a style of rapid development of sensorimotor tasks with a convenience near to that of conducting experiments on a lab bench. Applications of this work include urban search and rescue as well as reconnaissance operations in which robust yet simple-to-implement autonomy allows a robot access to difficult environments with little burden to a human operator.
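The sequential composition of perceptually triggered primitives described above can be sketched as a small state machine. The primitive names, the pitch-based percept, and the trigger thresholds below are hypothetical stand-ins, not the paper's actual controllers:

```python
# Sketch: sequential composition of perceptually triggered behaviors
# as a finite-state machine. Primitives run one control step at a time;
# a transition fires when its perceptual predicate holds.

def compose(primitives, triggers, percept_stream, start):
    """Run `start`, switching primitives whenever a trigger fires.

    primitives: dict name -> callable(percept) executing one control step
    triggers:   dict name -> list of (predicate, next_name) pairs
    """
    state = start
    trace = [state]
    for percept in percept_stream:
        primitives[state](percept)          # one step of the active primitive
        for predicate, nxt in triggers.get(state, []):
            if predicate(percept):          # perceptual trigger fires
                state = nxt
                trace.append(state)
                break
    return trace

# Hypothetical example: walk until a slope is sensed, climb until
# level ground returns.
primitives = {"walk": lambda p: None, "climb": lambda p: None}
triggers = {
    "walk":  [(lambda p: p["pitch"] > 0.3, "climb")],
    "climb": [(lambda p: p["pitch"] < 0.1, "walk")],
}
percepts = [{"pitch": 0.0}, {"pitch": 0.5}, {"pitch": 0.4}, {"pitch": 0.05}]
print(compose(primitives, triggers, percepts, "walk"))  # ['walk', 'climb', 'walk']
```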

This paper concerns optimal mode-scheduling in autonomous switched-mode hybrid dynamical systems, where the objective is to minimize a cost-performance functional defined on the state trajectory as a function of the schedule of modes. The controlled variable, namely the modes' schedule, consists of the sequence of modes and the switchover times between them. We propose a gradient-descent algorithm that adjusts a given mode-schedule by changing multiple modes over time-sets of positive Lebesgue measures, thereby avoiding the inefficiencies inherent in existing techniques that change the modes one at a time. The algorithm is based on steepest descent with Armijo step sizes along Gâteaux differentials of the performance functional with respect to schedule-variations, which yields effective descent at each iteration. Since the space of mode-schedules is infinite dimensional and incomplete, the algorithm's convergence is proved in the sense of Polak's framework of optimality functions and minimizing sequences. Simulation results are presented, and possible extensions to problems with dwell-time lower-bound constraints are discussed.
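The Armijo step-size rule at the core of the descent algorithm can be illustrated on a scalar stand-in objective. The paper's functional is defined on mode schedules, which this sketch does not model; the quadratic below is purely illustrative:

```python
# Minimal sketch of steepest descent with Armijo (backtracking) step
# sizes. Parameters alpha, beta, sigma are conventional defaults.

def armijo_descent(f, grad, x, alpha=1.0, beta=0.5, sigma=1e-4, iters=50):
    for _ in range(iters):
        g = grad(x)
        step = alpha
        # Backtrack until the Armijo sufficient-decrease condition holds:
        # f(x - step*g) <= f(x) - sigma * step * g^2
        while f(x - step * g) > f(x) - sigma * step * (g * g):
            step *= beta
        x = x - step * g
    return x

# 1-D example: f(x) = (x - 3)^2 has its minimum at x = 3.
f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
print(armijo_descent(f, grad, x=0.0))  # converges to 3.0
```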

We present a framework for asynchronously solving convex optimization problems over networks of agents which are augmented by the presence of a centralized cloud computer. This framework uses a Tikhonov-regularized primal-dual approach in which the agents update the system's primal variables and the cloud updates its dual variables. To minimize coordination requirements placed upon the system, the times of communications and computations among the agents are allowed to be arbitrary, provided they satisfy mild conditions. Communications from the agents to the cloud are likewise carried out without any coordination in their timing. However, we require that the cloud keep the dual variable's value synchronized across the agents, and a counterexample is provided that demonstrates that this level of synchrony is indeed necessary for convergence. Convergence rate estimates are provided in both the primal and dual spaces, and simulation results are presented that demonstrate the operation and convergence of the proposed algorithm.
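A Tikhonov-regularized primal-dual update of the kind described above can be sketched on a toy resource-allocation problem. For readability the loop below is synchronous, unlike the paper's asynchronous scheme, and the objective, constraint, and step sizes are all illustrative assumptions:

```python
import numpy as np

# Toy problem: minimize sum_i (x_i - c_i)^2 subject to sum_i x_i <= b.
# Agents update the primal variables x; the "cloud" updates the dual
# variable mu. The eps_p and eps_d terms are the Tikhonov
# regularization that conditions the saddle point.

def primal_dual(c, b, eps_p=1e-3, eps_d=1e-3, step=0.1, iters=2000):
    x = np.zeros_like(c)
    mu = 0.0  # dual variable for the constraint sum(x) <= b
    for _ in range(iters):
        # Agents descend the regularized Lagrangian in their own variables.
        grad_x = 2.0 * (x - c) + mu + eps_p * x
        x = x - step * grad_x
        # Cloud ascends in the dual variable, with Tikhonov damping,
        # projected onto mu >= 0.
        mu = max(0.0, mu + step * (np.sum(x) - b - eps_d * mu))
    return x, mu

c = np.array([2.0, 2.0])
x, mu = primal_dual(c, b=2.0)
# Constraint is active at the optimum: x is near [1, 1], mu near 2.
```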

Information communicated within cyber-physical systems (CPSs) is often used in determining the physical states of such systems, and malicious adversaries may intercept these communications in order to infer future states of a CPS or its components. Accordingly, there arises a need to protect the state values of a system. Recently, the notion of differential privacy has been used to protect state trajectories in dynamical systems, and it is this notion of privacy that we use here to protect the state trajectories of CPSs. We incorporate a cloud computer to coordinate the agents comprising the CPSs of interest, and the cloud offers the ability to remotely coordinate many agents, rapidly perform computations, and broadcast the results, making it a natural fit for systems with many interacting agents or components. Striving for broad applicability, we solve infinite-horizon linear-quadratic-regulator (LQR) problems, and each agent protects its own state trajectory by adding noise to its states before they are sent to the cloud. The cloud then uses these state values to generate optimal inputs for the agents. As a result, private data is fed into feedback loops at each iteration, and each noisy term affects every future state of every agent. In this paper, we show that the differentially private LQR problem can be related to the well-studied linear-quadratic-Gaussian (LQG) problem, and we provide bounds on how agents' privacy requirements affect the cloud's ability to generate optimal feedback control values for the agents. These results are illustrated in numerical simulations.
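The input-perturbation step, in which each agent adds noise to its state before transmission to the cloud, can be sketched with the standard Gaussian-mechanism calibration. The sigma formula below is the textbook sufficient condition for (eps, delta)-differential privacy at sensitivity k, not necessarily the paper's exact noise calibration, and the LQG connection is not reproduced here:

```python
import numpy as np
from math import sqrt, log

def gaussian_sigma(k, eps, delta):
    # Standard sufficient condition for the Gaussian mechanism
    # (valid for eps < 1): sigma >= k * sqrt(2 * ln(1.25/delta)) / eps
    return k * sqrt(2.0 * log(1.25 / delta)) / eps

def privatize(state, k, eps, delta, rng):
    """Add i.i.d. Gaussian noise to a state vector before it is sent."""
    sigma = gaussian_sigma(k, eps, delta)
    return state + rng.normal(0.0, sigma, size=state.shape)

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
noisy = privatize(x, k=1.0, eps=1.0, delta=1e-5, rng=rng)
```

Note the trade-off the paper quantifies: smaller eps (stronger privacy) forces larger sigma, which degrades the cloud's feedback-control quality.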

We present an optimization framework that solves constrained multi-agent optimization problems while keeping each agent's state differentially private. The agents in the network seek to optimize a local objective function in the presence of global constraints. Agents communicate only through a trusted cloud computer and the cloud also performs computations based on global information. The cloud computer modifies the results of such computations before they are sent to the agents in order to guarantee that the agents' states are kept private. We show that under mild conditions each agent's optimization problem converges in mean-square to its unique solution while each agent's state is kept differentially private. A numerical simulation is provided to demonstrate the viability of this approach.

Multi-agent coordination algorithms with randomized interactions have seen use in a variety of settings in the multi-agent systems literature. In some cases, these algorithms are random by design, as in a gossip-like algorithm, and in other cases they are random due to external factors, as in the case of intermittent communications. Targeting both of these scenarios, we present novel convergence rate estimates for consensus problems solved over random graphs. Established results provide asymptotic convergence in this setting, and we provide estimates of the rate of convergence in two forms. First, we estimate decreases in a quadratic Lyapunov function over time to bound how quickly the agents' disagreement decays, and second we bound the probability of being at least a given distance from the point of agreement. Both estimates rely on (approximately) computing eigenvalues of the expected matrix exponential of a random graph's Laplacian, which we do explicitly in terms of the network's size and edge probability, without assuming that any relationship between them holds. Simulation results are provided to support the theoretical developments made.

I. INTRODUCTION

Distributed agreement, often broadly referred to as the consensus problem, is a canonical problem in distributed coordination and has received attention in diverse fields such as physics [24], signal processing [20], robotics [19], power systems [16], and communications [14]. The goal in such problems is to drive all agents in a network to a common final state. A key feature of consensus problems is their distributed nature; consensus is typically carried out across a network of agents in which each agent communicates with some other agents, though generally not all of them. The wide range of fields which study distributed agreement has given rise to corresponding diversity among consensus problem formulations, and a number of variants of consensus have been studied in the literature. In this paper, we derive convergence rates for consensus over random graphs, studied previously in [10], where asymptotic convergence was shown. In some cases, the motivation for representing a communication network using random graphs comes from agents using an interaction protocol that is randomized by design, such as in a gossip-like algorithm [3]. In other cases, unreliable communications due to poor channel quality, interference, and other factors can be effectively represented by a random communication graph [15], and the work here applies to each of these scenarios. This problem formulation has each agent communicating with a random collection of other agents determined by a random graph. Each agent moves toward the average of its neighbors' states, then a new graph is randomly generated and the process repeats.
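The update just described can be sketched as a discrete-time Laplacian flow over freshly drawn Erdős-Rényi graphs. This is a simulation sketch only, not the paper's analytical rate estimates, and the edge probability, step size, and iteration count below are illustrative choices:

```python
import numpy as np

def random_laplacian(n, p, rng):
    """Laplacian of one draw from G(n, p): symmetric, no self-loops."""
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

def consensus(x0, p=0.5, h=0.15, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        L = random_laplacian(len(x), p, rng)
        x = x - h * L @ x  # step toward the average of current neighbors
    return x

x = consensus([0.0, 2.0, 4.0, 10.0])
# Since 1^T L = 0, the network average (here 4.0) is preserved at every
# step, so disagreement decays toward the all-4.0 vector.
```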

New architectures and algorithms are needed to reflect the mixture of local and global information that is available as multi-agent systems connect over the cloud. We present a novel architecture for multi-agent coordination where the cloud is assumed to be able to gather information from all agents, perform centralized computations, and disseminate the results in an intermittent manner. This architecture is used to solve a multi-agent optimization problem in which each agent has a local objective function unknown to the other agents and in which the agents are collectively subject to global inequality constraints. Leveraging the cloud, a dual problem is formulated and solved by finding a saddle point of the associated Lagrangian.
