We are interested in a new class of optimal control problems for Discrete Event Systems (DES). We adopt the formalism of supervisory control theory [7] and model the system as a finite state machine (FSM). Our control problem is characterized by the presence of uncontrollable as well as unobservable events, the notion of occurrence and control costs for events, and a worst-case objective function. We first derive an observer for the partially unobservable FSM, which allows us to construct an approximation of the unobservable trajectory costs. We define the performance measure on this observer rather than on the original FSM itself. Further, we use the algorithm of [8] to synthesize an optimal submachine of the observer. This submachine leads to the desired supervisor for the system.

Introduction and Motivation

We are interested in a new class of optimal control problems for Discrete Event Systems (DES) [7]. The system to be controlled is modeled as a finite state machine (FSM). Our control problem follows the theory in [8] and is characterized by the presence of uncontrollable events, the notion of occurrence and control costs for events, and a worst-case objective function. However, compared to the work in [8] and to [3,6], we wish to take partial observability into account. Several concepts and properties of the supervisory control problem under partial observation were studied in [1,4], among others; however, those works only propose a qualitative theory for the control of DESs.
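The observer mentioned above can be built by the standard subset construction: each observer state is the set of FSM states the system may be in after an observed event sequence, closed under unobservable transitions. The following is a minimal sketch of that construction; the transition table, the event partition, and the function names are illustrative assumptions, not the authors' exact formulation (which also attaches occurrence and control costs to the observer).

```python
# Subset construction of an observer for a partially observed FSM.
# transitions: state -> {event: next_state}; "u" is unobservable.
transitions = {
    0: {"a": 1, "u": 2},
    1: {"b": 3},
    2: {"b": 3},
    3: {},
}
observable = {"a", "b"}

def uo_reach(states):
    """Closure of a state set under unobservable transitions."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for e, t in transitions[s].items():
            if e not in observable and t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def build_observer(initial):
    """Return the observer's initial state set and transition map."""
    start = uo_reach({initial})
    obs, frontier = {start: {}}, [start]
    while frontier:
        cur = frontier.pop()
        for e in observable:
            nxt = {transitions[s][e] for s in cur if e in transitions[s]}
            if not nxt:
                continue
            tgt = uo_reach(nxt)
            obs[cur][e] = tgt
            if tgt not in obs:
                obs[tgt] = {}
                frontier.append(tgt)
    return start, obs

start, obs = build_observer(0)
# start = {0, 2}: the unobservable event "u" may already have fired.
```

Because the observer is a deterministic FSM over the observable events only, a performance measure defined on it approximates the cost of the unobserved trajectories, which is what allows the optimization of [8] to be applied to it directly.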
This paper deals with a new type of optimal control for Discrete Event Systems. Our control problem extends the theory of [18], which is characterized by the presence of uncontrollable events, the notion of occurrence and control costs for events, and a worst-case objective function. A significant difference with the work in [18] is that our aim is to make the system evolve through a set of multiple goals, one by one, with no order necessarily pre-specified, whereas the previous theory only deals with a single goal. Our solution approach is divided into two steps. In the first step, we use the optimal control theory in [18] to synthesize individual controllers for each goal. In the second step, we develop the solution of another optimal control problem, namely, how to modify if necessary and piece together, or schedule, all of the controllers built in the first step in order to visit each of the goals with least total cost. We solve this problem by defining the notion of a scheduler and then by mapping the problem of finding an optimal scheduler to an instance of the well-known Traveling Salesman Problem (TSP) [7]. We finally suggest various strategies to reduce the complexity of the TSP resolution while still preserving global optimality.
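The second step above reduces scheduling to a TSP: once the per-goal controllers are synthesized, an inter-goal cost matrix can be formed, and the cheapest visiting order is a tour through the goals. A minimal sketch under assumed data follows; the cost matrix is made up, and this variant is an open path (no return to the start), one plausible reading of "visit each of the goals with least total cost".

```python
# Brute-force resolution of the scheduling problem as an (open-path)
# TSP instance. cost[i][j] = assumed worst-case cost of driving the
# system from goal i to goal j under goal j's controller.
from itertools import permutations

cost = [
    [0, 4, 9],
    [4, 0, 3],
    [7, 3, 0],
]
START = 0  # the initial state, treated as a fixed starting "goal"

def best_schedule(cost, start=START):
    """Return (total_cost, visiting_order) minimizing the path cost."""
    goals = [g for g in range(len(cost)) if g != start]
    best = None
    for order in permutations(goals):
        total, cur = 0, start
        for g in order:
            total += cost[cur][g]
            cur = g
        if best is None or total < best[0]:
            best = (total, (start,) + order)
    return best

# best_schedule(cost) -> (7, (0, 1, 2)): visit goal 1, then goal 2.
```

Exhaustive enumeration is exponential in the number of goals, which is why the paper's closing point about reducing the complexity of the TSP resolution while preserving global optimality matters in practice.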