1986
DOI: 10.1287/opre.34.5.769
Markov and Markov-Regenerative PERT Networks

Abstract: This paper investigates PERT networks with independent and exponentially distributed activity durations. We model such networks as finite-state, absorbing, continuous-time Markov chains with upper triangular generator matrices. The state space is related to the network structure. We present simple and computationally stable algorithms to evaluate the usual performance criteria: the distribution and moments of project completion time, the probability that a given path is critical, and other related performance …
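The construction in the abstract can be illustrated on the smallest interesting case. The following is a minimal sketch (not the paper's code, and with illustrative rates): two activities run in parallel with independent exponential durations, the state is the set of unfinished activities, and transitions only shrink that set, so the generator is upper triangular and expected completion time follows by back-substitution.

```python
# Sketch of the Kulkarni-Adlakha state-space idea for a two-activity
# parallel network; the rates are hypothetical, chosen for illustration.
rates = {"A": 1.0, "B": 2.0}

def expected_makespan(active):
    """Expected remaining project time when the activities in `active`
    are still running. The chain only moves to strictly smaller sets of
    active activities, so this recursion is exactly back-substitution on
    an upper-triangular generator matrix."""
    if not active:
        return 0.0                          # absorbing state: project done
    total = sum(rates[a] for a in active)   # exit rate of this state
    holding = 1.0 / total                   # mean sojourn time
    return holding + sum(
        (rates[a] / total) * expected_makespan(active - {a}) for a in active
    )

t = expected_makespan(frozenset(rates))
```

For two parallel exponentials this reproduces the closed form 1/a + 1/b − 1/(a+b) for the mean of the maximum, which is a convenient sanity check on the state-space construction.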

Cited by 159 publications (109 citation statements)
References 17 publications
“…Additional work related to the problem of determining the expected project makespan with stochastic task durations includes papers by Van Slyke (1963), who suggests Monte Carlo simulation as a viable method for constructing the project makespan distribution; Martin (1965), who defines a network reduction approach for determining the makespan probability density function (PDF); Dodin (1984), who develops a heuristic approach to finding the k most critical paths through a project network; Dodin (1985a), who develops an approximation for the makespan CDF; Kleindorfer (1971), Robillard and Trahan (1976), and Dodin (1985b), who obtain bounds for the makespan PDF; and Kulkarni and Adlakha (1986), who develop the makespan distribution for a project network with exponentially distributed task times using a Markov PERT Network (MPN). Many of these, including Dodin (1985a), developed approximations using discretization of continuous density functions, simplifying the convolution of task densities.…”
Section: Fig 1: IT Project Performance
confidence: 99%
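Van Slyke's Monte Carlo approach mentioned in this statement is easy to sketch: sample every activity duration, take the longest path, and repeat to build an empirical makespan distribution. The toy network, its paths, and the rates below are hypothetical illustrations, not taken from any of the cited papers.

```python
import random

# Monte Carlo estimate of the makespan distribution (Van Slyke's idea).
# Activity-on-arc toy network: two paths, A->C and B->C, sharing C.
random.seed(42)
paths = [("A", "C"), ("B", "C")]
rates = {"A": 1.0, "B": 2.0, "C": 4.0}  # illustrative exponential rates

def sample_makespan():
    # Draw one duration per activity, then take the longest path.
    d = {a: random.expovariate(rates[a]) for a in rates}
    return max(sum(d[a] for a in p) for p in paths)

samples = sorted(sample_makespan() for _ in range(100_000))
mean = sum(samples) / len(samples)
# The empirical CDF at any t is the fraction of samples at or below t:
cdf_at_1 = sum(s <= 1.0 for s in samples) / len(samples)
```

Because both paths share activity C, the makespan here is C + max(A, B), so the estimated mean should be close to 1/4 + (1/a + 1/b − 1/(a+b)) ≈ 1.417, which gives a quick correctness check on the sampler.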
“…For Markovian PERT networks, Kulkarni and Adlakha [23] describe an exact method for deriving the distribution and moments of the earliest project completion time using continuous-time Markov chains (CTMCs), where it is assumed that each activity is started as soon as its predecessors are completed (an early-start schedule).…”
Section: Policy Class
confidence: 99%
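The exact method this statement refers to also yields higher moments by first-step analysis. A hedged sketch under the same two-activity toy setup as above (illustrative rates, not the authors' code): in a CTMC the holding time is independent of which activity finishes first, which justifies the cross term in the second-moment recursion.

```python
rates = {"A": 1.0, "B": 2.0}  # illustrative rates, two parallel activities

def moments(active):
    """Return (E[T], E[T^2]) of the remaining completion time from the
    state where the activities in `active` run under an early-start
    schedule. With H ~ Exp(total) independent of the jump destination,
    T = H + T' gives E[T^2] = E[H^2] + 2*E[H]*E[T'] + E[T'^2]."""
    if not active:
        return 0.0, 0.0
    total = sum(rates[a] for a in active)
    m1_next = m2_next = 0.0
    for a in active:                       # activity a finishes first
        p = rates[a] / total
        m1, m2 = moments(active - {a})
        m1_next += p * m1
        m2_next += p * m2
    m1 = 1.0 / total + m1_next                              # E[T]
    m2 = 2.0 / total**2 + (2.0 / total) * m1_next + m2_next  # E[T^2]
    return m1, m2

m1, m2 = moments(frozenset(rates))
```

For these rates the recursion gives E[T] = 7/6 and E[T²] = 41/18, matching the closed-form moments of the maximum of two independent exponentials.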
“…, n − 1); we consider more general distributions in …. The problem of finding an optimal scheduling policy corresponds to optimizing a discounted criterion in a continuous-time Markov decision chain (CTMDC) on the state space Q, with Q containing all the states of the system that can be visited by the transitions (which are called feasible states); the decision set is described below. We apply a backward stochastic dynamic-programming (SDP) recursion to determine optimal decisions based on the CTMC described in Kulkarni and Adlakha [23]. The key instrument of the SDP recursion is the value function F (·), which determines the expected NPV of each feasible state at the time of entry of the state, conditional on the hypothesis that optimal decisions are made in all subsequent states and assuming that all 'past' modules (with all activities past) were successful.…”
Section: The Exponential Case
confidence: 99%
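The core of the value function F(·) described here can be sketched in its simplest form, with no remaining decisions: a terminal payoff collected at project completion, discounted continuously at rate r, evaluated backward over the sets of still-running activities. The rates, discount rate, and payoff below are hypothetical, and this is only the discounting recursion, not the paper's full CTMDC with decision sets.

```python
rates = {"A": 1.0, "B": 2.0}   # hypothetical activity rates
r = 0.1                        # continuous discount rate (assumed)
payoff = 100.0                 # hypothetical reward at project completion

def value(active):
    """F(active): expected NPV at entry of the state where the activities
    in `active` are running, when the only cash flow is the terminal
    payoff. The Exp(total) holding time contributes the discount factor
    total / (total + r), folded into each transition term below."""
    if not active:
        return payoff                      # project done: collect payoff
    total = sum(rates[a] for a in active)
    # One-step discounted recursion: activity a finishes first with
    # probability rates[a]/total, and discounting turns that weight
    # into rates[a] / (total + r).
    return sum(rates[a] / (total + r) * value(active - {a}) for a in active)

npv = value(frozenset(rates))
```

This matches the Laplace-transform identity E[e^{−rT}] = a/(a+r) + b/(b+r) − (a+b)/(a+b+r) for the maximum of two independent exponentials, which is a useful check before layering decisions on top of the recursion.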
“…For Markovian PERT networks, later studies use Kulkarni and Adlakha (1986) as a starting point to develop scheduling procedures that maximize an expected-NPV (eNPV) objective. All aforementioned studies, however, assume unlimited resources and exponentially distributed activity durations.…”
Section: Markov Decision Chain
confidence: 99%