This paper suggests a computationally economical alternative to the common method of using simulation for sensitivity studies, that is, making repeated simulation runs with incrementally changed inputs and then observing the magnitudes of the resulting changes in the outputs of interest. Although no theory now exists for constructing the requisite "metamodels" for direct sensitivity analysis, it is possible to construct them, as is illustrated by three examples in the field of operations research.
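The metamodel idea can be sketched in a few lines: fit an inexpensive surrogate to a handful of simulation runs, then read sensitivities directly from the surrogate instead of re-running the simulation. In the sketch below the "simulation" is a made-up stand-in (a noisy M/M/1 congestion formula), and the cubic response surface is just one plausible metamodel form, not the paper's specific constructions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(arrival_rate):
    # Stand-in for a costly stochastic simulation: mean number in an
    # M/M/1 system (service rate 1) plus noise mimicking run-to-run variation.
    return arrival_rate / (1.0 - arrival_rate) + rng.normal(0.0, 0.01)

# A small experimental design: one run at each of a few input settings
x = np.linspace(0.1, 0.7, 13)
y = np.array([simulate(xi) for xi in x])

# Metamodel: a cubic response surface fitted by least squares
meta = np.poly1d(np.polyfit(x, y, 3))

# Direct sensitivity analysis: differentiate the metamodel analytically,
# with no further simulation runs required
sensitivity_at_half = meta.deriv()(0.5)
```

Once fitted, the metamodel answers "what if the input moves slightly?" for any input setting at negligible cost, which is exactly the economy the abstract claims over incremental re-simulation.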
Agile manufacturing, fast-response micromarketing, and the rise of the virtual organization have led managers to focus on cross-functional business processes that link various divisions and organizations. These processes may be realized as one or more workflows, each of which is an instantiation of a process under certain conditions. Because an ability to adapt processes to workflow conditions is essential for organizational responsiveness, identifying and analyzing significant workflows is an important activity for managers, organization designers, and information systems specialists. A variety of software systems have been developed to aid in the structuring and implementation of workflow systems, but they are mostly visualization tools with few analytical capabilities. For example, they do not allow their users to easily determine which information elements are needed to compute other information elements, whether certain tasks depend on other tasks, and how resource availability affects information and tasks. Analyses of this type can be performed by inspection, but this gives rise to the possibility of error, especially in large systems. In this paper, we show how a mathematical construct called a metagraph can be used to represent workflows, so that such questions can be addressed through formal operations, leading to more effective design of organizational processes.
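The kind of formal operation the abstract describes can be sketched as a derivability check over a metagraph, whose defining feature is that each edge maps a *set* of input elements to a set of output elements. The element names and the tiny workflow below are hypothetical illustrations, not taken from the paper.

```python
# Each edge: (set of required input elements, set of produced output elements)
edges = [
    ({"order"}, {"invoice"}),
    ({"invoice", "inventory"}, {"shipment"}),
    ({"shipment"}, {"delivery_notice"}),
]

def derivable(available, edges):
    """Return every information element computable from `available`,
    by repeatedly firing any edge whose full input set is known."""
    known = set(available)
    changed = True
    while changed:
        changed = False
        for inputs, outputs in edges:
            if inputs <= known and not outputs <= known:
                known |= outputs
                changed = True
    return known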
O rganizations today face increasing pressures to integrate their processes across disparate divisions and functional units, in order to remove inefficiencies as well as to enhance manageability. Process integration involves two major types of changes to process structure: (1) synthesizing processes from separate but interdependent subprocesses, and (2) decomposing aggregate processes into distinct subprocesses that are more manageable. We present an approach to facilitate this type of synthesis and decomposition through formal analysis of process structure using a mathematical structure called a metagraph.
A frequent complaint about neural net models is that they fail to explain their results in any useful way. The problem is not a lack of information, but an abundance of information that is difficult to interpret. When trained, neural nets will provide a predicted output for a posited input, and they can provide additional information in the form of interelement connection strengths. But this latter information is of little use to analysts and managers who wish to interpret the results they have been given. In this paper, we develop a measure of the relative importance of the various input elements and hidden layer elements, and we use this to interpret the contribution of these components to the outputs of the neural net.Index Terms-Clustering methods, hidden element contribution, input element contribution, measurement index, neural network architecture.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
customersupport@researchsolutions.com
10624 S. Eastern Ave., Ste. A-614
Henderson, NV 89052, USA
Copyright © 2024 scite LLC. All rights reserved.
Made with 💙 for researchers
Part of the Research Solutions Family.