This paper describes a novel theoretical and empirical approach to tasks such as business process redesign and knowledge management. The project involves collecting examples of how different organizations perform similar processes, and organizing these examples in an on-line "process handbook." The handbook is intended to help people: (1) redesign existing organizational processes, (2) invent new organizational processes (especially ones that take advantage of information technology), and (3) share ideas about organizational practices. A key element of the work is an approach to analyzing processes at various levels of abstraction, thus capturing both the details of specific processes and the "deep structure" of their similarities. This approach uses ideas from computer science about inheritance and from coordination theory about managing dependencies. A primary advantage of the approach is that it allows people to explicitly represent the similarities (and differences) among related processes and to easily find or generate sensible alternatives for how a given process could be performed. In addition to describing this new approach, the work reported here demonstrates the basic technical feasibility of these ideas and gives one example of their use in a field study.
An organization is architecturally competent if it has the ability to acquire, use, and sustain the skills and knowledge necessary to carry out architecture-related practices that lead to systems serving the organization's business goals. This paper presents principles of architecture competence, along with four models that aid in explaining, measuring, and improving the architecture competence of an individual or an organization with respect to these principles. The principles are based on a set of fundamental beliefs about software architecture.
Abstract: In commercial-off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses, and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work bounding the request re-ordering effect of COTS memory controllers. Our work also enables the quantification of the extent to which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.