Process mining techniques rely on event logs: the extraction of a process model (discovery) takes an event log as the input, the adequacy of a process model (conformance) is checked against an event log, and the enhancement of a process model is performed by using available data in the log. Several notations and formalisms for event log representation have been proposed in recent years to enable efficient algorithms for the aforementioned process mining problems. In this paper we show how Conditional Partial Order Graphs (CPOGs), a recently introduced formalism for the compact representation of families of partial orders, can be used in the process mining field, in particular for addressing the problem of compact and easy-to-comprehend representation of event logs with data. We present algorithms for extracting both the control flow and the relevant data parameters from a given event log, and show how CPOGs can be used for efficient and effective visualisation of the obtained results. We demonstrate that the resulting representation can be used to reveal the hidden interplay between the control and data flows of a process, thereby opening the way for new process mining techniques capable of exploiting this interplay.
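The core idea of a CPOG can be sketched in a few lines: vertices and arcs carry Boolean conditions over some variables, and each assignment of those variables projects out one partial order of the family. The following Python sketch is an assumption-level illustration (the graph, variable `x`, and function names are invented for exposition, not taken from the paper's algorithms):

```python
# A tiny CPOG over one Boolean variable x: vertex "a" is unconditional,
# "b" exists when x is true, "c" when x is false, and the arcs follow suit.
cpog = {
    "vertices": {"a": lambda x: True, "b": lambda x: x, "c": lambda x: not x},
    "arcs": {("a", "b"): lambda x: x, ("a", "c"): lambda x: not x},
}

def project(graph, x):
    """Keep only vertices and arcs whose conditions hold under assignment x,
    yielding one partial order of the family encoded by the CPOG."""
    vs = {v for v, cond in graph["vertices"].items() if cond(x)}
    arcs = {(u, w) for (u, w), cond in graph["arcs"].items()
            if cond(x) and u in vs and w in vs}
    return vs, arcs
```

Here `project(cpog, True)` yields the order a before b, while `project(cpog, False)` yields a before c, so one annotated graph compactly stores both behaviours.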
Barrier primitives provided by standard parallel programming APIs are the primary means by which applications implement global synchronisation. Typically, these primitives are fully committed to synchronisation in the sense that, once a barrier is entered, synchronisation is the only way out. For message-passing applications, this raises the question of what happens when a message arrives at a thread that already resides in a barrier. Without a satisfactory answer, barriers do not interact with message-passing in any useful way. In this paper, we propose a new refutable barrier primitive that combines with message-passing to form a simple, expressive, efficient, well-defined API. It has a clear semantics based on termination detection, and supports the development of both globally-synchronous and asynchronous parallel applications. To evaluate the new primitive, we implement it in a prototype large-scale message-passing machine with 49,152 RISC-V threads distributed over 48 FPGAs. We show that hardware support for the primitive leads to a highly-efficient implementation, capable of synchronisation rates that are an order-of-magnitude higher than what is achievable in software. Using the primitive, we implement synchronous and asynchronous versions of a range of applications, observing that each version can have significant advantages over the other, depending on the application. Therefore, a barrier primitive supporting both styles can greatly assist the development of parallel programs.
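The termination-detection semantics can be captured in a toy sequential model (the function and state names below are illustrative assumptions, not the paper's API): the barrier completes only when every thread has entered it and no messages are in flight; a thread with a pending message is refuted and must leave the barrier to handle it.

```python
def refutable_barrier_step(in_barrier, pending_msgs):
    """Toy model of a refutable barrier's decision rule.
    in_barrier:   list of booleans, True if thread i has entered the barrier.
    pending_msgs: list of counts of messages in flight to thread i.
    Returns ("refuted", threads) if any thread must exit to receive a message,
    ("complete", []) if all threads are in and no messages remain,
    or ("waiting", []) otherwise."""
    refuted = [t for t, msgs in enumerate(pending_msgs) if msgs > 0]
    if refuted:
        return "refuted", refuted
    if all(in_barrier):
        return "complete", []
    return "waiting", []
```

For example, with both threads in the barrier and no traffic the barrier completes, whereas a message in flight to thread 1 refutes its barrier entry and it must exit to process the message.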
Phospholipid membranes surround the cell and its internal organelles, and their multicomponent nature allows the formation of domains that are important in cellular signalling, the immune system, and bacterial infection. Cytoplasmic compartments are also created by the phase separation of intrinsically disordered proteins into biomolecular condensates. The ubiquity of lipid membranes and protein condensates raises the question of how three-dimensional droplets might interact with two-dimensional domains, and whether this coupling has physiological or pathological importance. Here, we explore the equilibrium morphologies of a dilute phase of a model disordered protein interacting with an ideal-mixing, two-component lipid membrane using coarse-grained molecular simulations. We find that the proteins can wet the membrane with and without domain formation, and form phase separated droplets bound to membrane domains. Results from much larger simulations performed on a novel non-von-Neumann compute architecture called POETS, which greatly accelerates their execution compared to conventional hardware, confirm the observations. Reducing the wall clock time for such simulations requires new architectures and computational techniques. We demonstrate here an inter-disciplinary approach that uses real-world biophysical questions to drive the development of new computing hardware and simulation algorithms.
A sparse matrix-matrix multiplication (SpMM) accelerator with 48 heterogeneous cores and a reconfigurable memory hierarchy is fabricated in 40-nm CMOS. The compute fabric consists of dedicated floating-point multiplication units and general-purpose Arm Cortex-M0 and Cortex-M4 cores. The on-chip memory reconfigures as scratchpad or cache, depending on the phase of the algorithm. The memory and compute units are interconnected with synthesizable coalescing crossbars for efficient memory access. The 2.0-mm × 2.6-mm chip exhibits a 12.6× (8.4×) energy efficiency gain, an 11.7× (77.6×) off-chip bandwidth efficiency gain, and a 17.1× (36.9×) compute density gain against a high-end CPU (GPU) across a diverse set of synthetic and real-world power-law graph-based sparse matrices.

Index Terms: decoupled access execution, reconfigurability, accelerator, sparse matrix multiplier, synthesizable crossbar.

I. INTRODUCTION

The emergence of big data and massive social networks has led to increased importance of graph analytics and machine learning workloads. One of the fundamental kernels that drive these workloads is matrix multiplication. Traditionally, matrix multiplication workloads focused on performing
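The SpMM kernel such an accelerator targets touches only nonzero entries. A minimal software sketch of a row-wise (Gustavson-style) SpMM over dict-of-rows matrices, offered for illustration rather than as the chip's actual algorithm:

```python
def spmm(A, B):
    """Row-wise sparse matrix-matrix multiply.
    A and B map row index -> {column index: value}; only nonzeros
    are stored and only nonzero products are computed."""
    C = {}
    for i, row in A.items():
        acc = {}
        for k, a_ik in row.items():          # nonzeros of row i of A
            for j, b_kj in B.get(k, {}).items():  # nonzeros of row k of B
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C

A = {0: {0: 2.0, 2: 1.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 1.0}, 2: {1: 5.0}}
C = spmm(A, B)  # C[0][1] = 2*4 + 1*5 = 13, C[1][0] = 3*1 = 3
```

The irregular, input-dependent accesses into `B` in the inner loop are exactly what makes SpMM bandwidth-bound on general-purpose hardware and motivates coalescing crossbars and a reconfigurable scratchpad/cache.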
Asynchronous circuits can be useful in many applications; however, they are yet to be widely used in industry. The main reason for this is the steep learning curve of concurrency models, such as Signal Transition Graphs, developed by the academic community for the specification and synthesis of asynchronous circuits. In this paper we introduce a compositional design flow for asynchronous circuits using concepts, a set of formalised descriptions of system requirements. Our aim is to simplify the process of capturing system requirements in the form of a formal specification, and to promote concepts as a means for design reuse. The proposed design flow is applied to the development of an asynchronous buck converter.
Although graphics processing units (GPUs) are capable of high compute throughput, their memory systems need to supply the arithmetic pipelines with data at a sufficient rate to avoid stalls. For benchmarks that have divergent access patterns or cause the L1 cache to run out of resources, the link between the GPU's load/store unit and the L1 cache becomes a bottleneck in the memory system, leading to low utilization of compute resources. While current GPU memory systems are able to coalesce requests between threads in the same warp, we identify a form of spatial locality between threads in multiple warps. We use this locality, which is overlooked in current systems, to merge requests being sent to the L1 cache. This relieves the bottleneck between the load/store unit and the cache, and provides an opportunity to prioritize requests to minimize cache thrashing. Our implementation, WarpPool, yields a 38% speedup on memory throughput-limited kernels by increasing the throughput to the L1 by 8% and reducing the number of L1 misses by 23%. We also demonstrate that WarpPool can improve GPU programmability by achieving high performance without the need to optimize workloads' memory access patterns. A Verilog implementation, including place-and-route, shows that WarpPool requires 1.0% added GPU area and 0.8% added power.
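The merging idea can be modelled in a few lines: outstanding byte addresses from different warps that fall on the same cache line collapse into a single L1 request. The sketch below is a toy model under assumed parameters (a 128-byte line size and the function name are illustrative, not the hardware design):

```python
LINE_BYTES = 128  # assumed L1 line size for illustration

def merge_requests(addresses):
    """Group outstanding byte addresses by cache line, modelling how
    requests from multiple warps that hit the same line can be merged
    into one request to the L1."""
    lines = {}
    for addr in addresses:
        lines.setdefault(addr // LINE_BYTES, []).append(addr)
    return lines

reqs = merge_requests([0, 64, 128, 132, 4])
# Addresses 0, 64 and 4 share line 0, while 128 and 132 share line 1,
# so five per-thread requests collapse into two L1 line requests.
```

In hardware, the pool of pending requests additionally enables scheduling decisions, such as prioritising requests so as to reduce cache thrashing.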