Garbage collection offers numerous software engineering advantages, but it interacts poorly with virtual memory managers. Most existing garbage collectors visit many more pages than the application itself and touch pages without regard to which ones are in memory, especially during full-heap garbage collection. The resulting paging can cause throughput to plummet and pause times to spike up to seconds or even minutes. We present a garbage collector that avoids paging. This bookmarking collector cooperates with the virtual memory manager to guide its eviction decisions. It records summary information ("bookmarks") about evicted pages that enables it to perform in-memory full-heap collections. In the absence of memory pressure, the bookmarking collector matches the throughput of the best collector we tested while running in smaller heaps. In the face of memory pressure, it improves throughput by up to a factor of five and reduces pause times by up to a factor of 45 over the next best collector. Compared to a collector that consistently provides high throughput (generational mark-sweep), the bookmarking collector reduces pause times by up to 218x and improves throughput by up to 41x.
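The core mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation: the real bookmarking collector records per-object summary bits, while the sketch below simplifies to page granularity, and all names (`evict_page`, `must_scan`, `conservatively_live`, `NPAGES`) are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 64  /* toy heap size, in pages */

/* Hypothetical per-page state; the collector's actual bookmarks are
 * finer-grained, but the idea is the same. */
typedef struct {
    bool evicted;     /* page has been paged out */
    bool bookmarked;  /* some evicted page holds a pointer into this page */
} page_info;

static page_info pages[NPAGES];

/* Conceptually invoked when the virtual memory manager notifies the
 * collector it is about to evict `page`: scan the page once, record
 * ("bookmark") the pages its pointers target, then let it go to disk. */
static void evict_page(int page, const int *target_pages, size_t n) {
    pages[page].evicted = true;
    for (size_t i = 0; i < n; i++)
        pages[target_pages[i]].bookmarked = true;
}

/* During a full-heap collection, evicted pages are never faulted
 * back in ... */
static bool must_scan(int page) {
    return !pages[page].evicted;
}

/* ... and bookmarked pages are conservatively treated as referenced,
 * since an evicted page may still point into them. */
static bool conservatively_live(int page) {
    return pages[page].bookmarked;
}
```

The trade-off is conservatism: bookmarked objects are retained even if the evicted referent is actually dead, in exchange for collections that never touch non-resident pages.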
Because most application data is dynamically allocated, the memory manager plays a crucial role in application performance by determining the spatial locality of heap objects. Previous general-purpose allocators have focused on reducing fragmentation, while most locality-improving allocators have either focused on improving the locality of the allocator (not the application) or required information supplied by the programmer or obtained by profiling. We present a high-performance memory allocator that builds on previous allocator designs to achieve low fragmentation while transparently improving application locality. Our allocator, called Vam, improves page-level locality by managing the heap in page-sized chunks and aggressively giving up free pages to the virtual memory manager. By eliminating object headers, using fine-grained size classes, and allocating objects with a reap-based algorithm, Vam improves cache-level locality. Over a range of large-footprint benchmarks, Vam improves application performance by an average of 4-8% versus the Lea (Linux) and FreeBSD allocators. When memory is scarce, Vam improves application performance by up to 2x compared to the FreeBSD allocator, and by over 10x compared to the Lea allocator. We show that synergy between Vam's layout algorithms and the Linux swap clustering algorithm increases its swap prefetchability, further improving its performance when paging.
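Two of Vam's ingredients, fine-grained size classes and headerless objects, can be sketched briefly. This is a sketch under assumptions, not Vam's actual code: the 8-byte granularity and the names (`size_class`, `page_meta`, `GRAIN`) are illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Fine-grained size classes: round each request up to the next multiple
 * of an assumed 8-byte grain, so same-sized objects pack densely within
 * a page-sized chunk with little internal fragmentation. */
#define GRAIN 8
static size_t size_class(size_t request) {
    return (request + GRAIN - 1) & ~(size_t)(GRAIN - 1);
}

/* Headerless objects: instead of a per-object header, the object size is
 * recovered from metadata kept once per page-sized chunk. When in_use
 * drops to zero, the whole page can be returned to the virtual memory
 * manager (e.g., via madvise on Linux). */
typedef struct {
    size_t obj_size;  /* size class served by every object on this page */
    unsigned in_use;  /* count of live objects on this page */
} page_meta;
```

Eliminating headers both shrinks objects and keeps the allocator from dirtying object memory on free, which matters for the page-level locality claims above.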
We present the first systematic adoption of Semantic Web technology for the integration, management, and utilization of Traditional Chinese Medicine (TCM) information and knowledge resources. As a result, the largest TCM Semantic Web ontology has been engineered as the uniform knowledge representation mechanism; an ontology-based query and search engine has been deployed, mapping legacy, heterogeneous relational databases to the Semantic Web layer to support query and search across database boundaries; and the first global herb-drug interaction network has been mapped through semantic integration, with a semantic graph mining methodology implemented for discovering and interpreting interesting patterns in this network. The platform and its underlying methodology prove effective for TCM-related drug usage, discovery, and safety analysis.