Thread Lock Section-Aware Scheduling on Asymmetric Single-ISA Multi-Core (2015)
DOI: 10.1109/lca.2014.2357805

Cited by 9 publications (5 citation statements) | References 10 publications
“…Machine learning is frequently used to achieve better performance in systems with transactional memory [56]. The focus in [11]-[13] is on scheduling fairness; formulas for calculating fairness based on time or execution progress on each core type are presented.…”
Section: A Single-ISA Asymmetric Multicore
confidence: 99%
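As an illustration of the kind of fairness metric referenced in the excerpt above, a commonly used formulation (an assumption here, not necessarily the exact formula from [11]-[13]) compares the slowdowns the individual threads experience on the asymmetric system relative to running alone on a fast core:

\[
\textit{fairness} = \frac{\min_i S_i}{\max_i S_i},
\qquad
S_i = \frac{T_i^{\mathrm{shared}}}{T_i^{\mathrm{alone}}},
\]

where \(T_i^{\mathrm{alone}}\) is thread \(i\)'s execution time when running alone on a fast core and \(T_i^{\mathrm{shared}}\) its time on the shared asymmetric system. Fairness approaches 1 when all threads are slowed down equally and approaches 0 when some threads make far less progress than others.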
“…Prior research on Single-ISA asymmetric multiprocessors proposed moving a thread to the right core to improve performance. The move can be made at a fine grain, by migrating only a small part of the thread [2], [4]-[6], or at a coarse grain, by migrating the whole thread [7]-[13]. Fine-grain thread migration is usually performed by hardware, while coarse-grain migration is performed by the OS scheduler.…”
Section: Introduction
confidence: 99%
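To make the coarse-grain case concrete, here is a minimal sketch (not code from any of the cited works) of how an OS-level scheduler or user-level runtime on Linux could migrate the calling thread to a fast core via the standard sched_setaffinity() interface; BIG_CORE_ID is an assumed placeholder for the ID of a high-performance core.

/* Minimal sketch, assuming a Linux system: pin the calling thread to a
 * designated fast core. This illustrates coarse-grain migration done in
 * software; it is not code from the cited papers. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

#define BIG_CORE_ID 0           /* assumed ID of a high-performance core */

static int migrate_self_to_big_core(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(BIG_CORE_ID, &mask);
    /* pid 0 = calling thread; the kernel moves it at the next scheduling point */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

int main(void)
{
    if (migrate_self_to_big_core() == 0)
        printf("now running on CPU %d\n", sched_getcpu());
    return 0;
}

Fine-grain migration, by contrast, is carried out by hardware below the ISA level and has no such user-visible interface.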
“…First, we propose the kernel-to-user-mode transition-aware hardware scheduling (KUTHS) policy, which extends our previous policy [9] to apply to larger many-core systems and to systems in which the last-level caches (LLCs) are private to a group of cores. KUTHS was influenced by fairness-aware scheduling and bottleneck identification techniques, and thus aims to reduce thread serialization and improve parallel thread performance.…”
confidence: 99%
“…Other authors have proposed specific support for accelerating the execution of multi-threaded applications on AMPs [3,32,54,26,27,39,25]. Most of these proposals use the fast cores to accelerate sequential execution phases and other bottlenecks present in parallel applications, employing different software [3,54] or hardware [26,27,39] techniques. Some of these proposals exploit the interaction of the OS with the runtime system, which runs in user mode [32,54].…”
Section: Support for Multi-threaded Applications (unclassified)
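As a rough user-level illustration of the "accelerate bottleneck phases on fast cores" idea described in the excerpt above (a sketch under assumed core IDs and a pthread mutex, not the mechanism of any specific cited proposal), a runtime could re-pin a thread to a fast core while it executes a critical section and return it to a slow core afterwards:

/* Illustrative software analogue only; the cited hardware proposals do this
 * transparently. FAST_CORE and SLOW_CORE are assumed placeholder IDs. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

#define FAST_CORE 0             /* assumed big-core ID */
#define SLOW_CORE 4             /* assumed little-core ID */

static void pin_to(int cpu)     /* pin the calling thread to one CPU */
{
    cpu_set_t m;
    CPU_ZERO(&m);
    CPU_SET(cpu, &m);
    sched_setaffinity(0, sizeof(m), &m);
}

/* Run the critical section on the fast core, then give the fast core back. */
static void accelerated_lock(pthread_mutex_t *mtx)
{
    pthread_mutex_lock(mtx);
    pin_to(FAST_CORE);
}

static void accelerated_unlock(pthread_mutex_t *mtx)
{
    pin_to(SLOW_CORE);
    pthread_mutex_unlock(mtx);
}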
“…More recently, the same authors proposed UBA [27], a mechanism that extends BIS and adds support for accelerating lagging threads, that is, threads that take longer to execute than others because of load imbalance or other microarchitectural effects such as cache misses. In [39], the authors propose the Thread Lock Section Scheduler (TLSS) scheduling algorithm, a hardware technique that identifies bottlenecks in multi-threaded applications. Compared with BIS and UBA, TLSS requires no ISA extensions and far fewer hardware extensions.…”
Section: Support for Multi-threaded Applications (unclassified)
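TLSS itself identifies lock sections in hardware without ISA changes; as a purely illustrative software analogue (the names and the measurement approach here are assumptions, not the TLSS mechanism), one could flag the most contended lock by accumulating the time threads spend waiting to acquire it, and treat its current holder as the candidate to accelerate on a fast core:

/* Sketch of lock-contention bookkeeping in user space; TLSS does the
 * equivalent identification in hardware. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

typedef struct {
    pthread_mutex_t mtx;
    atomic_llong    wait_ns;    /* total time threads spent blocked on this lock */
} tracked_lock_t;

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void tracked_lock(tracked_lock_t *l)
{
    long long start = now_ns();
    pthread_mutex_lock(&l->mtx);
    atomic_fetch_add(&l->wait_ns, now_ns() - start);
}

static void tracked_unlock(tracked_lock_t *l)
{
    pthread_mutex_unlock(&l->mtx);
}

/* Periodically, the lock with the largest wait_ns is the bottleneck candidate;
 * an AMP-aware scheduler would run its holder on a fast core. */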