Exploring locking & partitioning for predictable shared caches on multi-cores
| Published in | 2008 45th ACM/IEEE Design Automation Conference, pp. 300 - 303 |
|---|---|
| Main Authors | , |
| Format | Conference Proceeding |
| Language | English |
| Published | New York, NY, USA: ACM; IEEE, 08.06.2008 |
| Series | ACM Conferences |
| ISBN | 1605581151; 9781605581156 |
| ISSN | 0738-100X |
| DOI | 10.1145/1391469.1391545 |
| Summary: | Multi-core architectures consisting of multiple processing cores on a chip have become increasingly prevalent. Synthesizing hard real-time applications onto these platforms is quite challenging, as contention among the cores for various shared resources leads to inherent timing unpredictability. This paper proposes using the shared cache in a predictable manner through a combination of locking and partitioning mechanisms. We explore possible design choices and evaluate their effects on the worst-case application performance. Our study reveals certain design principles that strongly dictate the performance of a predictable memory hierarchy. |
|---|---|
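The record above gives only the abstract, so the following is a minimal, self-contained sketch of the two mechanisms the summary names, way-based partitioning and line locking, modeled as a toy software simulation of a shared set-associative cache. The cache geometry, the fill policy, and every identifier below are illustrative assumptions; it does not reproduce the paper's actual hardware design choices or evaluation.

```c
/*
 * Toy model (not the paper's implementation): a shared set-associative
 * cache whose ways are statically partitioned among cores, with
 * optional per-line locking. All sizes and names are assumptions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS   16   /* assumed geometry: 16 sets x 8 ways, 64 B lines */
#define NUM_WAYS   8
#define NUM_CORES  2
#define LINE_SIZE  64

typedef struct {
    bool     valid;
    bool     locked;   /* locked lines are never evicted */
    uint32_t tag;
    int      owner;    /* core whose partition this way belongs to */
} line_t;

static line_t cache[NUM_SETS][NUM_WAYS];

/* Static way partitioning: each core owns a contiguous range of ways,
 * so one core can never evict another core's lines, which is the
 * source of inter-core timing interference the abstract refers to. */
static void init_partitions(void)
{
    int ways_per_core = NUM_WAYS / NUM_CORES;
    for (int s = 0; s < NUM_SETS; s++)
        for (int w = 0; w < NUM_WAYS; w++) {
            cache[s][w].valid  = false;
            cache[s][w].locked = false;
            cache[s][w].owner  = w / ways_per_core;
        }
}

/* Access one address on behalf of `core`; returns true on a hit.
 * On a miss, the fill goes into an unlocked way of the core's own
 * partition only (simple first-free/first-unlocked policy). */
static bool cache_access(int core, uint32_t addr)
{
    uint32_t set = (addr / LINE_SIZE) % NUM_SETS;
    uint32_t tag = addr / (LINE_SIZE * NUM_SETS);

    for (int w = 0; w < NUM_WAYS; w++) {
        line_t *l = &cache[set][w];
        if (l->valid && l->owner == core && l->tag == tag)
            return true;                 /* hit in own partition */
    }
    for (int w = 0; w < NUM_WAYS; w++) {
        line_t *l = &cache[set][w];
        if (l->owner == core && !l->locked) {
            l->valid = true;             /* miss: fill own unlocked way */
            l->tag   = tag;
            return false;
        }
    }
    return false;   /* every owned way is locked; treat as a bypass */
}

/* Lock the line holding `addr` in the core's partition so it stays
 * resident, making its future access latency predictable. */
static void cache_lock_line(int core, uint32_t addr)
{
    uint32_t set = (addr / LINE_SIZE) % NUM_SETS;
    uint32_t tag = addr / (LINE_SIZE * NUM_SETS);
    for (int w = 0; w < NUM_WAYS; w++) {
        line_t *l = &cache[set][w];
        if (l->valid && l->owner == core && l->tag == tag)
            l->locked = true;
    }
}

int main(void)
{
    init_partitions();
    cache_access(0, 0x1000);        /* core 0: cold miss, line filled   */
    cache_lock_line(0, 0x1000);     /* pin the hot line in core 0's ways */
    cache_access(1, 0x1000);        /* core 1 misses into its own ways   */
    printf("core 0 re-access hit: %d\n", cache_access(0, 0x1000)); /* 1 */
    return 0;
}
```

Because each core can evict only unlocked lines inside its own ways, the hit/miss behavior of one core is independent of what the other cores do, which is the kind of worst-case predictability the abstract describes; the paper itself explores how such locking and partitioning choices trade off worst-case performance.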