OMAHA: Opportunistic Message Aggregation for pHase-based Algorithms

Bibliographic Details
Published in: 2023 IEEE 28th Pacific Rim International Symposium on Dependable Computing (PRDC), pp. 150-160
Main Authors: Mahamdi, Celia; Lejeune, Jonathan; Sopena, Julien; Sens, Pierre; Makpangou, Mesaac
Format: Conference Proceeding
Language: English
Published: IEEE, 24.10.2023
ISSN: 2473-3105
DOI: 10.1109/PRDC59308.2023.00027

Summary: In the cloud computing context, several applications run concurrently over the same underlying physical infrastructure. Phase-based algorithms are key building blocks for many distributed applications such as DBMSs or transaction validation services. Indeed, these applications rely on consensus or atomic validation solved by phase-based algorithms (Paxos, ZAB, two-phase commit, ...). In each phase, at least one participant broadcasts a message and waits for responses from a subset of the recipients before starting the next phase. For a given phase-based algorithm, it is therefore possible to predict each node's future communications. Based on this observation, we propose a generic and minimally intrusive solution that saves network bandwidth in a cloud context by opportunistically aggregating the messages sent by applications. We propose a new API that makes it easy to apply our mechanism to applications using phase-based algorithms. The core of this API is an overloaded send primitive through which users define a trade-off between message saving and latency degradation. We evaluate our mechanism using multiple instances of the same algorithm (three variants of Paxos consensus and the ZooKeeper Atomic Broadcast algorithm) running concurrently. Our results show that a well-tuned send primitive saves a large amount of bandwidth with little latency degradation.
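The record itself contains no code; purely as a rough illustration of the idea described in the abstract, the sketch below shows one way an overloaded send primitive with a per-call latency budget (the saving/latency trade-off knob) might look. The language choice (Go) and all identifiers (AggSender, Send, maxDelay, deliver) are assumptions for illustration, not the authors' actual API.

```go
// A minimal sketch, NOT the paper's implementation: an aggregating send
// primitive in the spirit of OMAHA. Messages to the same destination are
// buffered and flushed as one batch when the oldest buffered message
// reaches its caller-supplied latency budget (a zero budget sends at once).
// All identifiers here (AggSender, Send, maxDelay, ...) are hypothetical.
package main

import (
	"fmt"
	"sync"
	"time"
)

type AggSender struct {
	mu      sync.Mutex
	buffers map[string][][]byte // pending payloads per destination
	timers  map[string]*time.Timer
	deliver func(dest string, batch [][]byte) // underlying network send
}

func NewAggSender(deliver func(string, [][]byte)) *AggSender {
	return &AggSender{
		buffers: make(map[string][][]byte),
		timers:  make(map[string]*time.Timer),
		deliver: deliver,
	}
}

// Send buffers msg for dest. maxDelay is the caller's trade-off knob:
// zero forces an immediate send, larger values allow more aggregation
// at the cost of added latency.
func (s *AggSender) Send(dest string, msg []byte, maxDelay time.Duration) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.buffers[dest] = append(s.buffers[dest], msg)
	if maxDelay == 0 {
		s.flushLocked(dest)
		return
	}
	// Arm a flush timer only for the first message of a batch, so that
	// message never waits longer than its own budget; later messages
	// simply ride on the already-armed timer.
	if _, armed := s.timers[dest]; !armed {
		s.timers[dest] = time.AfterFunc(maxDelay, func() {
			s.mu.Lock()
			defer s.mu.Unlock()
			s.flushLocked(dest)
		})
	}
}

// flushLocked sends everything buffered for dest as one batch.
// Caller must hold s.mu.
func (s *AggSender) flushLocked(dest string) {
	if t, ok := s.timers[dest]; ok {
		t.Stop()
		delete(s.timers, dest)
	}
	if batch := s.buffers[dest]; len(batch) > 0 {
		s.buffers[dest] = nil
		s.deliver(dest, batch)
	}
}

func main() {
	sender := NewAggSender(func(dest string, batch [][]byte) {
		fmt.Printf("to %s: %d message(s) in one packet\n", dest, len(batch))
	})
	// Two phase messages to the same acceptor within the latency budget
	// leave the node as a single aggregated packet.
	sender.Send("acceptor-1", []byte("prepare(5)"), 2*time.Millisecond)
	sender.Send("acceptor-1", []byte("accept(5,v)"), 2*time.Millisecond)
	time.Sleep(10 * time.Millisecond)
}
```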