MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks

Bibliographic Details
Published in IEEE/IFIP Network Operations and Management Symposium, pp. 1-10
Main Authors Galliera, Raffaele; Morelli, Alessandro; Fronteddu, Roberto; Suri, Niranjan
Format Conference Proceeding
Language English
Published IEEE, 08.05.2023
ISSN 2374-9709
DOI 10.1109/NOMS56928.2023.10154210

Summary: Fast and efficient transport protocols are the foundation of an increasingly distributed world. The burden of continuously delivering improved communication performance to support next-generation applications and services, combined with the increasing heterogeneity of systems and network technologies, has promoted the design of Congestion Control (CC) algorithms that perform well under specific environments. The challenge of designing a generic CC algorithm that can adapt to a broad range of scenarios is still an open research question. To tackle this challenge, we propose to apply a novel Reinforcement Learning (RL) approach. Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return, and models the learning process as an infinite-horizon task. We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch that researchers have encountered when applying RL to CC. We evaluated our solution on the task of file transfer and compared it to TCP Cubic. While further research is required, results show that MARLIN achieves performance comparable to TCP with little hyperparameter tuning, in a task significantly different from its training setting. Therefore, we believe that our work represents a promising first step towards building CC algorithms based on the maximum entropy RL framework.
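
For context, the "maximize both entropy and return" objective mentioned in the summary is the standard maximum entropy RL objective optimized by Soft Actor-Critic; the discounted infinite-horizon form below is a textbook formulation (following Haarnoja et al.), not an equation reproduced from the paper itself:

J(\pi) = \sum_{t=0}^{\infty} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \Big[ \gamma^t \big( r(s_t, a_t) + \alpha \, \mathcal{H}\!\left(\pi(\cdot \mid s_t)\right) \big) \Big]

Here r(s_t, a_t) is the per-step reward (in a CC setting, derived from observed network feedback), \mathcal{H}(\pi(\cdot \mid s_t)) is the entropy of the policy at state s_t, \alpha is the temperature weighting exploration against return, and \gamma is the discount factor of the infinite-horizon formulation.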