Talks and Poster Presentations (with Proceedings-Entry):
M. Bachl, T. Zseby, J. Fabini:
"Rax: Deep Reinforcement Learning for Congestion Control";
Talk: IEEE International Conference on Communications (ICC 2019),
- 05-24-2019; in: "2019 IEEE International Conference on Communications (ICC)".
This paper proposes Reactive Adaptive eXperience-based congestion control (Rax), a new congestion control method that uses online reinforcement learning to maintain a congestion window that is optimal with respect to a given utility function under current network conditions. We use a neural-network-based approach that can be initialized either with random weights or with a previously trained network to improve stability and convergence time. Because the processing of rewards in congestion control depends on the arrival of acknowledgments, which are delayed and received one by one, the problem does not fit current implementations of deep reinforcement learning. As a remedy we propose Partial Action Learning, a formulation of deep reinforcement learning that supports delayed and partial rewards. We show that our method converges to a stable, close-to-optimal solution within minutes and outperforms existing congestion control algorithms in typical networks. This paper thus demonstrates that deep reinforcement learning can be performed online and can compete with classic congestion control schemes such as Cubic.
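The core idea behind Partial Action Learning, as the abstract describes it, is that an action's reward is not known when the action is taken: it arrives later, piece by piece, as individual acknowledgments come back. The sketch below is a minimal, hypothetical illustration of that bookkeeping, not the paper's implementation: a toy value table stands in for the neural network, and the method names (`act`, `partial_reward`) are invented for this example. The learner records each action under an identifier, accumulates reward fragments as they arrive, and updates its value estimate only once the delayed reward is complete.

```python
import random
from collections import defaultdict

class PartialActionLearner:
    """Illustrative sketch of learning with delayed, partial rewards.

    In Rax-style congestion control, the "action" would be a congestion
    window adjustment, and each acknowledgment would deliver one reward
    fragment for an action taken earlier. All names here are hypothetical.
    """

    def __init__(self, actions, lr=0.1, epsilon=0.1):
        self.actions = actions            # e.g. cwnd adjustments: decrease/hold/increase
        self.lr = lr                      # learning rate for the value update
        self.epsilon = epsilon            # exploration probability
        self.q = defaultdict(float)       # toy value table standing in for a neural network
        self.pending = {}                 # action_id -> (action, reward fragments so far)

    def act(self, action_id):
        """Choose an action now; its reward will arrive later in fragments."""
        if random.random() < self.epsilon:
            action = random.choice(self.actions)
        else:
            action = max(self.actions, key=lambda a: self.q[a])
        self.pending[action_id] = (action, [])
        return action

    def partial_reward(self, action_id, fragment, done=False):
        """Attribute one reward fragment (e.g. from one ACK) to a past action."""
        action, fragments = self.pending[action_id]
        fragments.append(fragment)
        if done:
            # Update only once the delayed reward is complete.
            total = sum(fragments)
            self.q[action] += self.lr * (total - self.q[action])
            del self.pending[action_id]
```

A usage round-trip: call `act("pkt-1")` when sending, then feed `partial_reward("pkt-1", r)` per acknowledgment, marking the last fragment with `done=True`. The point of the structure is that the learner never blocks waiting for a complete reward, which is what makes the online, ACK-driven setting workable.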
Created from the Publication Database of the Vienna University of Technology.