

Talks and Poster Presentations (with Proceedings-Entry):

A. Hartl, M. Bachl, J. Fabini, T. Zseby:
"Explainability and Adversarial Robustness for RNNs";
Talk: The Sixth IEEE International Conference on Big Data Computing Service and Machine Learning Applications, Oxford, United Kingdom; 08-03-2020 - 08-06-2020; in: "The Sixth IEEE International Conference on Big Data Computing Service and Machine Learning Applications", IEEE, (2020), ISBN: 978-1-7281-7022-0; 148 - 156.



English abstract:
Recurrent Neural Networks (RNNs) yield attractive properties for constructing Intrusion Detection Systems (IDSs) for network data. With the rise of ubiquitous Machine Learning (ML) systems, malicious actors have been catching up quickly, finding new ways to exploit ML vulnerabilities for profit. Recently developed adversarial ML techniques focus on computer vision, and their applicability to network traffic is not straightforward: network packets expose fewer features than an image, are sequential, and impose several constraints on their features.

We show that despite these completely different characteristics, adversarial samples can be generated reliably for RNNs. To understand a classifier's potential for misclassification, we extend existing explainability techniques and propose new ones, suitable particularly for sequential data. Applying them shows that the first packets of a communication flow are already of crucial importance and are likely to be targeted by attackers. Feature importance methods show that even relatively unimportant features can be effectively abused to generate adversarial samples. Since traditional evaluation metrics such as accuracy are not sufficient for quantifying the adversarial threat, we propose the Adversarial Robustness Score (ARS) for comparing IDSs, capturing a common notion of adversarial robustness, and show that an adversarial training procedure can significantly reduce the attack surface.
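To illustrate the gradient-based adversarial perturbations the abstract refers to, the following is a minimal sketch of the widely used gradient-sign (FGSM-style) attack on a toy linear classifier. This is not the authors' method: the paper targets RNNs on network-flow features under domain constraints, whereas the model, feature vector, and epsilon below are placeholder assumptions chosen only to show the principle of shifting inputs along the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the direction that increases the loss.

    For binary cross-entropy with a linear logit w @ x + b, the gradient
    of the loss w.r.t. x is (p - y_true) * w, where p = sigmoid(logit).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)   # gradient-sign step

# Toy example (all values hypothetical): a "malicious" sample (label 1)
# that the classifier initially detects with high probability.
w = np.array([2.0, -1.0, 0.5])
b = -0.25
x = np.array([0.8, 0.1, 0.3])
p_before = sigmoid(w @ x + b)          # detection probability before attack
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.6)
p_after = sigmoid(w @ x_adv + b)       # probability after perturbation
```

In this toy setting the perturbation pushes the detection probability below the decision threshold; in the network-traffic setting studied in the paper, such perturbations must additionally respect the constraints that packet features impose.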


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1109/BigDataService49289.2020.00030

Electronic version of the publication:
https://publik.tuwien.ac.at/files/publik_289125.pdf


Created from the Publication Database of the Vienna University of Technology.