

Journal article:

F. Iglesias Vazquez, J. Milosevic, T. Zseby:
"Fuzzy classification boundaries against adversarial network attacks";
Fuzzy Sets and Systems, 368 (2018), pp. 20-35.



Abstract (English):
Adversarial machine learning is concerned with the development of methods to prevent machine learning algorithms from being misled by malicious users. This field is especially relevant for applications where machine learning lies at the core of security systems. In the field of network security, adversarial samples are actually novel network attacks or old attacks with tuned properties. This paper proposes to blur classification boundaries in order to enhance machine learning robustness and improve the detection of adversarial samples that exploit learning weaknesses. We test this concept in an experimental setup with network traffic in which linear decision trees are wrapped by a one-class-membership scoring algorithm. We benchmark our proposal against plain linear decision trees and fuzzy decision trees. Results show that evasive attacks (i.e., false negatives) tend to be ranked with low class-membership levels, meaning that they are located in zones close to classification thresholds. In addition, classification performance improves when membership scores are added as new features. Using fuzzy class boundaries is highly consistent with the interpretation of many network traffic features used for malware detection; moreover, it prevents network attackers from exploiting classification boundaries as attack objectives.
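
Illustrative sketch (not part of the publication): the abstract describes wrapping a decision tree with a one-class membership scorer, treating low-membership predictions as boundary-near (and thus potentially evasive) samples, and reusing the membership score as an additional feature. The Python sketch below shows one possible reading of that idea; the choice of scikit-learn's OneClassSVM as the scorer, the synthetic data, and the 10% quantile threshold are assumptions for illustration only and may differ from the paper's actual setup.

# Sketch: decision tree wrapped by a one-class membership scorer.
# All parameter choices below are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import OneClassSVM

# Synthetic stand-in for network traffic features (class 0 = benign, 1 = attack).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Plain (crisp) decision tree baseline.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

# 2) One-class scorer fitted on benign training traffic only;
#    higher decision_function values mean "more typical benign traffic".
scorer = OneClassSVM(gamma="scale", nu=0.1).fit(X_tr[y_tr == 0])
membership = scorer.decision_function(X_te)

# 3) Samples the tree labels benign but that receive a low membership score
#    sit near the (blurred) class boundary and deserve further inspection.
pred = tree.predict(X_te)
suspicious = (pred == 0) & (membership < np.quantile(membership, 0.1))
print(f"{suspicious.sum()} benign-labelled samples flagged as low-membership")

# 4) Membership score reused as an extra feature, as the abstract suggests.
X_tr_aug = np.hstack([X_tr, scorer.decision_function(X_tr)[:, None]])
X_te_aug = np.hstack([X_te, membership[:, None]])
tree_aug = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr_aug, y_tr)
print("accuracy with membership feature:", tree_aug.score(X_te_aug, y_te))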

Keywords:
learning; fuzzy system models; data analysis; adversarial machine learning; network security


"Offizielle" elektronische Version der Publikation (entsprechend ihrem Digital Object Identifier - DOI)
http://dx.doi.org/10.1016/j.fss.2018.11.004

Electronic version of the publication:
https://www.sciencedirect.com/science/article/pii/S0165011418308716?via%3Dihub


Generated from the publication database of the Technische Universität Wien.