Talks and Poster Presentations (with Proceedings-Entry):

A. Marchisio, G. Pira, M. Martina, G. Masera, M. Shafique:
"DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks";
Talk: 2021 International Joint Conference on Neural Networks, Virtual Conference; 2021-07-18 - 2021-07-22; in: "Proceedings of the 2021 International Joint Conference on Neural Networks", (2021).



English abstract:
Spiking Neural Networks (SNNs), despite being energy-efficient when implemented on neuromorphic hardware and coupled with event-based Dynamic Vision Sensors (DVS), are vulnerable to security threats, such as adversarial attacks, i.e., small perturbations added to the input to induce a misclassification. To this end, we propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies designed to perturb the event sequences that form the input of SNNs. First, we show that noise filters for DVS can be used as defense mechanisms against adversarial attacks. Afterwards, we implement several attacks and test them in the presence of two types of noise filters for DVS cameras. The experimental results show that the filters can only partially defend the SNNs against our proposed DVS-Attacks. Using the best settings for the noise filters, our proposed Mask Filter-Aware Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset and by more than 65% on the MNIST dataset, compared to the original clean frames. The source code of all the proposed DVS-Attacks and noise filters is released at https://github.com/albertomarchisio/DVS-Attacks.
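For readers unfamiliar with event-based data, the following is a minimal, self-contained sketch of the two ingredients the abstract refers to: a perturbation that injects spurious events into a DVS event stream, and a simple background-activity noise filter that suppresses isolated events. The function names (inject_random_events, background_activity_filter), the event layout (EVENT_DTYPE), and parameters such as dt_us are illustrative assumptions; this is not one of the paper's DVS-Attacks nor its exact filters, whose reference implementations are in the linked repository.

    import numpy as np

    # Structured dtype often used for DVS event streams: pixel coordinates,
    # timestamp in microseconds, and polarity (0 = OFF, 1 = ON).
    EVENT_DTYPE = np.dtype([("x", np.uint16), ("y", np.uint16),
                            ("t", np.int64), ("p", np.uint8)])

    def inject_random_events(events, n_noise, width=128, height=128, seed=0):
        """Append n_noise spurious events drawn uniformly over space, time, and
        polarity, then re-sort the stream by timestamp. Purely illustrative;
        not one of the paper's DVS-Attacks."""
        rng = np.random.default_rng(seed)
        noise = np.zeros(n_noise, dtype=events.dtype)
        noise["x"] = rng.integers(0, width, n_noise)
        noise["y"] = rng.integers(0, height, n_noise)
        noise["t"] = rng.integers(events["t"].min(), events["t"].max() + 1, n_noise)
        noise["p"] = rng.integers(0, 2, n_noise)
        perturbed = np.concatenate([events, noise])
        return perturbed[np.argsort(perturbed["t"], kind="stable")]

    def background_activity_filter(events, width=128, height=128, dt_us=5000):
        """Keep an event only if one of its 8 spatial neighbours fired within
        the last dt_us microseconds; isolated events are treated as noise and
        dropped. A generic background-activity filter, not the paper's filters."""
        last_ts = np.full((height + 2, width + 2), -np.inf)  # 1-pixel border padding
        keep = np.zeros(len(events), dtype=bool)
        for i, ev in enumerate(events):
            x, y, t = int(ev["x"]) + 1, int(ev["y"]) + 1, float(ev["t"])
            window = last_ts[y - 1:y + 2, x - 1:x + 2].copy()
            window[1, 1] = -np.inf            # ignore the pixel's own history
            keep[i] = (t - window.max()) <= dt_us
            last_ts[y, x] = t
        return events[keep]

    if __name__ == "__main__":
        # Toy event stream: a correlated burst alternating between two adjacent pixels.
        clean = np.zeros(50, dtype=EVENT_DTYPE)
        clean["x"] = 64 + (np.arange(50) % 2)
        clean["y"] = 64
        clean["t"] = np.arange(50) * 1000      # one event per millisecond
        clean["p"] = 1
        attacked = inject_random_events(clean, n_noise=200)
        filtered = background_activity_filter(attacked)
        print(len(clean), len(attacked), len(filtered))

In this toy run, most of the correlated clean events survive the filter while most of the uniformly injected events are removed, which mirrors the defensive role the abstract ascribes to DVS noise filters; the paper's filter-aware attacks are designed to remain effective even after such filtering.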


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1109/IJCNN52387.2021.9534364


Created from the Publication Database of the Vienna University of Technology.