

Talks and poster presentations (with proceedings entry):

S. Grünbacher, R. Hasani, M. Lechner, J. Cyranka, S. Smolka, R. Grosu:
"On The Verification of Neural ODEs with Stochastic Guarantees";
Talk: 35th AAAI Conference on Artificial Intelligence (AAAI-21), Online; 02.02.2021 - 09.02.2021; in: "Proceedings of the AAAI Conference on Artificial Intelligence", Vol. 35 No. 13: AAAI-21 Technical Tracks 13 (2021), pp. 11525 - 11535.



Abstract (English):
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time-horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (accumulation of over-approximation errors) by performing local optimization steps to expand safe regions instead of repeatedly forward-propagating them as is done by deterministic reachability methods. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method to compute gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
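
The following is a minimal, illustrative sketch only, not the SLR method from the paper: it shows how directional sensitivities of a toy Neural ODE can be obtained with forward-mode differentiation (here via jax.jvp) in a single forward pass, i.e. without backpropagation. The right-hand side, the explicit Euler integrator, and all parameter shapes are assumptions chosen for brevity.

```python
# Illustrative sketch: forward-mode sensitivity of a toy Neural ODE via jax.jvp.
# This is NOT the paper's Stochastic Lagrangian Reachability algorithm; it only
# demonstrates computing gradients without a backward pass.
import jax
import jax.numpy as jnp

def neural_ode_rhs(params, x):
    """Toy right-hand side f(x) = W2 @ tanh(W1 @ x)."""
    W1, W2 = params
    return W2 @ jnp.tanh(W1 @ x)

def flow(params, x0, t_end=1.0, steps=100):
    """Integrate dx/dt = f(x) with explicit Euler over [0, t_end]."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x = x + dt * neural_ode_rhs(params, x)
    return x

# Random toy parameters (hypothetical, for illustration only).
W1 = jax.random.normal(jax.random.PRNGKey(0), (4, 2)) * 0.5
W2 = jax.random.normal(jax.random.PRNGKey(1), (2, 4)) * 0.5
params = (W1, W2)
x0 = jnp.array([1.0, 0.0])

# Forward mode: propagate a tangent direction v alongside the state.
# jax.jvp returns (flow(x0), d flow / d x0 @ v) in one forward sweep,
# with no stored intermediate states for a backward pass.
v = jnp.array([1.0, 0.0])
x_final, sensitivity = jax.jvp(lambda x: flow(params, x), (x0,), (v,))
print("final state:", x_final)
print("directional sensitivity:", sensitivity)
```

Such per-direction sensitivities are the kind of gradient information a local optimization step over a reachtube segment could use; the actual reachability bounds and confidence intervals of SLR are beyond this sketch.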

Generated from the publication database of the Technische Universität Wien.