

Talks and Poster Presentations (with Proceedings-Entry):

S. Grünbacher, R. Hasani, M. Lechner, J. Cyranka, S. Smolka, R. Grosu:
"On The Verification of Neural ODEs with Stochastic Guarantees";
Talk: 35th AAAI Conference on Artificial Intelligence (AAAI-21), Online; 2021-02-02 - 2021-02-09; in: "Proceedings of the AAAI Conference on Artificial Intelligence", Vol. 35, No. 13: AAAI-21 Technical Tracks 13 (2021), 11525-11535.



English abstract:
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time-horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (accumulation of over-approximation errors) by performing local optimization steps to expand safe regions instead of repeatedly forward-propagating them as is done by deterministic reachability methods. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method to compute gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
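To illustrate the forward-mode sensitivity idea mentioned in the abstract (computing gradients of the ODE flow without backpropagation), the following is a minimal sketch, not the authors' implementation: it integrates an assumed toy Neural-ODE vector field with a fixed-step RK4 solver and uses JAX's jvp to propagate a tangent direction alongside the state. The vector field, solver, step count, and dimensions are illustrative assumptions.

import jax
import jax.numpy as jnp

def vector_field(x, t, params):
    """Toy time-continuous network: a single tanh layer (assumed for illustration)."""
    W, b = params
    return jnp.tanh(W @ x + b)

def rk4_flow(x0, params, t0=0.0, t1=1.0, steps=50):
    """Fixed-step RK4 integration of the Neural ODE from t0 to t1."""
    dt = (t1 - t0) / steps
    def step(x, t):
        k1 = vector_field(x, t, params)
        k2 = vector_field(x + 0.5 * dt * k1, t + 0.5 * dt, params)
        k3 = vector_field(x + 0.5 * dt * k2, t + 0.5 * dt, params)
        k4 = vector_field(x + dt * k3, t + dt, params)
        return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4), None
    ts = t0 + dt * jnp.arange(steps)
    x1, _ = jax.lax.scan(step, x0, ts)
    return x1

key = jax.random.PRNGKey(0)
dim = 3
W = jax.random.normal(key, (dim, dim)) / jnp.sqrt(dim)
b = jnp.zeros(dim)
params = (W, b)

x0 = jnp.array([0.5, -0.2, 0.1])        # centre of an initial set
direction = jnp.array([1.0, 0.0, 0.0])  # perturbation direction of the initial state

# Forward mode: the tangent `direction` is propagated together with the state, so the
# directional derivative of the final state is obtained in a single forward pass,
# with no reverse sweep through the solver.
x1, dx1 = jax.jvp(lambda x: rk4_flow(x, params), (x0,), (direction,))
print("flow(x0) =", x1)
print("directional sensitivity of flow(x0) =", dx1)

In this sketch the directional sensitivities play the role that gradients play in the local optimization steps described above: they quantify how the endpoint of a trajectory moves when the initial state is perturbed, which is the kind of information needed to bound a Reachtube around the nominal trajectory.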
