

Talks and Poster Presentations (with Proceedings-Entry):

M. Lechner, R. Hasani, D. Rus, R. Grosu:
"Gershgorin Loss Stabilizes the Recurrent Neural Network Compartment of an End-to-end Robot Learning Scheme";
Talk: 2020 IEEE International Conference on Robotics and Automation (ICRA), Virtual - Paris France; 2020-05-31; in: "2020 IEEE International Conference on Robotics and Automation (ICRA)", (2020), 5446 - 5452.



English abstract:
Traditional robotic control suites require profound task-specific knowledge for designing, building, and testing control software. The rise of Deep Learning has enabled end-to-end solutions to be learned entirely from data, requiring minimal knowledge about the application area. We design a learning scheme to train end-to-end linear dynamical systems (LDSs) by gradient descent in imitation-learning robotic domains. We introduce a new regularization loss component, together with a learning algorithm, that improves the stability of the learned autonomous system by forcing the eigenvalues of the internal state updates of an LDS to be negative reals.
We evaluate our approach on a series of real-life and simulated robotic experiments, in comparison to linear and nonlinear Recurrent Neural Network (RNN) architectures.
Our results show that our stabilizing method significantly improves the test performance of LDSs, enabling such linear models to match the performance of contemporary nonlinear RNN architectures.
A video of the obstacle avoidance performance of our method on a mobile robot, in unseen environments, compared to other methods can be viewed at https://youtu.be/mhEsCoNao5E.
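The abstract describes a regularization loss that constrains the eigenvalues of the LDS state-update matrix via the Gershgorin circle theorem: every eigenvalue of a matrix A lies in some disc centered at a diagonal entry a_ii with radius equal to the sum of the absolute off-diagonal entries in that row, so pushing every disc into the left half-plane guarantees negative real parts. The sketch below is not the paper's exact loss; it is a minimal, hypothetical numpy illustration of such a Gershgorin-based hinge penalty, with the `margin` parameter being an assumption for illustration.

```python
import numpy as np

def gershgorin_penalty(A, margin=0.0):
    """Hinge penalty that is zero when every Gershgorin disc of A lies
    strictly in the left half-plane, i.e. a_ii + sum_{j != i} |a_ij| <= -margin
    for every row i. By the Gershgorin circle theorem, this condition
    guarantees that all eigenvalues of A have negative real part,
    which stabilizes the continuous-time linear system x' = A x.
    """
    diag = np.diag(A)
    # Radius of each disc: row-wise sum of absolute off-diagonal entries.
    radii = np.sum(np.abs(A), axis=1) - np.abs(diag)
    # Penalize the rightmost extent of each disc that crosses -margin.
    return np.sum(np.maximum(diag + radii + margin, 0.0))

# Example: a matrix with strictly diagonally dominant negative diagonal
# incurs no penalty, while one with a positive diagonal entry does.
A_stable = np.array([[-2.0, 0.5],
                     [0.3, -3.0]])
A_unstable = np.array([[1.0, 0.0],
                       [0.0, -1.0]])
print(gershgorin_penalty(A_stable))    # 0.0
print(gershgorin_penalty(A_unstable))  # 1.0
```

In a training loop, such a term would be added to the imitation loss and minimized jointly by gradient descent, biasing the learned state matrix toward provably stable dynamics.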

Keywords:
dynamical systems learning, learn to control robots, continuous-time recurrent neural networks


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1109/ICRA40945.2020.9196608

Electronic version of the publication:
https://publik.tuwien.ac.at/files/publik_292274.pdf