

Journal article:

M. Lechner, R. Hasani, A. Amini, T. Henzinger, D. Rus, R. Grosu:
"Neural Circuit Policies Enabling Auditable Autonomy";
Nature Machine Intelligence, 2 (2020), pp. 642-652.



Abstract (English):
A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics. Here, we combine brain-inspired neural computation principles and scalable deep learning architectures to design compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. We discover that a single algorithm with 19 control neurons, connecting 32 encapsulated input features to outputs by 253 synapses, learns to map high-dimensional inputs into steering commands. This system shows superior generalizability, interpretability and robustness compared with orders-of-magnitude larger black-box learning systems. The obtained neural agents enable high-fidelity autonomy for task-specific parts of a complex autonomous system.
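
The headline numbers in the abstract (32 encapsulated input features, 19 control neurons, 253 synapses, a single steering output) describe a compact, sparsely wired recurrent controller. The following minimal PyTorch sketch is an illustration only, not the authors' published NCP/LTC implementation: it mirrors the input/neuron/output sizes and the sparse-connectivity idea, while the random wiring masks, the leaky update rule and all class and parameter names are hypothetical placeholders.

# Illustrative sketch only (not the published NCP/LTC model): a tiny recurrent
# policy with fixed sparse wiring masks, mapping 32 input features through
# 19 hidden neurons to one steering command, matching the sizes quoted above.
import torch
import torch.nn as nn

class SparseRecurrentPolicy(nn.Module):
    def __init__(self, n_inputs=32, n_neurons=19, n_outputs=1, density=0.25, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Random sparse masks stand in for the structured neural-circuit wiring.
        self.register_buffer("in_mask", (torch.rand(n_neurons, n_inputs, generator=g) < density).float())
        self.register_buffer("rec_mask", (torch.rand(n_neurons, n_neurons, generator=g) < density).float())
        self.w_in = nn.Parameter(torch.randn(n_neurons, n_inputs) * 0.1)
        self.w_rec = nn.Parameter(torch.randn(n_neurons, n_neurons) * 0.1)
        self.tau = nn.Parameter(torch.ones(n_neurons))   # per-neuron time constants
        self.readout = nn.Linear(n_neurons, n_outputs)   # steering command head

    def step(self, x, h, dt=0.1):
        # Leaky continuous-time-style update with masked (sparse) connectivity.
        pre = x @ (self.w_in * self.in_mask).T + h @ (self.w_rec * self.rec_mask).T
        dh = (-h + torch.tanh(pre)) / torch.clamp(self.tau, min=1e-2)
        return h + dt * dh

    def forward(self, features):                         # features: (B, T, 32)
        B, T, _ = features.shape
        h = torch.zeros(B, self.w_rec.shape[0], device=features.device)
        steering = []
        for t in range(T):
            h = self.step(features[:, t], h)
            steering.append(self.readout(h))
        return torch.stack(steering, dim=1)              # (B, T, 1)

if __name__ == "__main__":
    policy = SparseRecurrentPolicy()
    cmds = policy(torch.randn(4, 16, 32))                # batch of 4 sequences, 16 time steps
    print(cmds.shape)                                    # torch.Size([4, 16, 1])

In the published work, the wiring follows a layered sensory/inter/command/motor structure and the neurons use liquid time-constant dynamics; the sketch above only conveys the idea of a very small, sparsely connected recurrent policy driving the steering output.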


"Offizielle" elektronische Version der Publikation (entsprechend ihrem Digital Object Identifier - DOI)
http://dx.doi.org/10.1038/s42256-020-00237-3

Electronic version of the publication:
https://publik.tuwien.ac.at/files/publik_292280.pdf


Generated from the publication database of the Technische Universität Wien.