

Talks and poster presentations (with proceedings entry):

M. Lechner, R. Hasani, R. Grosu:
"Interpretable Neuronal Circuit Policies for Reinforcement Learning Environments";
Talk: Workshop on Explainable Artificial Intelligence (XAI 2018) at IJCAI-ECAI 2018, Stockholm, Sweden; 13.07.2018 - 18.07.2018; in: "Proceedings of the 2nd Workshop on Explainable Artificial Intelligence", IJCAI-ECAI 2018, (2018), pp. 79 - 84.



English abstract:
We propose an effective way to create interpretable control agents by re-purposing a biological neural circuit model to govern simulated and real-world reinforcement learning (RL) test-beds.
We model the tap-withdrawal (TW) neural circuit of the nematode C. elegans, the circuit responsible for the worm's reflexive response to external mechanical touch stimuli, and learn its synaptic and neuronal parameters as a policy for controlling basic RL tasks. We also autonomously park a real rover robot on a pre-defined trajectory by deploying such neuronal circuit policies learned in a simulated environment. To re-purpose the TW neural circuit, we adopt a search-based RL algorithm. We show that our neuronal policies perform as well as deep neural network policies, with the advantage of realizing interpretable dynamics at the cell level.
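The search-based RL scheme mentioned in the abstract can be illustrated by a minimal random-search loop that perturbs the parameters of a fixed-structure policy and keeps only improving perturbations. The toy one-neuron policy and the 1-D environment below are purely illustrative assumptions, not the paper's TW circuit or its actual test-beds:

```python
import numpy as np

# Hypothetical toy stand-in for an RL test-bed: a 1-D "drive the state to
# the origin" task. Observation = position, action = velocity command.
def rollout(params, steps=50):
    w, b = params
    pos, total_reward = 1.0, 0.0
    for _ in range(steps):
        action = np.tanh(w * pos + b)  # a single "neuron" with learnable weight/bias
        pos += -0.1 * action           # assumed environment dynamics
        total_reward += -pos ** 2      # reward: stay close to the origin
    return total_reward

# Simple search-based optimization: Gaussian perturbations of the current
# best parameter vector, accepted only when the episodic return improves.
def random_search(iters=200, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    best_params = rng.normal(size=2)
    best_return = rollout(best_params)
    for _ in range(iters):
        candidate = best_params + sigma * rng.normal(size=2)
        r = rollout(candidate)
        if r > best_return:
            best_params, best_return = candidate, r
    return best_params, best_return
```

Because the structure of the policy stays fixed and only its (few) parameters are searched, the learned controller remains as inspectable as the underlying circuit model, which is the property the abstract emphasizes.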

Generated from the publication database of TU Wien (Vienna University of Technology).