S. von der Thannen:

"Bestärkendes Lernen in agentenbasierter Modellierung" (Reinforcement Learning in Agent-Based Modelling);

Supervisors: F. Breitenecker, N. Popper; Analysis and Scientific Computing, 2018; final examination: 2018-11-22.

In recent years, huge progress has been made in machine learning using neural networks as function approximators. In reinforcement learning especially, extensive research is ongoing and a number of breakthroughs have been achieved (e.g. playing Atari games and AlphaGo by Google DeepMind). Most of these problems involve a single agent placed in an environment, where it has to figure out how to act optimally based on the rewards it receives for each action.
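In the standard textbook formulation of this setting (general background, not notation taken from the thesis itself), the agent seeks a policy that maximises its expected discounted return:

```latex
G_t = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1},
\qquad
\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{\pi}\left[ G_t \mid s_t \right],
```

where $r_{t+k+1}$ is the reward received at each step, $\gamma \in [0, 1)$ is a discount factor, and $\pi^{*}$ is the optimal policy given the current state $s_t$.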

Using these techniques, the thesis aims to develop a general framework for agent-based modelling using reinforcement learning and to evaluate the results on a predator-prey model against established approaches such as the Lotka-Volterra equations or rule-based models.
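For reference, the classical Lotka-Volterra equations mentioned as a baseline describe the coupled dynamics of a prey population $x$ and a predator population $y$:

```latex
\frac{dx}{dt} = \alpha x - \beta x y,
\qquad
\frac{dy}{dt} = \delta x y - \gamma y,
```

where $\alpha, \beta, \gamma, \delta > 0$ are model parameters: $\alpha$ is the prey growth rate, $\beta$ the predation rate, $\delta$ the predator reproduction rate per prey eaten, and $\gamma$ the predator death rate.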

Some models require classifying agents into groups, as in the predator-prey model; each group of agents then demands its own reward function in order to find its optimal policy. This policy function, which is approximated by a neural network, advises the agent on the best action to take, with the goal of maximising the agent's expected total future reward based on its current state. Compared to conventional agent-based models, this approach can simplify the modelling process while reducing the model's bias, since hard-coded behavioural rules are replaced by a reward function.
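A minimal sketch of this structure, assuming one small policy network and one reward function per agent group (all names, shapes, and reward terms here are illustrative placeholders, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax: shift by the max before exponentiating.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class GroupPolicy:
    """One-layer policy network for a single agent group (illustrative)."""

    def __init__(self, n_state, n_actions):
        # Random initial weights; a real setup would train these,
        # e.g. with a policy-gradient method.
        self.W = rng.normal(scale=0.1, size=(n_actions, n_state))

    def action_probs(self, state):
        # Map the agent's state vector to a probability over actions.
        return softmax(self.W @ state)

    def act(self, state):
        p = self.action_probs(state)
        return rng.choice(len(p), p=p)

# Group-specific reward functions replace hard-coded behavioural rules
# (the weights below are made-up examples):
def predator_reward(ate_prey, energy_cost):
    return 1.0 * ate_prey - 0.1 * energy_cost

def prey_reward(survived, was_eaten):
    return 0.1 * survived - 1.0 * was_eaten

# One policy per group; here only the predator policy is instantiated.
policy = GroupPolicy(n_state=4, n_actions=3)
state = np.array([0.5, -0.2, 0.1, 0.9])
probs = policy.action_probs(state)   # sums to 1 (up to floating point)
action = policy.act(state)
```

The design point this sketch mirrors is that each group's behaviour emerges from training against its own reward signal, rather than from explicit if-then rules.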

This thesis explores different approaches to finding both a meaningful reward function and good parameters to ensure global convergence when modelling complex interactions between agents in an environment.

Reinforcement learning / Neural networks / Agent-based modelling

Created from the Publication Database of the Vienna University of Technology.