Alessandro Abate
Thu 16 May 2019, 11:00 - 12:00
IF 4.31/4.33

If you have a question about this talk, please contact: Gareth Beedham (gbeedham)

Speaker: Alessandro Abate, U. Oxford CS


Title: Certified Reinforcement Learning with Logic Guidance

Abstract: A model-free Reinforcement Learning (RL) framework is proposed to synthesise policies for an unknown, and possibly continuous-state, Markov Decision Process (MDP), such that a given linear temporal logic (LTL) property is satisfied.

We convert the given property into an automaton, namely a finite-state machine that precisely captures it. Exploiting the structure of the automaton, we shape an adaptive reward function on the fly, so that the RL algorithm can synthesise a policy whose traces probabilistically satisfy the LTL property.
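
To make the reward-shaping mechanism concrete, here is a minimal sketch of logic-guided Q-learning on a finite MDP. It is an illustration of the general idea, not the speaker's implementation: the learner acts on the product of the environment state and the automaton state, and is rewarded only when the automaton reaches an accepting state. The environment interface (env.reset, env.step), the labelling function, and all constants are hypothetical placeholders.

    # Illustrative sketch only: tabular Q-learning guided by a property automaton.
    # env, labels, and the Automaton fields are hypothetical placeholders, not
    # the interface of the framework presented in the talk.
    import random
    from collections import defaultdict

    class Automaton:
        """Finite-state machine encoding the temporal property (assumed given)."""
        def __init__(self, initial, accepting, delta):
            self.initial = initial      # initial automaton state
            self.accepting = accepting  # set of accepting states
            self.delta = delta          # dict: (state, label) -> next state

    def logic_guided_q_learning(env, aut, labels, actions,
                                episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
        Q = defaultdict(float)
        for _ in range(episodes):
            s, q = env.reset(), aut.initial
            done = False
            while not done:
                # epsilon-greedy action choice over the product state (s, q)
                if random.random() < eps:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda a_: Q[(s, q, a_)])
                s2, done = env.step(a)            # model-free: we only sample
                q2 = aut.delta[(q, labels(s2))]   # advance the automaton
                r = 1.0 if q2 in aut.accepting else 0.0  # automaton-shaped reward
                best = max(Q[(s2, q2, a_)] for a_ in actions)
                Q[(s, q, a)] += alpha * (r + gamma * best - Q[(s, q, a)])
                s, q = s2, q2
        return Q

Greedily following Q over the product states then yields a policy biased towards runs that the automaton accepts; the framework presented in the talk refines this idea with an adaptive reward and provides convergence guarantees in the finite-state case.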

Under the assumption that the MDP has a finite number of states, theoretical guarantees are provided on the convergence of the RL algorithm. Whenever the MDP has a continuous state space, we empirically show that our framework finds satisfying policies, if they exist. Additionally, the proposed algorithm can handle time-varying periodic environments.

The performance of the proposed architecture is evaluated via a set of numerical examples and benchmarks, where we observe an improvement of one order of magnitude in the number of iterations required for policy synthesis, compared to existing approaches (where available).