Faculty of Science and Technology

Artificial Intelligence

Campus: Mt Helen

Discipline: Information Technology

Research field: Artificial Intelligence

Key words: reinforcement learning, multiobjective decision making, safe artificial intelligence

Supervisor(s): The Federation Learning Agents Group (FLAG) – Peter Vamplew, Richard Dazeley, Cameron Foale, Dean Webb

Contact details:

Email: p.vamplew@federation.edu.au

Phone (optional): 5327 9616

Brief Supervisor Bio

FLAG carries out research in the area of reinforcement learning (RL), a form of artificial intelligence in which an autonomous agent learns to carry out a sequential decision-making task, with its behaviour guided by a reward (for example, learning to play a game like Go with a reward of +1 for a win and -1 for a loss). FLAG is a world-leading group in the field of multiobjective RL (MORL), which applies RL to problems with multiple, conflicting objectives. We also have expertise in assisted RL, where the agent can receive additional advice or feedback from a human, and an interest in applying MORL to the creation of AI agents with safety constraints that avoid negative outcomes.
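As a purely illustrative sketch of the kind of reward-guided learning described above, the Python fragment below shows a standard tabular Q-learning loop for a win/loss game with a reward of +1 or -1 only at the end of an episode; the environment interface (reset, step, actions) is a hypothetical stand-in rather than any particular library's API.

    # Minimal illustrative sketch: tabular Q-learning on a win/loss game,
    # where the only reward is +1 for a win and -1 for a loss.
    # `env` is a hypothetical environment with reset/step/actions methods.
    from collections import defaultdict
    import random

    def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = defaultdict(float)                     # Q[(state, action)] -> value estimate
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # epsilon-greedy action selection, guided solely by the learned values
                if random.random() < epsilon:
                    action = random.choice(env.actions(state))
                else:
                    action = max(env.actions(state), key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)   # reward is 0 until the game ends
                # one-step temporal-difference update toward reward + discounted future value
                target = reward if done else reward + gamma * max(
                    Q[(next_state, a)] for a in env.actions(next_state))
                Q[(state, action)] += alpha * (target - Q[(state, action)])
                state = next_state
        return Q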

Project title:

For 2018, we propose the following projects related to AI safety with MORL.

  1. Robustness to reward misspecification

One concern raised about reinforcement learning methods is that the behaviour of the agent may be unsafe, or at least sub-optimal, if the reward function specified by the system designer is not accurately aligned with the designer’s actual goals (Amodei et al., 2016). Several authors have reported examples of such failures, indicating that robustness to reward misspecification is a very desirable property for an RL agent. We have recently received seed funding to use Amazon’s Mechanical Turk service to gather multiple reward specifications for a moderately complex RL environment such as Mario RL. This project will use that crowd-sourced data as the basis for investigating the robustness of MORL to reward misspecification.

We propose to first evaluate the performance (in terms of in-game score) of single-objective reinforcement learning agents trained using each of the crowd-sourced definitions of reward. We anticipate that a significant proportion of these definitions will prove to be misspecified, and that the corresponding agents will score poorly. These results will then be compared against those of a multiobjective RL agent which receives feedback from several different reward specifications and applies some form of non-linear combination of those rewards, such as averaging after discarding outliers (sketched below). We hypothesise that the agent based on this ‘ensemble of rewards’ will outperform the agents trained independently on each reward specification.
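As a purely illustrative Python sketch (not the project’s actual implementation), one such non-linear combination is a trimmed mean of the crowd-sourced reward signals for each transition; the reward values shown are hypothetical.

    # Illustrative sketch only: an 'ensemble of rewards' combined by averaging
    # after discarding the highest and lowest values (a trimmed mean). The
    # reward values are hypothetical stand-ins for crowd-sourced specifications.
    import numpy as np

    def trimmed_mean_reward(reward_vector, trim=1):
        """Combine one step's rewards from several specifications into a scalar."""
        r = np.sort(np.asarray(reward_vector, dtype=float))
        if len(r) > 2 * trim:
            r = r[trim:len(r) - trim]           # drop outliers at both ends
        return r.mean()

    # Example: five crowd-sourced reward signals for the same transition,
    # one of which is badly misaligned with the designer's intent.
    step_rewards = [0.0, 1.0, 0.9, 1.1, 100.0]
    print(trimmed_mean_reward(step_rewards))    # -> 1.0, the outlier is ignored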

  2. Mild optimisation using multiobjective RL

Another concern raised in the literature about the safety of RL is that it focuses exclusively on maximising the reward function, ignoring any environmental factors not included in that reward (Omohundro, 2008; Bostrom, 2014; Taylor, 2016). This project will apply multiobjective RL methods to such problems and empirically evaluate their effectiveness. Each factor of interest can be defined as a separate component of a vector-valued reward, and a multiobjective RL agent can then apply a non-linear action selection mechanism to select the most appropriate action, taking all factors into account. For example, the agent might threshold the objectives and then apply a lexicographic ordering, enabling it to maximise the primary objective subject to meeting minimum performance constraints on the other objectives (see the sketch below).
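As a purely illustrative Python sketch (not the group’s implementation) of thresholded lexicographic action selection, the fragment below assumes hypothetical per-objective value estimates for each action and user-supplied thresholds on the secondary objectives.

    # Illustrative sketch: thresholded lexicographic action selection over
    # vector-valued estimates. q_values[a] is a hypothetical tuple
    # (primary, secondary_1, ..., secondary_k) for action a; thresholds gives
    # the minimum acceptable value for each secondary objective.
    def select_action(q_values, thresholds):
        # keep only actions whose secondary objectives all meet their constraints
        feasible = [a for a, q in q_values.items()
                    if all(q[i + 1] >= t for i, t in enumerate(thresholds))]
        candidates = feasible if feasible else list(q_values)   # fall back if none are feasible
        # among the feasible actions, maximise the primary objective
        return max(candidates, key=lambda a: q_values[a][0])

    # Example: action 'b' maximises the primary objective but violates the
    # safety threshold on the secondary objective, so 'c' is chosen instead.
    q = {'a': (5.0, 0.9), 'b': (9.0, 0.1), 'c': (6.0, 0.8)}
    print(select_action(q, thresholds=[0.5]))   # -> 'c'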

We are also open to supervising projects related to other areas of RL, or other aspects of machine learning such as deep neural networks.