My name is Robby Costales and I conduct research on artificial intelligence (AI), with the long-term aim of developing generally capable AI systems for the benefit of humanity and other sentient creatures. I am a computer science PhD student at the University of Southern California, advised by Prof. Stefanos Nikolaidis of the ICAROS lab.

My research focuses on improving the ability of reinforcement learning agents to generalize and adapt across complex task distributions by introducing structural inductive biases, with the aim of reducing the heavy dependence on human supervision that current AI methods require.


Experience

University of Southern California (Viterbi School of Engineering)

Ph.D. in Computer Science (in progress)

Google Research (Brain Team)

Student Researcher (2022)

Columbia University (Fu Foundation School of Engineering)

B.S. in Computer Science - Intelligent Systems (2020)

Bard College at Simon's Rock

A.A. (2017) and B.A. in Computer Science (2020)


Publications

ALMA: Hierarchical Learning for Composite Multi-Agent Tasks [Paper] [Code]

S Iqbal, R Costales, F Sha

Neural Information Processing Systems (NeurIPS) 2022

A general learning method that exploits the structure of composite multi-agent tasks, yielding sophisticated coordination behavior and outperforming competitive MARL baselines.

Possibility Before Utility: Learning And Using Hierarchical Affordances [Paper] [Code]

R Costales, S Iqbal, F Sha

International Conference on Learning Representations (ICLR) 2022 (Spotlight)

A hierarchical reinforcement learning (HRL) approach that learns a model of affordances to prune impossible subtasks for more effective learning.

Live Trojan Attacks on Deep Neural Networks [Paper] [Code]

R Costales, C Mao, R Norwitz, B Kim, J Yang

IEEE/CVF CVPR 2020 Workshop

A live attack on deep learning systems that patches a minimal set of model parameters in memory to induce malicious behavior.