
Alessandro Abate

ml: reinforcement learning algorithms

stability

switched systems

ml: lifelong and continual learning

ml: applications

prs: planning under uncertainty

safe ai

ru: sequential decision making

srai: safe decision making under uncertainty

srai: formal methods for ai systems

srai: robust ai systems

srai: safe ai systems

srai: safe control

prs: planning with markov models (mdps, pomdps)

5 presentations

13 views

SHORT BIO

Alessandro Abate is a Professor of Verification and Control (formerly, Associate Professor) in the Department of Computer Science at the University of Oxford, where he is also Deputy Head of Department. He is a Fellow and Tutor at St Hugh's College, Oxford, a Faculty Fellow at the Alan Turing Institute in London, and an IEEE Fellow. Born in Milan in 1978, he grew up in Padua and received a Laurea degree in Electrical Engineering (summa cum laude) from the University of Padua in October 2002. As an undergraduate he also studied at UC Berkeley and RWTH Aachen. He earned an MS in May 2004 and a PhD in December 2007, both in Electrical Engineering and Computer Sciences, at UC Berkeley, working on systems and control theory with S. Sastry. During that time he was also an International Fellow in the CS Lab at SRI International in Menlo Park, CA. Thereafter, he was a postdoctoral researcher in the Department of Aeronautics and Astronautics at Stanford University, working with C. Tomlin on systems biology in affiliation with the Stanford School of Medicine. From June 2009 to mid-2013 he was an Assistant Professor at the Delft Center for Systems and Control, TU Delft (Delft University of Technology), where his research group worked on verification and control of complex systems.

His research interests lie in the analysis, formal verification, and control theory of heterogeneous and complex dynamical models, in particular stochastic hybrid systems, and in their applications to cyber-physical systems (particularly safety-critical applications, energy, and biological networks). His work blends techniques from machine learning and AI, such as Bayesian inference, reinforcement learning, and game theory.

Presentations

Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis

Rohan Mitta and 5 other authors

Stability Analysis of Switched Linear Systems with Neural Lyapunov Functions

Virginie Debauche and 3 other authors

Low Emission Building Control with Zero-Shot Reinforcement Learning

Scott Jeen and 2 other authors

Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty

Thom S. Badings and 3 other authors