Research
My research broadly covers the intersection of programming languages and machine learning. I am most interested in developing techniques for proving the safety of systems with machine learning components. My early work in this area used abstraction refinement to prove local robustness properties (PLDI'19). More recently, I have incorporated ideas from PL research to develop a framework for deep reinforcement learning with formally guaranteed safety (NeurIPS'20 and ICLR'23).
I have also done some work on using machine learning to improve program analysis and program synthesis tools. For example, the aforementioned PLDI paper uses machine learning to guide abstraction refinement. Before that, I worked on a system that uses machine learning to automatically learn appropriate predicates for a synthesis tool based on predicate abstraction (CAV'18).