I am an assistant professor in the computer science department at Reed College. My research focuses on the intersection of program analysis and machine learning, with an eye toward ensuring the safety of machine learning systems. Most recently, my work has explored the use of neurosymbolic programming to develop agents that can interact safely with their environment.

Prior to joining Reed, I earned my PhD at UT Austin, advised by Isil Dillig and Swarat Chaudhuri.

Research

My research broadly covers the intersection of programming languages and machine learning. In particular, I am most interested in developing techniques for proving the safety of systems with machine learning components. My early work in this domain used abstraction refinement to prove local robustness properties of neural networks (PLDI'19). More recently, I have drawn on ideas from PL research to develop a framework for deep reinforcement learning with formally guaranteed safety (NeurIPS'20 and ICLR'23) and robustness (SaTML'24).
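For readers outside formal methods, local robustness is the property that a network's prediction is stable under small perturbations of a given input. One standard formalization, where F is the network, x a fixed input, and ε a perturbation budget (notation here is illustrative, not tied to any one paper):

    % Local robustness of F at x: every point within an l-infinity ball
    % of radius epsilon around x receives the same label as x itself.
    \forall x'.\; \lVert x' - x \rVert_\infty \le \varepsilon \implies F(x') = F(x)

Proving this even for a single input is difficult because F is highly nonlinear, which is what makes abstraction-based reasoning attractive.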

I have also done some work on using machine learning to improve program analysis and program synthesis tools. For example, the aforementioned PLDI paper used machine learning to guide abstraction refinement. Before that, I worked on a system that uses machine learning to automatically learn appropriate predicates for a predicate-abstraction-based synthesis tool (CAV'18).

Teaching

All classes were taught at Reed unless otherwise noted.

Publications