I am an assistant professor in the computer science department at Reed College. My research focuses on the intersection of program analysis and machine learning, especially with an eye toward ensuring the safety of machine learning systems. Specifically, my recent work has explored the use of neurosymbolic programming to develop agents that can interact safely with an environment.
Prior to joining Reed, I earned my PhD at UT Austin, advised by Isil Dillig and Swarat Chaudhuri.
Research
My research broadly covers the intersection of programming languages and machine learning. In particular, I am most interested in developing techniques for proving the safety of systems with machine learning components. My early work in this domain focused on using abstraction refinement to prove local robustness properties (PLDI'19). More recently I have worked on incorporating ideas from PL research to develop a framework for deep reinforcement learning with formally guaranteed safety (NeurIPS'20 and ICLR'23) and robustness (SaTML '24).
In addition, I have done some work on using machine learning to improve program analysis and program synthesis tools. For example, the aforementioned PLDI paper used machine learning to guide abstraction refinement. Before that, I worked on a system which uses machine learning to automatically learn appropriate predicates for a predicate-abstraction-based synthesis tool (CAV'18).
Teaching
All classes were taught at Reed unless otherwise noted.
- CSCI 121: Computer Science I (Fall 2024)
- CSCI 221: Computer Science II (Fall 2023, Spring 2024, Spring 2025)
- CSCI 378: Deep Learning (Spring 2024, Spring 2025)
- CSCI 384: Programming Languages (Fall 2023)
- CSCI 389: Computer Systems (Fall 2024)
- (UT Austin) CS 342: Neural Networks (Spring 2021)
Publications
- Certifiably Robust Reinforcement Learning through Model-Based Abstract Interpretation. Chenxi Yang, Greg Anderson, Swarat Chaudhuri. SaTML '24.
- Guiding Safe Exploration with Weakest Preconditions. Greg Anderson, Swarat Chaudhuri, Isil Dillig. At ICLR '23. Tool available here.
- Neurosymbolic Reinforcement Learning with Formally Verified Exploration. Greg Anderson, Abhinav Verma, Isil Dillig, Swarat Chaudhuri. At NeurIPS '20. Tool available here.
- Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness. Greg Anderson, Shankara Pailoor, Isil Dillig, Swarat Chaudhuri. At PLDI '19 (Distinguished Paper). Tool available here.
- Learning Abstractions for Program Synthesis. Xinyu Wang, Greg Anderson, Isil Dillig, Ken McMillan. At CAV '18.
- Formal Analysis of the Compact Position Reporting Algorithm. Aaron Dutle, Mariano Moscato, Laura Titolo, César Muñoz, Gregory Anderson, François Bobot. Formal Aspects of Computing '20.