I am a PhD student at UC Berkeley working on adversarial machine learning for learning-enabled robotic systems. My interests lie in AI safety, security, and AI for security. I’m also interested in the societal implications of digital technologies. I’m advised by Claire Tomlin and Shankar Sastry and have affiliations with Berkeley’s Artificial Intelligence Research (BAIR) Lab and the Institute of Transportation Studies (ITS).
TECHNICAL EXPERTISE AND INTERESTS
- Cybersecurity: I interned at Facebook in Summer 2021 as a network security engineer, developing programs to detect unauthorized network behavior. More recently, I led an AI security project in which we built an automated repair tool that uses AI to detect and patch vulnerabilities in open-source code.
- Safe Machine Learning and Control: We have demonstrated the effects of adversarial attacks on autonomous systems through their machine learning components. We’ve studied ways of guaranteeing safe learning and control for robotic systems, using methods like reachability analysis and Gaussian processes. When combining reachability with reinforcement learning, we’ve shown (1) that methods like decomposition, warm-starting, and adaptive grids can speed up computation enough for practical safe learning, (2) how reachability can yield more effective robotic safety criteria for designing better safety controllers, and (3) how richer information structures of human behavior can reduce the conservativeness of safe autonomous driving.
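To give a flavor of the reachability-based safety ideas above, here is a minimal toy sketch (my illustration only, not our research code): a 1-D vehicle approaching an obstacle, where a simple reachability-style value function (the gap to the obstacle minus the worst-case stopping distance) triggers a switch from the task controller to a safe braking controller before the unsafe set is reached.

```python
# Toy safety-filter sketch: a hypothetical 1-D double integrator.
# All names and parameters here are illustrative assumptions.

def value(x, v, x_obs=10.0, a_max=2.0):
    """Safety margin: gap to the obstacle minus worst-case stopping distance."""
    stopping = max(v, 0.0) ** 2 / (2 * a_max)
    return (x_obs - x) - stopping

def safety_filter(x, v, u_task, a_max=2.0, eps=0.5):
    """Override the task control with maximal braking near the unsafe set."""
    if value(x, v) <= eps:
        return -a_max  # safe controller takes over: brake hard
    return u_task      # otherwise the task controller acts freely

def simulate(x=0.0, v=0.0, u_task=1.0, dt=0.05, steps=400):
    """Euler-integrate the closed loop; the filter keeps x below the obstacle."""
    for _ in range(steps):
        u = safety_filter(x, v, u_task)
        x += v * dt
        v += u * dt
    return x, v
```

The design choice this sketch mirrors is the "least-restrictive" pattern from reachability-based safe learning: the learning or task controller is untouched except on the boundary of the computed safe set, where a precomputed safe policy overrides it.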
- Autonomous Driving + Traffic Control: Autonomous driving and traffic control have been among our main focus domains for AI and control. We’ve explored frameworks for developing control strategies for human-in-the-loop traffic flow smoothing on real highways and have shown how to use explicit MPC and LQR control to solve routing games for optimal traffic flow.
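As a small illustration of the LQR machinery behind traffic-flow smoothing, here is a hedged toy sketch (my own simplified model, not the models from our work): a single automated vehicle regulating its gap error and relative speed to the car ahead, with the feedback gain computed by iterating the discrete-time Riccati recursion to a fixed point.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        # K = (R + B'PB)^{-1} B'PA;  P = Q + A'P(A - BK)
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy car-following model (illustrative assumption):
# state = [gap error, relative speed], control = acceleration command.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])  # penalize gap error more than speed mismatch
R = np.array([[0.5]])    # penalize harsh accelerations
K = lqr_gain(A, B, Q, R)
```

Applying `u = -K @ x` in closed loop drives both the gap error and the relative speed to zero, which is the smoothing effect: the automated car damps out disturbances instead of amplifying them down the platoon.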