I received my Ph.D. from Princeton, where I was advised by Sanjeev Arora (research group page, ML theory at Princeton).
I focus on machine learning theory and applied probability, and also have broad interests in theoretical computer science and related math.
Although machine learning (and deep learning in particular) has made great advances in recent years, our mathematical understanding of it is shallow. Learning problems can be highly nonconvex, yet tractable in practice. What hidden structure do these problems have, and how can we design algorithms to take advantage of it?
Current interests include:
- Probabilistic modeling (Bayesian inference): How can we design provable algorithms for learning probability distributions and for sampling from them? How can we improve classical algorithms like Markov chain Monte Carlo, or test the quality of their samples? (A minimal Langevin sampler is sketched after this list.)
- Control theory and reinforcement learning: Finding the optimal control for a known linear dynamical system is a well-studied problem. Reinforcement learning, however, deals with learning how to act in unknown, combinatorially complex systems, where algorithms are heuristic and slow. How can we bridge this gap? (An LQR sketch follows the Langevin example below.)
- Neural networks: Neural networks tackle highly nonconvex optimization problems, yet perform remarkably well in practice. Why? What algorithmic improvements can we obtain by understanding their theoretical foundations more deeply?
- Natural language understanding: Language is a fundamental part of human intelligence and a big frontier for machine learning. How do we create machines that can understand “grammar” and “semantics”?
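To make the sampling bullet concrete, here is a minimal sketch of the unadjusted Langevin algorithm (ULA), a classical Markov chain Monte Carlo baseline for smooth log-concave targets. It is a generic illustration, not code from any of the papers below; the names (`langevin_sample`, `grad_log_p`, `step`) are my own.

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=1e-2, n_steps=10_000, rng=None):
    """Unadjusted Langevin algorithm (ULA), a baseline MCMC sampler.

    Iterates x <- x + step * grad log p(x) + sqrt(2 * step) * N(0, I);
    for smooth, log-concave p and a small step size, the chain's
    stationary distribution is close to p.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples[t] = x
    return samples

# Sanity check: a 2D standard Gaussian, where grad log p(x) = -x.
chain = langevin_sample(lambda x: -x, x0=np.zeros(2), step=1e-2, n_steps=50_000, rng=0)
print(chain[10_000:].mean(axis=0))  # close to [0, 0]
print(chain[10_000:].std(axis=0))   # close to [1, 1]
```

Chains like this one are exactly what breaks down on multimodal targets, which is the regime the simulated tempering paper below addresses.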
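For the control bullet, the textbook answer for a known linear system is the linear quadratic regulator (LQR), obtained by iterating the discrete-time Riccati equation to a fixed point. This is again a hedged sketch under standard assumptions (stabilizable (A, B), positive-definite costs); `lqr_gain` and the double-integrator example are illustrative.

```python
import numpy as np

def lqr_gain(A, B, Q, R, n_iters=500):
    """Infinite-horizon LQR gain for the *known* system x' = A x + B u.

    Iterates the Riccati recursion
        P <- Q + A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A
    and returns K such that u = -K x minimizes sum_t (x^T Q x + u^T R u).
    """
    P = Q.copy()
    for _ in range(n_iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Example: stabilize a double integrator (position and velocity, force input).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(np.abs(np.linalg.eigvals(A - B @ K)))  # all < 1: closed loop is stable
```

When A and B are unknown, none of this applies directly, which is the gap the reinforcement learning and control papers below try to bridge.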
The publication list is available as a pdf.
Probabilistic modeling and sampling
Estimating Normalizing Constants for Log-Concave Distributions: Algorithms and Lower Bounds
with Rong Ge and Jianfeng Lu.
In submission. [arXiv, pdf]
Online Sampling from Log-Concave Distributions
with Oren Mangoubi and Nisheeth Vishnoi.
NeurIPS 2019. [arXiv, pdf] [webpage](Online Sampling from Log-Concave Distributions.html)
Beyond Log-concavity: Provable Guarantees for Sampling Multi-modal Distributions using Simulated Tempering Langevin Monte Carlo
with Rong Ge and Andrej Risteski.
NeurIPS 2018. [webpage]
Reinforcement learning and control theory
Statistical Guarantees for Learning an Autoregressive Filter
with Cyril Zhang.
Preprint. [arXiv, pdf]
Spectral Filtering for General Linear Dynamical Systems
with Elad Hazan, Karan Singh, Cyril Zhang, and Yi Zhang.
NeurIPS 2018 (oral). [arXiv, pdf]
Towards Provable Control for Unknown Linear Dynamical Systems
with Sanjeev Arora, Elad Hazan, Karan Singh, Cyril Zhang, and Yi Zhang.
ICLR 2018 workshop. [ICLR page, pdf]
Neural networks
Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
with Rohith Kuditipudi, Xiang Wang, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, and Sanjeev Arora.
NeurIPS 2019. [arXiv, pdf]
On the Ability of Neural Nets to Express Distributions
with Rong Ge, Tengyu Ma, Andrej Risteski, and Sanjeev Arora.
COLT 2017, PMLR 65:1271-1296. [arXiv, pdf, webpage]
Theoretical computer science
Quadratic polynomials of small modulus cannot represent OR
Unpublished, 2015. [arXiv, pdf]