Research
Posted: 2017-08-07, Modified: 2024-08-29
Tags: math
Parent: Math
Children: Online Sampling from Log-Concave Distributions, Simulated tempering Langevin Monte Carlo, l-adic properties of partition functions
I received my Ph.D. from Princeton, where I was advised by Sanjeev Arora.
I focus on machine learning theory and applied probability, and also have broad interests in theoretical computer science and related math.
Current interests include:
The publication list is also available as a PDF.
[A] denotes alphabetical order of authors.
Learning mixtures of Gaussians using diffusion models
[A] Khashayar Gatmiry, Jonathan Kelner, Holden Lee.
Preprint, 2024. [arXiv]
Provable benefits of score matching
Chirag Pabbaraju, Dhruv Rohatgi, Anish Prasad Sevekari, Holden Lee, Ankur Moitra, Andrej Risteski.
NeurIPS 2023 (Spotlight). [arXiv]
The probability flow ODE is provably fast
[A] Sitan Chen, Sinho Chewi, Holden Lee, Yuanzhi Li, Jianfeng Lu, and Adil Salim.
NeurIPS 2023. [arXiv]
Improved Analysis of Score-based Generative Modeling: User-Friendly Bounds under Minimal Smoothness Assumptions
[A] Hongrui Chen, Holden Lee, and Jianfeng Lu.
ICML 2023. [arXiv]
Pitfalls of Gaussians as a noise distribution in NCE
Holden Lee, Chirag Pabbaraju, Anish Sevekari, and Andrej Risteski.
ICLR 2023, NeurIPS 2022 Workshop on Self-Supervised Learning. [arXiv]
Convergence of score-based generative modeling for general data distributions
[A] Holden Lee, Jianfeng Lu, and Yixin Tan.
ALT 2023, NeurIPS 2022 Workshop on Score-Based Methods. [arXiv]
Convergence for score-based generative modeling with polynomial complexity
[A] Holden Lee, Jianfeng Lu, and Yixin Tan.
NeurIPS 2022 (oral). [arXiv, slides]
Universal Approximation for Log-concave Distributions using Well-conditioned Normalizing Flows.
Holden Lee, Chirag Pabbaraju, Anish Sevekari, Andrej Risteski.
Sampling from the Continuous Random Energy Model in Total Variation Distance
[A] Holden Lee, Qiang Wu.
Preprint, 2024. [arXiv]
Convergence Bounds for Sequential Monte Carlo on Multimodal Distributions using Soft Decomposition
[A] Holden Lee, Matheau Santana-Gijzen.
Preprint, 2024. [arXiv]
Sampling List Packings
[A] Evan Camrud, Ewan Davies, Alex Karduna, Holden Lee.
Preprint, 2023. [arXiv]
Parallelising Glauber Dynamics
Holden Lee.
RANDOM 2024. [arXiv, presentation]
Fisher information lower bounds for sampling
[A] Sinho Chewi, Patrik Gerber, Holden Lee, Chen Lu.
ALT 2023. [arXiv]
Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods
[A] Frederic Koehler, Holden Lee, and Andrej Risteski.
Approximation algorithms for the random-field Ising model
[A] Tyler Helmuth, Holden Lee, Will Perkins, Mohan Ravichandran, and Qiang Wu.
SIAM Journal on Discrete Mathematics, 37(3):1610–1629, 2024. [arXiv, pdf]
Efficient sampling from the Bingham distribution
[A] Rong Ge, Holden Lee, Jianfeng Lu, and Andrej Risteski.
Estimating Normalizing Constants for Log-Concave Distributions: Algorithms and Lower Bounds
[A] Rong Ge, Holden Lee, and Jianfeng Lu.
STOC 2020. [arXiv, pdf, STOC 2020:579–586, slides, video]
Online Sampling from Log-Concave Distributions
[A] Holden Lee, Oren Mangoubi, and Nisheeth Vishnoi.
Beyond Log-concavity: Provable Guarantees for Sampling Multi-modal Distributions using Simulated Tempering Langevin Monte Carlo. [webpage]
[A] Rong Ge, Holden Lee, and Andrej Risteski.
Extracting Latent State Representations with Linear Dynamics from Rich Observations
Abraham Frandsen, Rong Ge, and Holden Lee.
Improved rates for identification of partially observed linear dynamical systems
Holden Lee.
No-Regret Prediction in Marginally Stable Systems
[A] Udaya Ghai, Holden Lee, Karan Singh, Cyril Zhang, and Yi Zhang.
COLT 2020. [arxiv, pdf, slides, summary slide, videos]
Statistical Guarantees for Learning an Autoregressive Filter
[A] Holden Lee and Cyril Zhang.
Spectral Filtering for General Linear Dynamical Systems
[A] Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, and Yi Zhang.
Towards Provable Control for Unknown Linear Dynamical Systems.
[A] Sanjeev Arora, Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, and Yi Zhang.
When is a Language Process a Language Model?
Li Du, Holden Lee, Jason Eisner, Ryan Cotterell.
ACL 2024.
Principled Gradient-based Markov Chain Monte Carlo for Text Generation
Li Du, Afra Amini, Lucas Torroba Hennigen, Xinyan Velocity Yu, Jason Eisner, Holden Lee, Ryan Cotterell.
Preprint, 2023. [arXiv]
Connecting Pre-trained Language Model and Downstream Task via Properties of Representation
Chenwei Wu, Holden Lee, Rong Ge.
Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets.
Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, and Sanjeev Arora.
On the Ability of Neural Nets to Express Distributions.
Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, and Sanjeev Arora.
COLT 2017. [arXiv, pdf, PMLR 65:1271–1296, webpage]
How Flawed is ECE? An Analysis via Logit Smoothing
[A] Muthu Chidambaram, Holden Lee, Colin McSwiggen, Semon Rezchikov.
ICML 2024. [arXiv]
Quadratic polynomials of small modulus cannot represent OR.
Holden Lee.
l-adic properties of partition functions.
[A] Eva Belmont, Holden Lee, Alexandra Musat, and Sarah Trebat-Leder.
Monatshefte für Mathematik, 173(1):1–34, 2014. [arXiv, pdf, presentation, webpage]