Consider adding smoothing as a regularizer. Smoothing (e.g., add-\(\lambda\)/Laplace smoothing) is widely used in NLP; here it would keep the iterates bounded away from the boundary of the simplex \(\Delta_n\), where log-type gradients blow up. See the sketch below.
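A minimal sketch of the idea, under my own assumptions (the iterate is a point of \(\Delta_n\), the objective has a negative-entropy term \(\sum_i p_i \log p_i\), and the names `smooth` and `eps` are illustrative, not from any source): mixing the iterate with the uniform distribution keeps every coordinate at least \(\varepsilon/n\), so each gradient entry \(\log p_i + 1\) is at most \(\log(n/\varepsilon) + 1\) in magnitude.

```python
import numpy as np

def smooth(p, eps=1e-3):
    """Mix p with the uniform distribution on n points.

    The result stays in the simplex, and every entry is at least
    eps / n, so log(p_i) terms and their gradients stay bounded.
    """
    n = len(p)
    return (1.0 - eps) * p + eps / n

# Gradient of the negative-entropy term f(p) = sum_i p_i log p_i
# is log(p_i) + 1, which diverges as p_i -> 0.
p = np.array([1e-12, 0.5, 0.5 - 1e-12])
print(np.log(p) + 1.0)           # first entry ~ -26.6: nearly unbounded
print(np.log(smooth(p)) + 1.0)   # every entry bounded by log(n/eps) + 1
```

This is the same trick as add-\(\lambda\) smoothing of count distributions in NLP, just viewed as a projection away from the boundary rather than as a prior.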
TODO: Read Mike Collins’s notes on NLP.
Nicolas Boumal sent me references on optimization on manifolds, which I glanced through. TODO: Spend a day, or a few days, reading through Optimization on Manifolds.
Started reading Vishnoi’s slime mold paper. TODO: Keep going.
Reviewed Chs. 9–10 of Convex Optimization (Boyd & Vandenberghe).
TODO: Make sure I have a solid grasp of introductory convex optimization (e.g., work through a course), and find more advanced books.