Intro
- Contributions
- Formalizes representation learning, unifying disparate settings.
- Quantifies the “utility” gained from representation learning.
- Proves separation results between representation learning and simpler algorithms.
- “Bayes+”: a guarantee stronger than Bayesian posterior inference (cf. 2.2).
- Why representation learning?
- Allows semi-supervised learning.
- Simpler methods need too many samples.
- Provably better than manifold learning in some cases.
- The framework
- Many-to-one map \(x\mapsto h\). \(h\) is “high-level” representation.
- Generative model \(h\to x\).
- Similarity in the latent space (of \(h\)) is more informative.
- Definition: A \((\ga,\be)\)-valid decoder has \(\Pj(\ve{f(x)-h}\le (1-\ga) \ve{h})\ge \be\). (Think of \(\ga,\be\approx 1\).)
- Utility: if \(C\) is \(\al\)-Lipschitz, then with probability \(\ge\be\), \(\ve{C(f(x)) - C(h)}_\iy\le (1-\ga)\al \ve{h}\). (Toy numerical check after this outline.)
- Examples
- Clustering
- Manifold
- Kernel learning
- Non-examples
- Nearest neighbor (provably weaker in some settings)
- LSH - this preserves distance, which is not our goal.
- Contrast [HM16], which is assumption-free and essentially lossless compression (e.g., Lempel-Ziv). The present notion is different: e.g., it allows throwing away noise.
- Compare to the usual approach: fit \(\te\) by maximizing log-likelihood (MLE), then compute \(\amax_h p_\te(h|x)\) (MAP inference).
- Unlike Bayesian inference, which gives a distribution over \(h\), we output a single \(h\).
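A toy numerical check of the \((\ga,\be)\)-valid decoder definition and the Lipschitz utility bound above. The generative model (\(h\) on a sphere of radius 10 plus Gaussian noise), the trivial decoder \(f(x)=x\), and the choice \(C=\al\cdot\tanh\) are illustrative assumptions, not the paper's constructions.

```python
import numpy as np

# Toy check of the (gamma, beta)-valid decoder definition
#   P( ||f(x) - h|| <= (1 - gamma) * ||h|| ) >= beta
# and of the Lipschitz utility bound for an alpha-Lipschitz C.
# The generative model, decoder, and C below are illustrative assumptions.
rng = np.random.default_rng(0)
d, n_trials = 20, 5000
gamma, alpha = 0.9, 2.0

def C(z):                      # an alpha-Lipschitz map (tanh is 1-Lipschitz)
    return alpha * np.tanh(z)

hits, util_ok = 0, 0
for _ in range(n_trials):
    h = rng.normal(size=d)
    h *= 10 / np.linalg.norm(h)            # "high-level" representation, ||h|| = 10
    x = h + 0.15 * rng.normal(size=d)      # toy generative model h -> x
    f_x = x                                # trivial decoder f(x) = x
    good = np.linalg.norm(f_x - h) <= (1 - gamma) * np.linalg.norm(h)
    hits += good
    if good:                               # utility bound should hold on good draws
        util_ok += np.max(np.abs(C(f_x) - C(h))) <= (1 - gamma) * alpha * np.linalg.norm(h)

print(f"empirical beta ~ {hits / n_trials:.3f}")
print(f"utility bound held on every good draw: {util_ok == hits}")
```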
2
2.2
- Encoder exists \(\implies\) given \(x\), \(h\) is concentrated around \(f(x)\), almost uniquely defined.
- Having concentration is stronger than just being able to do inference.
3
3.1 Topic modeling
- \(k\) topics
- Each topic \(A_i\in \R^M\) is a distribution on \(M\) words.
- Mixture coefficients \(h_i\).
- Draw a bag of words \(x\sim \sum_i h_i A_i\), \(x\in \Z^N\) (\(N\) i.i.d. word draws).
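A minimal sketch of this generative process; the sizes and the Dirichlet choices for the topics \(A_i\) and the mixture \(h\) are illustrative assumptions.

```python
import numpy as np

# Sketch of the topic-model generative process from 3.1.
# Vocabulary size M, k topics, N words per document; the Dirichlet draws for
# the topics A_i and the mixture h are illustrative assumptions.
rng = np.random.default_rng(0)
M, k, N = 1000, 5, 50

A = rng.dirichlet(np.ones(M), size=k)      # A_i in R^M: each topic is a distribution over M words
h = rng.dirichlet(np.ones(k))              # mixture coefficients h_i >= 0, summing to 1
word_dist = h @ A                          # the mixture  sum_i h_i A_i
words = rng.choice(M, size=N, p=word_dist) # draw the bag of N words
x = np.bincount(words, minlength=M)        # word-count representation of the document
print(x.sum(), "words drawn;", (x > 0).sum(), "distinct words")
```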
3.2 Log-linear model (continued in 4.2):
\[p(x,h) = p(h)p(x|h).\]
4
4.1
Topic modeling: want \(\ell_\iy\to \ell_1\) condition number to be small.
4.2
- \(h\in \R^d\) drawn uniformly at random from the unit sphere.
- \(\Pj(x|h)\propto e^{\an{W_x,h}}\).
- \(W_x = Bv\), \(B=O(1)\), \(v\sim N(0,I)\).
- Take \(f(x) = \nv{\sum_i W_{x_i}}\).
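A sketch of this model and the encoder \(f\). Treating \(B\) as a scalar scale and the sizes below are my assumptions; with enough words, \(f(x)\) should line up with \(h\).

```python
import numpy as np

# Sketch of the log-linear model in 4.2: h uniform on the unit sphere,
# word vectors W_x = B * v with v ~ N(0, I) (B taken as a scalar here),
# words drawn with P(x | h) proportional to exp(<W_x, h>), and the encoder
# f(x) = normalized sum of the observed word vectors. Sizes are assumptions.
rng = np.random.default_rng(0)
M, d, N, B = 5000, 50, 200, 1.0            # vocabulary, latent dim, words drawn, scale

W = B * rng.normal(size=(M, d))            # word vectors W_x = B v
h = rng.normal(size=d)
h /= np.linalg.norm(h)                     # latent h uniform on the unit sphere

logits = W @ h                             # <W_x, h> for every word x
p = np.exp(logits - logits.max())
p /= p.sum()                               # P(x | h) proportional to exp(<W_x, h>)
words = rng.choice(M, size=N, p=p)         # the observed document

f_x = W[words].sum(axis=0)                 # encoder: sum the observed word vectors ...
f_x /= np.linalg.norm(f_x)                 # ... and normalize
print("cos(f(x), h) =", float(f_x @ h))    # should be well above 0
```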
5
5.1 Lower bounds for nearest neighbors
- \(M\) movies, \(k\) genres
- \(\ve{h}_0 = s\), \(h\in \{0,1\}^k\)
- Each user draws \(T\) movies: \(\ve{x}_0=T\).
- For \(T\ll \sqrt M\), can’t learn using NN, because
- Users will share few movies in common.
- Users who share movies won’t share genres. (Construct an example where some movies belong to all genres.)
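A back-of-the-envelope simulation of the first point (uniform random movie choices are an illustrative simplification): two users with \(T\) movies each out of \(M\) overlap in about \(T^2/M\) movies, so for \(T\ll\sqrt M\) a typical pair shares nothing and NN has nothing to work with.

```python
import numpy as np

# Overlap of two random T-subsets of M movies: expected size ~ T^2 / M,
# so for T << sqrt(M) most pairs of users share no movie at all.
# Uniform movie choices are an illustrative assumption.
rng = np.random.default_rng(0)
M, T, n_pairs = 10_000, 30, 2000

overlaps = []
for _ in range(n_pairs):
    u = set(rng.choice(M, size=T, replace=False))
    v = set(rng.choice(M, size=T, replace=False))
    overlaps.append(len(u & v))
overlaps = np.array(overlaps)

print("mean overlap       :", overlaps.mean())        # ~ T^2 / M = 0.09
print("pairs sharing >= 1 :", (overlaps >= 1).mean())
```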
5.3
- \(k\) genres
- \(T=\Om(\ln M)\) ratings per user
- \(s\) genres per user
- Label \(\ell(h) = \sgn(\an{w,2h-1})\), a linear threshold over the genre vector.
Can do semi-supervised learning by first doing representation learning using [AKM16] (§4.1).
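A hedged sketch of that pipeline. The unsupervised step (recovering the genre vector \(h\) from ratings, e.g. via [AKM16]) is not implemented here; assume it already produced \(\hat h\approx h\), and check that the linear label \(\ell(h)=\sgn(\an{w,2h-1})\) is then fit from a modest number of labeled users. All sizes and the least-squares fit are my assumptions.

```python
import numpy as np

# Semi-supervised sketch for 5.3: pretend the representation-learning step
# already recovered each user's genre vector h (NOT implemented here), then
# fit the linear rule l(h) = sgn(<w, 2h - 1>) from a few labeled users.
rng = np.random.default_rng(0)
k, s, n_label, n_test = 10, 3, 50, 2000   # genres, genres per user, labeled / test users

def sample_users(n):
    H = np.zeros((n, k))
    for i in range(n):
        H[i, rng.choice(k, size=s, replace=False)] = 1.0   # s genres per user
    return H

w_true = rng.normal(size=k)

def label(H):
    return np.sign((2 * H - 1) @ w_true)                   # l(h) = sgn(<w, 2h - 1>)

H_tr, H_te = sample_users(n_label), sample_users(n_test)
y_tr, y_te = label(H_tr), label(H_te)

w_hat, *_ = np.linalg.lstsq(2 * H_tr - 1, y_tr, rcond=None)  # fit w by least squares
acc = np.mean(np.sign((2 * H_te - 1) @ w_hat) == y_te)
print(f"test accuracy with {n_label} labeled users: {acc:.2f}")  # should be well above chance
```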
Notes
- Check Thm. 4.1 using [AKM16] - review “condition number”.
- Check Thm. 5.1. Look up background on NN.
- Check Thm. 5.4.