Posted: 2016-10-10
Modified: 2016-10-10
Tags: none
Threads
- PMI - get some results!
- SoS - lectures 2 and 3
- DL: do experiments suggested in DL generalization
- (*) NN learns DL. (Mon, Tue) - Partially written up. Running into trouble with some inequalities.
- DL generalizations.
- Stability of SGD
- Graphical models reading notes (Thu)
- Alexa references
- Learn BCO (Tue, Wed)
- Papers (finish reading, summarize) - see BCO notes above and HMM notes (Tue, Wed)
- Analyze Arora and Ge’s NMF algorithm in the presence of noise. Exactly how much noise can it tolerate?
- TODO Read this paper: [CFP16] Assessing significance in a Markov chain without mixing
Talk with Arora 10/13 (Thu)
- DL experiments
- Convergence by initializing with samples?
- Convergence with backprop?
- NNDL
- PMI - fix training problem, @Yingyu for advice on code
- Explaining backprop, the network chain rule \(\frac{\partial f}{\partial v} = \sum_i \frac{\partial f}{\partial y_i} \frac{\partial y_i}{\partial v}\).
- Come up with a class of MDPs on an exponentially large state space that is interesting and tractable. Thoughts
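The network chain rule above can be sanity-checked numerically. A minimal sketch (the particular functions \(y_i\) and \(f\) here are made up for illustration), comparing the chain-rule gradient against a central finite difference:

```python
import math

# f depends on v only through intermediates y_1, y_2:
#   y1 = v^2, y2 = sin(v), f = 3*y1 + 5*y2
def forward(v):
    y1 = v ** 2          # dy1/dv = 2v
    y2 = math.sin(v)     # dy2/dv = cos(v)
    return 3 * y1 + 5 * y2  # df/dy1 = 3, df/dy2 = 5

def grad(v):
    # chain rule: df/dv = sum_i (df/dy_i)(dy_i/dv) = 3*(2v) + 5*cos(v)
    return 3 * (2 * v) + 5 * math.cos(v)

v, h = 0.7, 1e-6
numeric = (forward(v + h) - forward(v - h)) / (2 * h)
assert abs(grad(v) - numeric) < 1e-5
```

Backprop is just this rule applied recursively, node by node, through the computation graph.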
Other
- [HMR16]: on the phrase “correct initial state that generated \(y_t\)”: the evolution is deterministic!