Interests

Posted: 2016-03-13, Modified: 2016-03-13

Tags: none

“Standard” topics

  1. How can we give theoretical guarantees for machine learning? See Sanjeev Arora’s page.
    • Model the input distribution (e.g. sparsity); a sparse recovery sketch follows this list.
    • Explain why standard approaches (e.g. neural nets) work, say with respect to the input distribution. See Neural net modeling.
    • Use the theoretical understanding to obtain better algorithms.
  2. Apply math (analysis, geometry, probability) to machine learning problems.
    • For example, understand threshold phenomena in community detection and other graph problems; the known sharp threshold for the two-community block model is quoted after this list.
    • Many problems in convex optimization can be understood in terms of choosing the right kernel, random walks, convex geometry, simulated annealing (cf. statistical physics; a toy sketch follows this list), “manifold learning,” and so on.
    • Understand the “random” case of objects like graphs and neural nets, analyzing them with ideas from statistical physics, random matrix theory, etc.
    • Questions in data science can involve topology, differential geometry, etc.
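
To make the sparsity bullet concrete, here is a minimal sketch of sparse recovery via ISTA (iterative soft-thresholding for the LASSO), assuming Gaussian measurements of a sparse vector. The problem sizes, noise level, regularization weight, and step size are all illustrative choices, not recommendations.

```python
import numpy as np

# Toy sparse recovery: observe y = A @ x_star + noise with x_star sparse,
# then recover it with ISTA (proximal gradient descent for the LASSO).
rng = np.random.default_rng(0)
n, d, k = 50, 200, 5                     # measurements, dimension, sparsity
A = rng.normal(size=(n, d)) / np.sqrt(n)
x_star = np.zeros(d)
x_star[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_star + 0.01 * rng.normal(size=n)

lam = 0.05                               # LASSO regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient

x = np.zeros(d)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))   # gradient step on 0.5 * ||Ax - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))
```

The system is underdetermined (50 equations, 200 unknowns), so recovery is only possible because of the sparsity assumption; compressed sensing theory makes this precise.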
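
For the community-detection bullet: in the two-community stochastic block model on $n$ vertices, with within-community edge probability $a/n$ and between-community probability $b/n$, detecting the communities better than chance is possible if and only if

$$(a - b)^2 > 2(a + b),$$

a sharp threshold conjectured by Decelle, Krzakala, Moore, and Zdeborová and proved by Mossel–Neeman–Sly and Massoulié.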
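
Simulated annealing itself fits in a few lines; here is a toy sketch on a bumpy one-dimensional objective (the objective, proposal width, and cooling schedule are placeholder choices, not a serious implementation).

```python
import math
import random

def f(x):
    return x**2 + 3 * math.sin(5 * x)    # non-convex: many local minima

random.seed(0)
x = 4.0                                   # arbitrary starting point
best_x, best_f = x, f(x)
T = 1.0                                   # initial temperature
for _ in range(5000):
    proposal = x + random.gauss(0, 0.5)   # local random move
    delta = f(proposal) - f(x)
    # Always accept downhill moves; accept uphill with prob exp(-delta/T).
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = proposal
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    T *= 0.999                            # geometric cooling

print(f"approximate minimizer: x = {best_x:.3f}, f(x) = {best_f:.3f}")
```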

Other topics

  1. How can we obtain theoretical guarantees for the behavior of an AI (e.g. on the level of logic)? See my question. Formulate the question of AI control mathematically and prove theorems. cf. Paul Christiano.
  2. How can we combine ML/data-driven and logical approaches in domains that require modeling with logic, e.g. automated theorem proving, NLP (say, combining CFG-based with neural net approaches), and reasoning about sequences of events with causality? What is the structure of these problems that allows tractable inference? (For instance, sentences in natural language are “mostly unambiguous” once you understand them semantically, unlike worst-case instances.)
  3. Build modular AI systems. For example, how can AlphaGo be turned into something anyone can implement a baby version of?
  4. Explore creativity. For example, make a program that writes poetry or stories, or generates maps for an RPG; a baby Markov-chain version is sketched after this list. cf. Neural net dreams. cf. Hofstadter’s FARG.
  5. How do humans reason? What are the learning problems that humans face, and how are our neural algorithms optimized or not for those problems? How does “bounded computation” come into play? cf. Jacob Steinhardt. Ref: Rationally Speaking #154, Tom Griffiths.
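
A baby version of item 4, as a word-level Markov chain: learn bigram transitions from a tiny corpus and random-walk on them to generate lines. The corpus below is a made-up placeholder; the point is only how little machinery a first version needs.

```python
import random
from collections import defaultdict

# Placeholder corpus; any plain-text source works.
corpus = """the moon is a white cat
the cat walks on the roof of night
night folds the city in its paws
the city dreams of the moon"""

# Learn bigram transitions: word -> list of observed next words.
transitions = defaultdict(list)
for line in corpus.splitlines():
    words = line.split()
    for w1, w2 in zip(words, words[1:]):
        transitions[w1].append(w2)

def generate_line(start, max_words=8):
    """Random walk on the bigram graph, starting from `start`."""
    words = [start]
    while len(words) < max_words and transitions[words[-1]]:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

random.seed(1)
for _ in range(4):
    print(generate_line("the"))
```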

Sanjeev Arora’s group

Summaries/thoughts on the papers.