Weekly summary 2017-02-25

Posted: 2017-02-21, Modified: 2017-02-21

Tags: neural nets

Adversarial examples in neural networks

See the main page on adversarial examples.

Statement

Neural networks are easily fooled: e.g., an adversary adding a small amount of noise to an input can change the classification from “dog” to “cat” with high confidence. They can be fooled even by a weak adversary with only black-box access!
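
For concreteness, a minimal sketch of the fast gradient sign method (FGSM; Goodfellow et al.), the canonical small-noise attack, assuming a hypothetical differentiable PyTorch classifier `model` and a labeled input `(x, y)`:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.01):
    """Return x perturbed by a small step that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss at the true label y
    loss.backward()
    # An eps-sized step along the gradient's sign is often enough to
    # flip the predicted class while remaining visually imperceptible.
    return (x + eps * x.grad.sign()).detach()
```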

Related to making NNs resistant: have NNs give a confidence bound.
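
One candidate for such a confidence estimate is Monte Carlo dropout (Gal & Ghahramani 2016); a rough sketch, again assuming a hypothetical PyTorch `model` with dropout layers:

```python
import torch

def mc_dropout(model, x, n_samples=20):
    """Mean softmax over stochastic forward passes; the spread is a
    rough measure of the network's uncertainty."""
    model.train()  # keep dropout active at prediction time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)
```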

Ideas:

Literature

Experiments

Theory

Diversity in ML

Literature

GANs

The original formulation of GANs is plagued by mathematical problems (vanishing generator gradients, mode collapse). What are mathematically better alternatives?
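
For reference, the original minimax objective (Goodfellow et al. 2014):

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

At the optimal discriminator the generator is minimizing a Jensen–Shannon divergence, which saturates (zero gradient) when the real and generated distributions have disjoint supports. One recently proposed alternative is the Wasserstein GAN (Arjovsky et al. 2017), which replaces the Jensen–Shannon divergence with the earth mover's distance.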

Decision theory and logical induction

See the page on decision/game theory.

POMDP

Anchor POMDPs.

What are real-life problems involving POMDPs? E.g., dialogue systems such as Alexa, where the user's intent is a hidden state.
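
The defining computation is the Bayes belief update over hidden states; a minimal sketch, assuming hypothetical transition matrices `T[a][s, s']` and observation matrices `O[a][s', o]`:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayes filter: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_pred = b @ T[a]            # predict: push belief through dynamics
    b_new = b_pred * O[a][:, o]  # correct: reweight by observation likelihood
    return b_new / b_new.sum()   # renormalize to a distribution
```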

Logic and ML

???

Meta

What is good research?

Blogging

Learning