Lower bounds for descent algorithms

Emmanuel Abbe

We derive new lower bounds for descent algorithms (e.g., gradient descent, coordinate descent). These bounds imply that deep learning, i.e., neural networks trained by gradient descent, cannot efficiently learn certain classes of functions that are efficiently learnable by other algorithms. Joint work with Colin Sandon (MIT).
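To illustrate the kind of separation such lower bounds formalize, here is a minimal sketch, not taken from the talk: we assume the hard-to-learn class resembles parity functions, a standard example in this line of work, and all variable names and the gauss_gf2 helper below are ours, for illustration only. A parity label over GF(2) is recovered exactly from roughly n samples by Gaussian elimination, an algebraic algorithm that no descent method emulates:

```python
# Illustrative sketch (assumption: the hard class behaves like parities).
# A parity y = <secret, x> mod 2 is learned exactly by Gaussian elimination.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200                    # input dimension, number of samples
secret = rng.integers(0, 2, n)    # hidden parity support, unknown to learner

X = rng.integers(0, 2, (m, n))    # random binary inputs
y = (X @ secret) % 2              # parity labels

def gauss_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination; free vars set to 0."""
    A, b = A.copy() % 2, b.copy() % 2
    rows, cols = A.shape
    row, pivots = 0, []
    for col in range(cols):
        # find a row at or below `row` with a 1 in this column
        pivot = next((r for r in range(row, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]   # swap pivot row into place
        b[[row, pivot]] = b[[pivot, row]]
        for r in range(rows):               # eliminate the column elsewhere
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    x = np.zeros(cols, dtype=int)
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x

x_hat = gauss_gf2(X, y)
# with m >> n, X has full rank over GF(2) w.h.p., so the solution is unique
print("recovered secret:", np.array_equal(x_hat, secret))
```

On the same data, gradient descent on a generic network initialized with no knowledge of the parity typically remains near chance accuracy on held-out inputs; lower bounds of the kind announced above turn this empirical observation into a provable limitation.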