Abstract: Maxout polytopes are defined by feedforward neural networks with the
maxout activation function and non-negative weights after the first layer.
We characterize the parameter spaces and extremal f-vectors of maxout
polytopes for shallow networks, and we study the separating hypersurfaces
which arise when a layer is added to the network. We also show that maxout
polytopes are cubical for generic networks without bottlenecks.
Joint work with Andrei Balakin, Shelby Cox, and Georg Loho.
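
For context, the following recalls the standard maxout activation referenced above: a maxout unit of rank $k$ outputs the maximum of $k$ affine functions of its input. The symbols $k$, $w_j$, $b_j$ below are generic notation for illustration, not notation taken from the abstract.

\[
  \operatorname{maxout}(x) \;=\; \max_{1 \le j \le k} \bigl( w_j^{\top} x + b_j \bigr),
  \qquad x \in \mathbb{R}^n,\; w_j \in \mathbb{R}^n,\; b_j \in \mathbb{R}.
\]

With non-negative weights after the first layer, each subsequent layer takes non-negative combinations and maxima of convex piecewise-linear functions, so the network computes a convex piecewise-linear function; this convexity is what makes it possible to attach a polytope to the network.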