A study of the structure of minima in the loss-function landscape for single-hidden-layer and multi-hidden-layer neural networks.
We build two very simple feed-forward neural networks: modelA with 3 hidden layers and modelB with 1 hidden layer. Both have exactly 112 trainable parameters.
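The exact layer widths are not stated here; as a minimal sketch, the helper below counts trainable parameters of a fully connected net and shows one hypothetical pair of architectures (8 Pima input features, a 2-way output layer, standard dense layers with biases) whose parameter counts both come to 112. The widths are illustrative assumptions, not the repository's actual configuration.

```python
def dense_params(layer_sizes):
    """Trainable parameters (weights + biases) of a fully connected net."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical widths (not specified above): 8 Pima features in,
# 2-way output; chosen so both nets total exactly 112 parameters.
model_a = [8, 6, 4, 4, 2]   # 3 hidden layers
model_b = [8, 10, 2]        # 1 hidden layer

print(dense_params(model_a), dense_params(model_b))  # 112 112
```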
The networks are trained on the Pima Indians Diabetes dataset, available on Kaggle: https://www.kaggle.com/uciml/pima-indians-diabetes-database.
The evolution of the networks during training is studied, looking for differences in the structure of the loss landscape between the two models.
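One common way to probe loss-landscape structure between two trained solutions is to evaluate the loss along the line segment connecting their parameter vectors. The sketch below is a generic version of that probe, not this project's actual analysis; the toy quadratic loss is an assumption used only to make the example runnable.

```python
import numpy as np

def loss_along_line(loss_fn, theta_a, theta_b, steps=11):
    """Evaluate loss_fn on the segment between two parameter vectors,
    a standard probe of loss-landscape structure between minima."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas]

# Toy quadratic loss with two parameters, just to illustrate the probe.
loss = lambda t: float(np.sum(t ** 2))
print(loss_along_line(loss, np.array([1.0, 0.0]), np.array([0.0, 1.0]), steps=3))
# → [1.0, 0.5, 1.0]
```

A flat or low barrier along the segment suggests the two minima lie in a connected low-loss region; a high barrier suggests separate basins.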