There is a theoretical proof that a network with a single hidden layer and enough sigmoid units can learn any decision boundary. Empirical practice, however, shows that learning good data representations demands deeper networks, like last year's ImageNet winner ResNet.
There are two important findings in this work. The first is that we need convolution, at least for image recognition problems, and the second is that deeper is always better. Their results are decisive even on a small dataset like CIFAR-10.
They also include a concise paragraph explaining how to curate the best possible shallow networks from deep teachers:
– train state-of-the-art deep models
– form an ensemble from the best subset
– collect the predictions on a large enough transfer set
– distill the teacher ensemble's knowledge into a shallow network
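The steps above can be sketched roughly as follows. This is a minimal toy illustration with numpy, not the paper's setup: the "teachers", the transfer set, and the student are all synthetic stand-ins, and the student is trained on the ensemble's softened predictions with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    # temperature-scaled softmax; T > 1 softens the teacher's distribution
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(targets, probs):
    return -np.mean(np.sum(targets * np.log(probs + 1e-12), axis=1))

# stand-in "transfer set": unlabeled inputs the teachers label for us
X = rng.normal(size=(256, 10))

# stand-in teacher ensemble: three fixed random linear "models"
teachers = [rng.normal(size=(10, 3)) for _ in range(3)]
# average the teachers' logits to form the ensemble prediction
ensemble_logits = np.mean([X @ W for W in teachers], axis=0)
soft_targets = softmax(ensemble_logits, T=2.0)

# shallow student: one hidden layer, trained to match the soft targets
W1 = rng.normal(scale=0.1, size=(10, 32))
W2 = rng.normal(scale=0.1, size=(32, 3))

initial_ce = cross_entropy(soft_targets, softmax(np.tanh(X @ W1) @ W2))

lr = 0.5
for step in range(200):
    H = np.tanh(X @ W1)
    P = softmax(H @ W2)
    # gradient of cross-entropy w.r.t. the logits is (P - targets)
    G = (P - soft_targets) / len(X)
    dW2 = H.T @ G
    dW1 = X.T @ ((G @ W2.T) * (1 - H**2))
    W2 -= lr * dW2
    W1 -= lr * dW1

final_ce = cross_entropy(soft_targets, softmax(np.tanh(X @ W1) @ W2))
print(initial_ce, "->", final_ce)  # cross-entropy to the teacher drops
```

The key point is that the student never sees hard labels: it regresses onto the teacher ensemble's soft output distribution, which carries more information per example than a one-hot label.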
(If you would like to see more about how to apply the teacher–student paradigm successfully, refer to the paper. It gives a very comprehensive set of instructions.)
Still, as the experimental results also show, the best possible shallow network remains behind its deep counterpart.
I believe the success of deep versus shallow networks depends not on their theoretical capacity but on how the networks are trained in practice. If we think of networks as representation machines that compose finer details into coarser concepts, then learning a face without first knowing what an eye is does not seem tangible. Due to the one-way information flow of convolutional networks, this hierarchy of concepts persists and prevents shallow architectures from learning representations comparable to deep ones.
Then how can we train shallow networks comparable to deep ones, given such theoretical justification? I believe one way is to add intra-layer connections: connections from each unit of a layer to the other units of the same layer. These might be recurrent connections or literal lateral connections that give shallow networks a chance to learn higher abstractions.
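As a hypothetical sketch of this idea (the lateral weight matrix and iteration count are my own illustration, not something from the paper): a single hidden layer whose units also feed each other can be unrolled for a few steps, giving the layer an "effective depth" without stacking more layers.

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(scale=0.3, size=(8, 16))   # input -> hidden weights
L = rng.normal(scale=0.1, size=(16, 16))  # hidden -> hidden (intra-layer) weights
np.fill_diagonal(L, 0.0)                  # no self-connections

def lateral_layer(x, steps=3):
    h = np.tanh(x @ W)                    # plain feed-forward activation
    for _ in range(steps):
        # each unit refines its activation using its siblings' outputs,
        # while the feed-forward drive from x is kept fixed
        h = np.tanh(x @ W + h @ L)
    return h

x = rng.normal(size=(4, 8))
h_plain = np.tanh(x @ W)      # ordinary shallow layer
h_lateral = lateral_layer(x)  # same layer, with lateral refinement
```

Each unrolled lateral step plays a role similar to an extra layer, except that the parameters are shared, so the architecture stays "shallow" in parameter count while gaining compositional steps.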
Convolution is also clearly necessary. Although each filter is learned from the whole input, it is still receptive only to particular local patterns. This is not achievable with fully connected layers, since each of their units learns from the whole spatial range of the input.
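This locality is easy to demonstrate. In the sketch below (a toy 1-D example with random weights, just for illustration), perturbing an input element far away leaves a convolutional output position untouched, while a fully connected output at the same position changes.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv1d_valid(x, w):
    # 'valid' 1-D convolution (cross-correlation form): output position i
    # sees only the local window x[i : i + len(w)]
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

x = rng.normal(size=20)
w = rng.normal(size=3)            # a local filter shared across positions
W_fc = rng.normal(size=(20, 18))  # fully connected: every output sees all inputs

x2 = x.copy()
x2[-1] += 10.0                    # perturb an input far from position 0

y, y2 = conv1d_valid(x, w), conv1d_valid(x2, w)
print(np.allclose(y[0], y2[0]))                    # conv output 0 unaffected: True
print(np.allclose((x @ W_fc)[0], (x2 @ W_fc)[0]))  # FC output 0 changes: False
```

The same filter `w` is applied at every position, which is exactly the weight sharing that lets convolution detect local commonalities anywhere in the input.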