Decompose the network structure into two networks F and G, keeping a set of top layers T at the end. F is a small network and G is a more advanced one; thus F is cheap to execute but has lower performance compared to G.
In order to reduce the overall computation and embrace both the performance and the computation gains of the two networks, they suggest an incremental pass of the input data from F to G.
Network F decides the salient regions of the input by using gradient feedback, and these smaller regions are then sent to network G for better recognition performance.
Given an input image x, the coarse network F is applied and coarse representations of different regions of the input are computed. These coarse representations are propagated to the top layers T, and T computes the final output of the network, which is the class predictions. An entropy measure is used to see how each coarse representation affects the model's uncertainty: if a region is salient, we expect a large change of the uncertainty with respect to its representation.
We select the top k input regions as salient, guided by the computed entropy changes, and these regions are given to the fine network G to obtain finer representations. Eventually, we merge all the coarse and fine representations, feed them to the top layers T again, and get the final predictions.
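A minimal numpy sketch of the selection step, assuming we already have per-region coarse class probabilities. The paper ranks regions by the entropy's sensitivity to each region's representation; this toy version uses a cruder entropy-difference proxy, so treat it as an illustration only:

```python
import numpy as np

def entropy(p):
    # Shannon entropy of a probability vector
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def select_salient_regions(coarse_probs, full_probs, k):
    """Rank regions by how much each region's coarse prediction
    deviates (in entropy) from the full prediction, and return the
    indices of the top-k most salient regions. This is a stand-in
    for the gradient-based entropy measure in the paper."""
    base = entropy(full_probs)
    scores = [abs(entropy(region_p) - base) for region_p in coarse_probs]
    # sort descending by score, keep the k most salient regions
    return np.argsort(scores)[::-1][:k]
```

For example, a region whose coarse prediction is confidently peaked stands out against an uncertain full prediction and gets selected first.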
At training time, all networks and layers are trained simultaneously. However, one might still decide to train the networks F and G separately using the same top layers T. The authors posit that simultaneous training is useful to keep the fine and coarse representations similar, so that the final layers T do not struggle too much to learn from two different representation distributions.
I only try to give the overall idea here; if you would like to see more detail and dig into the formulas, please see the paper.
My discussion: There are some other works using attention mechanisms to improve the final performance. However, this work is limited to small datasets and small spatial dimensions. I would really like to see whether it is also useful for large problems like ImageNet, or even larger ones.
Another caveat is that the datasets used for the experiments are not so cluttered. Therefore, it is easy to detect the salient regions, even with some simple algorithmic techniques. Thus, it is still unclear to me how this method would behave in real-life problems.
There is a theoretical proof that any one-hidden-layer network with enough sigmoid units is able to learn any decision boundary. Empirical practice, however, tells us that learning good data representations demands deeper networks, like last year's ImageNet winner ResNet.
There are two important findings of this work. The first is that we need convolution, at least for image recognition problems, and the second is that deeper is always better. Their results are decisive even on a small dataset like CIFAR-10.
They also give a good little paragraph explaining a good way to curate the best possible shallow networks based on deep teachers.
– train state-of-the-art deep models
– form an ensemble from the best subset
– collect the predictions on a large enough transfer set
– distill the teacher ensemble's knowledge into the shallow network.
(if you would like to see more about how to apply the teacher–student paradigm successfully, refer to the paper. It gives a very comprehensive set of instructions.)
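The steps above can be sketched with a toy linear "student" regressing the ensemble's averaged logits. This is a stand-in for the logit-matching flavour of distillation (the real recipe trains a shallow net on a large transfer set); the shapes and models here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "transfer set": unlabeled inputs scored by the teacher ensemble
X = rng.normal(size=(200, 5))

# stand-in teacher ensemble: average the logits of a few linear "models"
teachers = [rng.normal(size=(5, 3)) for _ in range(4)]
teacher_logits = np.mean([X @ W for W in teachers], axis=0)

# shallow student: a single linear layer trained to regress the
# teacher's logits with plain gradient descent (L2 matching loss)
W_s = np.zeros((5, 3))
for _ in range(500):
    pred = X @ W_s
    grad = X.T @ (pred - teacher_logits) / len(X)
    W_s -= 0.1 * grad

mse = np.mean((X @ W_s - teacher_logits) ** 2)
```

Because the student here can represent the averaged teacher exactly, the matching loss goes to zero; a real shallow net only approximates its deep teacher, which is the whole point of the paper's comparison.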
Still, as shown by the experimental results, the best possible shallow network still falls behind its deep counterpart.
I believe the success of deep versus shallow depends not on the theoretical basis but on the practical way the networks learn. If we think of networks as representation machines that refine coarse concepts into finer details, then something like learning a face without knowing what an eye is does not seem tangible. Due to the one-way information flow of convolutional networks, this hierarchy of concepts stays, and it disables shallow architectures from learning comparably to deep ones.
Then how can we train shallow networks to be comparable to deep ones, given such theoretical justifications? I believe one way is to add intra-layer connections, that is, connections from each unit of a layer to the other units of that layer. These might be recursive connections or just literal connections that give shallow networks the chance of learning higher abstractions.
Convolution is also obviously necessary. Although we learn each filter from the whole input, each filter is still receptive to particular local commonalities. This is not doable with fully connected layers, since they learn from the whole spatial range of the input.
ML on imbalanced data
given an imbalanced learning problem with a large class and a small class with N and M instances respectively;
- Cluster the large class into M clusters and use the cluster centers for training the model.
- If it is a neural network or some compatible model, cluster the large class into K clusters and use these clusters as pseudo classes to train your model. This method is also useful when training your network on a problem with a small number of classes, since it pushes your net to learn fine-detailed representations.
- Divide the large class into subsets of M instances, then train multiple classifiers and use their ensemble.
- Hard mining is a solution which is unfortunately akin to over-fitting but yields good results in some particular cases, such as object detection. The idea is to select the most confusing instances from the large set per iteration. Thus, select the M most confusing instances from the large class, use them for that iteration, and repeat for the next iteration.
- Especially for batch learning, frequency-based batch sampling might be useful. For each batch you can sample instances so that the small class is picked with probability N/(M+N) and the large class with M/(M+N), so that you prioritize the small-class instances for the next batch. As long as you use data augmentation techniques, as in CNN models, mostly repeating the instances of the small class is not a big problem.
Note on metrics: the plain accuracy rate is not a good measure for such problems, since you see very high accuracy if your model just predicts the large class for all instances. Instead, prefer the ROC curve, or keep watching precision and recall.
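A sketch of the frequency-based batch sampling idea, assuming the intent is inverse-frequency weighting (the small class sampled with probability N/(M+N), so it is over-represented relative to its natural frequency); the data and shapes are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(large_X, small_X, batch_size, p_small):
    """Fill a batch, picking each slot from the small class with
    probability p_small and from the large class otherwise."""
    batch = []
    for _ in range(batch_size):
        if rng.random() < p_small:
            batch.append(small_X[rng.integers(len(small_X))])
        else:
            batch.append(large_X[rng.integers(len(large_X))])
    return np.array(batch)

# toy imbalanced data: N = 900 large-class rows (all zeros),
# M = 100 small-class rows (all ones), kept in separate arrays
large_X = np.zeros((900, 3))
small_X = np.ones((100, 3))
N, M = len(large_X), len(small_X)

# inverse-frequency probability for the small class: N / (M + N)
batch = make_batch(large_X, small_X, batch_size=1000, p_small=N / (M + N))
small_fraction = np.mean(batch[:, 0] == 1.0)
```

With these numbers roughly 90% of each batch comes from the small class; in practice you would tune the probability rather than take the exact inverse frequency.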
Please keep me updated if you know something more. Even though this is a very common issue in practice, it is still hard to find a working solution.
In this post, I would like to compute how many visual instances we observe over a lifetime, with the assumption that we visually perceive life as a constant video with a certain fps rate.
Let’s dive into the computation. From what I could find, the average person sees the world at 45 fps on average. It goes to extremes for people like fighter pilots, up to 225 fps with the adrenaline kicked in. I took the average lifetime as 71 years, which equals roughly 2.24 billion seconds, and we are awake almost 2/3 of it, which makes roughly 1.49 billion seconds. Then we assume that there are on average 86 billion neurons in our brain. This is our model size. Eventually and roughly, that means, without any further investigation, we have a model with 86 billion parameters which learns from almost 67 billion images.
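The arithmetic, as a quick Python check:

```python
# back-of-the-envelope numbers from the post
years = 71
fps = 45
awake_fraction = 2 / 3

seconds_alive = years * 365 * 24 * 60 * 60      # ~2.24 billion secs
seconds_awake = seconds_alive * awake_fraction  # ~1.49 billion secs
images_seen = seconds_awake * fps               # ~67 billion images
```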
Of course this is not a rigorous way to come up with these numbers, but fun comes by ignorance 🙂
In this work, they propose two related problems and come up with a simple but functional solution to them. The problems are;
- Learning object locations on the image with the Proposal + Classification approach is very tiresome, since it needs to classify >1000 patches per image. Therefore, end-to-end pixel-wise segmentation is a better solution, as proposed by FCN (Long et al. 2014).
- FCN overlooks contextual information, since it predicts the object of each pixel independently. Therefore, even if the thing in the image is a cat, there might be unrelated predictions for different pixels. They solve this by applying a Conditional Random Field (CRF) on top of FCN. This is a way to consider context by using pixel relations. Nevertheless, it is still not a method that can learn end-to-end, since the CRF needs an additional learning stage after FCN.
Based on these two problems, they propose the ParseNet architecture. It captures contextual information by looking at each channel's feature map and aggregating the activation values. These aggregates are then merged and appended to the final features of the network, as depicted below;
Their experiments confirm the effectiveness of the additional contextual features. Yet there are two important points to consider before using these features together. Due to the scale differences between the activations of different layers, one needs to normalize each layer first and then append them together. They L2-normalize each layer's features. However, this results in very small feature values, which also hinders the network from learning quickly. As a cure, they learn a scale parameter for each feature, as used in the Batch Normalization method, so they first normalize the values and then scale them with scaling weights learned from the data.
My takeaway from this paper is that adding intermediate layer features improves the results, given a correct normalization framework, and as we add more layers, the network becomes more robust to local changes thanks to the context defined by the aggregated features.
They use VGG16 and fine-tune it for their purpose; the VGG net does not use Batch Normalization. Therefore, using Batch Normalization from the start might evade the need for the additional scale parameters, and maybe even the L2 normalization of the aggregated features, because Batch Normalization already scales and shifts the feature values into a common norm.
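A rough numpy sketch of the normalize-then-scale trick: `gamma` stands in for the learned per-channel scale weights (the real ParseNet learns them by backprop), and the feature layout is a simplification:

```python
import numpy as np

def l2_normalize_and_scale(feat, gamma, eps=1e-12):
    """ParseNet-style per-channel L2 normalization followed by a
    learned per-channel scale.
    feat: (channels, positions) activation map."""
    norm = np.sqrt(np.sum(feat ** 2, axis=1, keepdims=True)) + eps
    return (feat / norm) * gamma[:, None]

# toy features from two layers with very different scales
layer_a = np.array([[100.0, 0.0], [0.0, 100.0]])
layer_b = np.array([[0.01, 0.0], [0.0, 0.01]])

gamma = np.ones(2)  # initial scale weights, to be learned
a = l2_normalize_and_scale(layer_a, gamma)
b = l2_normalize_and_scale(layer_b, gamma)

# after normalization both layers live on the same unit scale,
# so their features can be appended safely
merged = np.concatenate([a, b], axis=0)
```

Without the normalization, concatenating `layer_a` and `layer_b` directly would let the large-scale layer dominate; the learned `gamma` then restores useful magnitudes, which is exactly the problem the scale parameters address.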
Note: this is a hastily written article, sorry for any inconvenience, mistake, or clumsily written sentence.
After some crawling on the Internet, I stumbled upon this thread on Quora. For the lazy ones, the thread is about the things that can be done by humans but not by computers after N years. There are many references to the Turing Test in the answers, stating that the best AI is still not able to pass the Turing Test; therefore we do not need to worry about AI being an existential threat to humanity. First off, I ought to say that I am on the cautious side (like Elon Musk and Bill Gates) regarding AI being a threat. To explain myself, I would like to show that AI is a threat that has already begun to affect us, even if we take the Turing Test as the validation method. We only need to think about the test in a different way.
For those who don't know what the Turing Test is: A and B (one machine, one human) are hidden from the human observer C. Looking at the interaction between A and B, the observer C tries to decide which one is the human and which is the machine. If observer C cannot decide whether there is a machine or a human behind the curtain, then the machine passes the test. The conclusion is that the machine exhibits intelligent behavior equivalent to, or indistinguishable from, that of a human.
By this definition, it is one of the legitimate milestones for AI on the way to human-capable agents. Therefore, it is normal for people to evaluate present AI, and to define its state and future potential, using the Turing Test.
I propose a different formulation of the Turing Test where we replace the observer C with a machine as well. Then the remaining question turns out to be: is the machine C able to identify the machine A, or is this identification even necessary henceforth? Thinking about the formulation in that way resolves many concerns of the AI supporters who say AI is not a threat since it does not, and will not, pass the Turing Test (at least in the short run). Nevertheless, when we replace C with a machine, then the machine does not need to pass the Turing Test to be a threat, right? Because we are out of the context, like poor B depicted in the above figure.
Now let me explain what it means in practice to replace the human observer with a machine. I believe real-life "communication" is a good way to illustrate the Turing Test. Think about the history of communication. We started with barefoot messengers and have come to the light-speed flow of today's world. Back then, we would send a message and wait very long for the response; the tools were the bottleneck of communication. First we expedited these tools and came up with new technologies. If we look at today, we see that our tools are so fast that we are now the bottleneck of the flow. We send our mails and messages in a second, which bursts the inboxes and message stacks, and consequently bursts us as well.

If we also accept that communication is the backbone of today's business world, companies do not want to waste time (time is money) and attempt to replace the slowest part with faster alternatives, so computerized solutions come on stage in place of the old-fashioned humanized ones. Now, after changing the tools of communication, we are also starting to change the sides of the communication, up to a point where there is no need for any human being at all. We even have a fancy name for this: the Internet of "Things" (not humans anymore). If you look at the statistics, you see that a huge portion of the data flow is machine-to-machine communication. Could you say that, at such an immense level of communication revolution, the indistinguishability of a computer agent by a human observer is important? It is clear that we can still devastate our lives with our AI agents without them passing the Turing Test. Just watch the unemployment rates alongside the growth of technological solutions.
Basically, what I am trying to say here is: yes, the Turing Test is a barrier for a sci-fi level AI threat, but we changed the rules of the test by placing machines on both sides of the curtain. That means there is no place in that test (or even in real life) for a human, unless some silly machine cannot yet replace you; but be sure, that is yet to come.
As a final word, I am an AI guy and of course I am not saying we should stop, but it is an ominously proceeding field. The punch line here is to underline the need for introspection of AI and related technologies, and for finding ways to make AI serve human needs, not the contrary or any other way. We should be skeptical and stay warned.
This work proposes a way to integrate first-order logic rules with neural network structures. It enables incorporating expert knowledge into the workhorse deep neural networks. To be more specific, given a sentiment analysis problem, you know that if there is a "but" in the sentence, the sentiment changes direction along the sentence. Such rules are harnessed by the network.
The method combines two precursor ideas: knowledge distillation [Hinton et al. 2015] and posterior regularization [Ganchev et al. 2010]. We have teacher and student networks that learn simultaneously. The student network directly uses the labelled data and learns a model distribution P; then, given the logic rules, the teacher network adapts a distribution Q, keeping it close to P but within the constraints of the given logic rules. That projects what is inside P onto a distribution Q bounded by the logic rules, as the figure below suggests.
I don't want to go deep into the math, since my main purpose is to give the intuition rather than the formulation. However, the formulation expresses the first-order logic rules in a form suitable for a loss function. The student loss is then defined by the real network loss (cross-entropy) plus the loss of the logic rules, with an importance weight.
Here, the first part of the loss is the network loss over the student model weights, and the second part is the logic loss. This function distills the information adapted by the given rules into the student network.
The teacher network exploits KL divergence to approximate the best Q that is close to P, with a slack variable.
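As far as I recall the paper's formulation (a hedged reconstruction; see Hu et al. 2016 for the exact terms), the two objectives look roughly like:

```latex
% student objective: data loss plus imitation of the teacher's
% rule-projected soft predictions s_n, balanced by a weight \pi
\theta^{(t+1)} = \arg\min_{\theta} \frac{1}{N}\sum_{n=1}^{N}
  (1-\pi)\,\ell\big(y_n, \sigma_\theta(x_n)\big)
  + \pi\,\ell\big(s_n^{(t)}, \sigma_\theta(x_n)\big)

% teacher construction: stay close to p_\theta (KL term) while
% satisfying each rule r_l up to slack \xi_l, weighted by \lambda_l
\min_{q,\;\xi \ge 0}\;
  \mathrm{KL}\big(q(Y\mid X)\,\|\,p_\theta(Y\mid X)\big)
  + C \sum_{l} \lambda_l \xi_l
\quad \text{s.t.} \quad
  \mathbb{E}_{q}\big[\,1 - r_l(X, Y)\,\big] \le \xi_l
```

The first line is the distillation part (student imitating the teacher), and the second is the posterior-regularization part (teacher projecting P into the rule-constrained space).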
So the whole algorithm is as follows;
For the experiments and use cases of this algorithm, please refer to the paper. They show promising results on sentiment classification with convolutional networks by defining such "but" rules for the network.
My takeaway is that it is perfectly possible to use expert knowledge with the wild deep networks. I guess the recent trend of deep learning shows the same promise. It seems like our wild networks are becoming an efficient learning and inference tool for large probabilistic graphical models, through variational methods and such rule-imposing methods. Still, such expert knowledge is tenuous in the domain of image recognition problems.
Disclaimer: this was written hastily without any review, therefore it is far from complete, but it targets the intuition of the work to make it memorable for later use.
32× memory saving and a 58× faster convolution operation. Only 2.9% performance loss (Top-1) with the Binary-Weight version of AlexNet compared to the full-precision version. Binarizing both inputs and weights, XNOR-Net, widens the gap to 12.5%.
When the weights are binary, the convolution operation can be approximated by only summation and subtraction. Binary-Weight networks can fit into mobile devices with a 2× speed-up on the operations.
To take the idea further, XNOR-Net uses both binary weights and binary inputs. When both of them are binary, convolution can be computed with XNOR and bit-count operations. This enables CPU-time inference and even training of state-of-the-art models.
Here they give a good summary of methods for compressing models into smaller sizes.
- Shallow networks — approximate deep models with shallower architectures using different methods like knowledge distillation.
- Compressing networks — compression of larger networks.
- Weight Decay 
- Optimal Brain Damage 
- Optimal Brain Surgeon 
- Deep Compression 
- Design compact layers — keep the network minimal from the beginning
- Decomposing 3×3 layers into two 1×1 layers 
- Replacing 3×3 layers with 1×1 layers, achieving 50% fewer parameters.
- Quantization of parameters — high precision is not so important for good results in deep networks 
- 8-bit values instead of 32-bit float weight values 
- Ternary weights and 3-bits activation 
- Quantization of layers with L2 loss 
- Network binarization —
- Expectation Backpropagation 
- Binary Connect 
- BinaryNet 
- Retraining of a pre-trained model 
The Binary-Weight-Net is defined as an approximation of the real-valued layers: W ≈ αB, where α = ‖W‖₁/n is a scaling factor and B = sign(W) ∈ {+1, −1}ⁿ. Since the values are binary, we can perform the convolution operation with only summation and subtraction.
With the details given in the paper:
Training of a Binary-Weight-Net includes 3 main steps: the forward pass, the backward pass, and the parameter update. In both the forward and backward stages the weights are binarized, but for the updates the real-valued weights are used, to keep the small changes effective enough.
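A tiny numpy sketch of the binarization used in the forward pass, with B = sign(W) and alpha = mean(|W|), the closed-form ℓ1-based scale from the paper (the filter here is a random toy example):

```python
import numpy as np

def binarize_weights(W):
    """Approximate a real-valued filter W by alpha * B, where
    B = sign(W) and alpha = mean(|W|)."""
    B = np.where(W >= 0, 1.0, -1.0)
    alpha = np.mean(np.abs(W))
    return alpha, B

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
alpha, B = binarize_weights(W)

# the binary approximation keeps the sign pattern and overall scale
approx = alpha * B
err = np.linalg.norm(W - approx) / np.linalg.norm(W)
```

In training, this `approx` would be used in the forward and backward passes while the update step still modifies the real-valued `W`, exactly as the three steps above describe.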
At this stage, the idea is extended and the input values are also binarized, to reduce the cost of the convolution operation by using only the binary operations XNOR and bit-count. Basically, the input values are binarized in the same way as the weight values: the sign operation is used for the binary mapping, and the scale values are estimated by the ℓ1 norm of the input values.
The convolution is then approximated as I ∗ W ≈ (sign(I) ⊛ sign(W)) ⊙ Kα, where K is the matrix of scale values, sign(I) is the binary mapping of the input, and ⊛ denotes the convolution computed with XNOR and bit-count.
I am too lazy to go into much more detail. For more on the implementation details, have a look at the paper.
For such works, it is always a pain to replicate the results. I hope they will release some code to serve as a basis. Other than this, using such tricks to compress gargantuan deep models into more moderate sizes is very useful for small groups that have no GPU back-end like the big companies, or for deploying such models on small computing devices. Given such a boost in computing time and the small memory footprint, it is tempting to train such models as a big ensemble and compare against a single full-precision model.