Think "Turing Test" in another way ?

After some crawling on the Internet, I stumbled upon this thread on Quora. For the lazy ones: the thread is about the things that humans will still be able to do, but computers will not, N years from now. Many answers refer to the Turing Test, stating that the best AI is still not able to pass it; therefore we do not need to worry about AI being an existential threat to humanity. First off, I should say that I am on the cautious side (like Elon Musk and Bill Gates) regarding AI being a threat. To explain myself, I would like to show that AI is a threat that has already begun to affect us, even if we take the Turing Test as the validation method. We only need to think about the test in a different way.

For the ones who don't know what the Turing Test is: A and B (one machine, one human) are hidden from the human observer C. Looking at the interactions with A and B, the observer C tries to decide which one is the human and which one is the machine. If observer C cannot tell whether there is a machine or a human behind the curtain, then the machine passes the test. The conclusion is that the machine exhibits intelligent behavior equivalent to, or indistinguishable from, that of a human.

A and B are hidden from the observer C; given the interaction between the observer and the entity behind the curtain, the observer tries to decide whether there is a machine or a human behind that curtain.

By this definition, the test is one of the legitimate milestones for AI on the way to human-level agents. Therefore, it is natural for people to use the Turing Test to evaluate present AI, its current state and its future potential.

I propose a different formulation of the Turing Test, where we replace the observer C with a machine as well. Then the remaining question becomes: is the machine C able to identify the machine A, and is this identification even necessary any more? Thinking about the formulation in that way resolves many of the objections of AI supporters who say AI is not a threat since it does not, and will not, pass the Turing Test (at least in the short run). Nevertheless, when we replace C with a machine, the machine A does not need to pass the Turing Test to be a threat, right? Because we are out of the picture, like poor B depicted in the figure above.

Now let me explain what it means in practice to replace the human observer with a machine. I believe real-life "communication" is a good way to illustrate it. Think about the history of communication. We started with barefoot messengers and have arrived at the light-speed flows of today's world. Back then, we sent a message and waited a very long time for the response: the tools were the bottleneck of communication. First we sped these tools up and came up with new technologies. If we look at today, our tools are so fast that we are now the bottleneck of the flow. We send our mails and messages in a second, which bursts the inboxes and message queues and consequently bursts us as well. If we also accept that communication is the backbone of today's business world, companies do not want to waste time (time is money) and try to replace the slowest part with faster alternatives, so computerized solutions take the stage in place of old-fashioned human ones. Now, after changing the tools of communication, we have also started to change the two sides of the communication, up to a point where there is no need for any human being at all. We even have a fancy name for this: the Internet of "Things" (not of humans any more). If you look at the statistics, a huge portion of the data flow is machine-to-machine communication. At this much larger scale of the communication revolution, does the indistinguishability of a computer agent by a human observer really matter? It is clear that our AI agents can devastate our lives without ever passing the Turing Test. Just watch the unemployment rates as technological solutions grow.

 

from https://www.technologyreview.com/s/538401/who-will-own-the-robots/

Basically, what I am trying to say is: yes, the Turing Test is a barrier for the sci-fi level of AI threat, but we have changed the rules of the test by placing machines on both sides of the curtain. That means there is no place in that test (or even in real life) for a human, unless no machine can replace you yet; but be sure, that machine is yet to come.

As a final word: I am an AI guy and of course I am not saying we should stop, but it is an ominously advancing field. The punch line here is to underline the need for introspection about AI and related technologies, and for finding ways to make AI serve human needs, not the contrary or any other way around. We should be skeptical and stay warned.



Harnessing Deep Neural Networks with Logic Rules

paper: http://arxiv.org/pdf/1603.06318v1.pdf

This work posits a way to integrate first-order logic rules with neural network structures. It enables incorporating expert knowledge into the workhorse deep neural networks. To be more specific: in a sentiment analysis problem, you know that if there is a "but" in the sentence, the sentiment changes direction along the sentence. Such rules are harnessed by the network.

The method combines two precursor ideas: knowledge distillation [Hinton et al. 2015] and posterior regularization [Ganchev et al. 2010]. We have a teacher and a student network, and they learn simultaneously. The student network directly uses the labelled data and learns the model distribution P; then, given the logic rules, the teacher network adapts a distribution Q, keeping it close to P but within the constraints of the given logic rules. That projects what is inside P onto the distribution Q bounded by the logic rules, as the figure below suggests.

I don't like to go into deep math since my main purpose is to give the intuition rather than the formulation. The formulation, however, casts first-order logic rules into a form suitable for a loss function. The student loss is then defined as the real network loss (cross-entropy) plus the loss of the logic rules, weighted by an importance coefficient.

    \[ \theta^{(t+1)} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \Big[ (1-\pi)\,\ell\big(y_i, p_\theta(x_i)\big) + \pi\,\ell\big(s_i^{(t)}, p_\theta(x_i)\big) \Big] \]

where \(\theta\) is the student model weight, \(\ell\) is the cross-entropy loss, \(s_i^{(t)}\) is the teacher network's rule-constrained soft prediction for instance \(i\), and \(\pi\) is the importance weight. The first part of the loss is the network loss and the second part is the logic loss. This function distills the information adapted by the given rules into the student network.

The teacher network uses KL divergence to find the best Q that stays close to P, subject to the rule constraints with a slack variable.

Since the problem is convex, the solution can be found in its dual form with a closed-form solution, as below.

    \[ q^*(y \mid x) \;\propto\; p_\theta(y \mid x)\, \exp\Big\{ -\sum_{l} C\,\lambda_l\, \big(1 - r_l(x, y)\big) \Big\} \]

where \(r_l(x, y) \in [0, 1]\) is the truth value of the \(l\)-th logic rule, \(\lambda_l\) its confidence and \(C\) the regularization strength.

So the whole algorithm is as follows;

For the experiments and use cases of this algorithm, please refer to the paper. They show promising results on sentiment classification with convolutional networks by feeding such BUT rules to the network.
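To make the whole training cycle concrete, here is a minimal NumPy sketch of how I read the procedure. The toy linear student, the random "rule" scores and all variable names are my own illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy student: a linear softmax classifier over random "bag-of-words" features.
rng = np.random.RandomState(0)
n, d, k = 200, 50, 2                       # instances, features, classes
X = rng.randn(n, d)
y = np.eye(k)[rng.randint(0, k, size=n)]   # one-hot true labels
r = rng.rand(n, k)                         # rule truth values r_l(x, y) in [0, 1]; a real rule
                                           # would e.g. flip sentiment after a "but" clause
W = 0.01 * rng.randn(d, k)
C, lam, lr = 1.0, 1.0, 0.1

for t in range(100):
    pi = min(0.95, 1.0 - 0.9 ** t)         # importance weight ramps up over iterations
    p = softmax(X @ W)                     # student distribution P
    q = p * np.exp(-C * lam * (1.0 - r))   # teacher distribution Q: closed-form projection
    q /= q.sum(axis=1, keepdims=True)      # of P onto the rule-constrained region
    target = (1.0 - pi) * y + pi * q       # mix of true labels and teacher soft labels
    W -= lr * X.T @ (p - target) / n       # gradient step on the combined cross-entropy
```

The gradient of the two cross-entropy terms with respect to the logits collapses into the single `p - target` expression, which is why the mixed target is enough for this sketch.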

My takeaway is that it is perfectly possible to combine expert knowledge with wild deep networks. I guess the recent trend in deep learning shows the same promise: our wild networks are becoming an efficient learning and inference engine for large probabilistic graphical models, through variational methods and such rule-imposing methods. Still, such expert knowledge is tenuous in the domain of image recognition problems.

Disclaimer: this was written hastily, without any review, so it is far from complete, but it targets the intuition of the work to make it memorable for later use.


XNOR-Net

paper: http://arxiv.org/pdf/1603.05279v1.pdf

32x memory saving and a 58x faster convolution operation. Only 2.9% performance loss (Top-1) with the Binary-Weight version of AlexNet compared to the full-precision version. With both input and weight binarization, XNOR-Net, the gap grows to 12.5%.

When the weights are binary, the convolution operation can be approximated by only summations and subtractions. Binary-Weight networks can fit into mobile devices with a 2x speed-up on the operations.

To take the idea further, XNOR-Net uses both binary weights and binary inputs. When both are binary, convolution can be performed with XNOR and bitcount operations. This enables inference, and even training, of state-of-the-art models in CPU time.

Here they give a good summary of compressing models into smaller sizes.

  1. Shallow networks — approximate deep models with shallower architectures using different methods like knowledge distillation.
  2. Compressing networks — compression of larger networks.
    1. Weight Decay [17]
    2. Optimal Brain Damage [18]
    3. Optimal Brain Surgeon [19]
    4. Deep Compression [22]
    5. HashNets[23]
  3. Design compact layers — keep the network minimal from the beginning
    1. Decomposing 3×3 layers to 2 1×1 layers [27]
    2. Replace 3×3 layers with 1×1 layers, achieving 50% fewer parameters.
  4. Quantization of parameters — High precision is not so important for good results in deep networks [29]
    1. 8-bit values instead of 32-bit float weight values [31]
    2. Ternary weights and 3-bits activation [32]
    3. Quantization of layers with L2 loss  [33]
  5. Network binarization —
    1. Expectation Backpropagation [36]
    2. Binary Connect [38]
    3. BinaryNet [11]
    4. Retraining of a pre-trained model [41]

 

Binary-Weight-Net

Binary-Weight-Net is defined as an approximation of the real-valued layers as

    \[ W \approx \alpha B \]

where \(\alpha\) is a scaling factor and \(B \in \{+1, -1\}\). Since the values are binary, we can perform the convolution operation with only summations and subtractions:

    \[ I * W \approx (I \oplus B)\,\alpha \]

With the details given in the paper,

    \[ B = \mathrm{sign}(W) \qquad \text{and} \qquad \alpha = \frac{1}{n}\,\|W\|_{\ell 1}. \]

Training of Binary-Weight-Nets includes 3 main steps: the forward pass, the backward pass and the parameter update. In both the forward and backward stages the weights are binarized, but for the updates the real-valued weights are used, to keep the small gradient changes effective.
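As a rough sketch of that cycle (my own NumPy toy for a single fully-connected layer, not the authors' code; the layer shape, the MSE loss and all names are illustrative assumptions):

```python
import numpy as np

def binarize(W):
    """Binary weights B = sign(W) and scale alpha = mean(|W|), i.e. (1/n) * l1 norm."""
    alpha = np.abs(W).mean()
    B = np.where(W >= 0, 1.0, -1.0)      # keep B strictly in {-1, +1}
    return B, alpha

rng = np.random.RandomState(0)
X = rng.randn(32, 64)                    # a mini-batch of 32 inputs with 64 features
y = rng.randn(32, 10)                    # toy regression targets, just to have a loss
W_real = 0.01 * rng.randn(64, 10)        # real-valued weights, kept only for the updates
lr = 0.01

for step in range(100):
    B, alpha = binarize(W_real)          # 1) forward pass uses the binarized weights
    out = X @ (alpha * B)
    grad_out = 2.0 * (out - y) / len(X)  # gradient of a simple MSE loss
    grad_W = X.T @ grad_out              # 2) backward pass; a simplified straight-through
                                         #    estimate, not the paper's exact correction
    W_real -= lr * grad_W                # 3) update the real-valued weights
```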

Binary-Weight-Net training cycle

 

XNOR-Networks

At this stage, the idea is extended and the input values are also binarized, reducing the convolution cost to only the binary operations XNOR and bitcount. Basically, the input values are binarized in the same way as the weight values: the sign operation is used for the binary mapping, and the scale values are estimated by the l1 norm of the input values.

    \[ C = \mathrm{sign}(X^T)\,\mathrm{sign}(W) = H^T B \]

    \[ \gamma \approx \Big(\frac{1}{n}\|X\|_{\ell 1}\Big)\Big(\frac{1}{n}\|W\|_{\ell 1}\Big) = \beta\alpha \]

where \(\gamma\) is the scale vector and \(C\) is the binary mapping of the feature map after the convolution.
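As a toy illustration of the arithmetic behind that approximation (my own sketch with made-up sizes, unpacked rather than bit-packed): on {-1, +1} vectors the XNOR-plus-bitcount operation is just the plain dot product of the signs, scaled by the two l1-based factors.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(256)                     # real-valued input patch (flattened)
w = rng.randn(256)                     # real-valued filter

beta, alpha = np.abs(x).mean(), np.abs(w).mean()
h, b = np.sign(x), np.sign(w)          # binary input H and binary weights B

# xnor(a, b) on {-1, +1} values equals a * b, and the "bitcount" of the matches
# minus mismatches is just the dot product of the sign vectors.
xnor_dot = np.sum(h * b)

approx = beta * alpha * xnor_dot       # XNOR-Net style approximation of x . w
exact = float(x @ w)
print(f"exact: {exact:.3f}  approx: {approx:.3f}")
```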

I am too lazy to go into much more detail. For more, and for implementation details, have a look at the paper.

For such works, it is always a pain to replicate the results. I hope they will release some code to serve as a basis. Other than that, using such tricks to compress gargantuan deep models into more moderate sizes is very useful for small groups who lack the GPU back-ends of the big companies, or for deploying such models onto small computing devices. Given such a boost in computing time and the small memory footprint, it is tempting to train these models as a big ensemble and compare it against a single full-precision model.


Error-Driven Incremental Learning with Deep CNNs

paper link

This paper posits a way of incrementally training a network when you have a continuous flow of new data categories. They point out two main problems in this setting. First, with an increasing number of classes we need higher-capacity networks, which are harder to train than small networks; therefore, starting with a small network and gradually increasing its size seems feasible. Second, it is better to expand the network than to merely reuse the already learned features for the new tasks. For instance, if you would like to use a pre-trained ImageNet network for your specific problem, using it as a sole feature extractor does not reflect the real potential of the network; letting it keep training on the new data is the better choice.

They also recall the forgetting problem when new data is presented to a pre-trained model: the already learned features are forgotten as the model adapts to the new data and the new problem.

The proposed method relies on tree-like network structures, as the figure below depicts. The algorithm starts with a pre-trained network L0 with K superclasses. When we add new classes (depicted green), we clone network L0 into leaf networks L1 and L2 and a branching network B; that is, all of the new networks are exact clones of L0. B is the branching network which decides which leaf network is activated for a given instance, and the activated leaf network then produces the final prediction for that instance.

To partition the classes, the idea is to keep the more confusable classes together, so that the later stages of the network can resolve this confusion. Any new set of classes, with its corresponding instances, is passed through the already trained networks, and the network that is most active in its softmax outputs is selected as the one to which the new category is appended. Another way to increase the number of categories is to add the new categories to the output layer only, keeping the network capacity the same. When we do need to increase the capacity, we branch the network again and this network then stays as a branching network. To decide the leaf network following that branching network, we sum the confidence values of the classes of each leaf network, and the network with the maximum confidence is selected as the leaf network.
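A minimal sketch of that routing decision, as I understand it (names and shapes are mine, not the paper's): given the softmax output of a branching network, sum the probabilities over the classes each leaf owns and route the instance to the leaf with the largest summed confidence.

```python
import numpy as np

def select_leaf(branch_probs, leaf_classes):
    """branch_probs: softmax output of the branching network, shape (num_classes,).
    leaf_classes: dict mapping leaf name -> list of class indices it owns."""
    confidences = {leaf: branch_probs[idx].sum() for leaf, idx in leaf_classes.items()}
    return max(confidences, key=confidences.get)

# Example: a branching network over 6 classes split across two leaf networks.
probs = np.array([0.05, 0.40, 0.10, 0.05, 0.30, 0.10])
leaves = {"L1": [0, 1, 2], "L2": [3, 4, 5]}
print(select_leaf(probs, leaves))   # -> "L1" (summed confidence 0.55 vs 0.45)
```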


During all these processes, every parameter is transferred from a branching network to the leaf networks, unless there is a mismatch between the category units; only these mismatched parameters are initialized randomly.

This work proposes a good approach for a learning architecture that scales as new categories arrive over time. It considers both how to add new categories and how to increase the network capacity in a guided manner. Another good side of this architecture is that each of these networks can be trained independently, so the training process can be parallelized.

 


My Notes – Weight Normalization

Deep Learning is defined (Goodfellow et al., 2016) as a sub-field of machine learning that consists in learning models which are wholly or partially specified by a class of flexible differentiable functions.
In this study there are three main contributions: Weight Normalization, a new data-dependent initialization method, and Mean-Only Batch Normalization.
Weight normalization is formalized as below: the weight vector w is decoupled into its norm g and its direction v / ||v||,

    \[ w = \frac{g}{\|v\|}\, v \]

In this way, they propose, SGD converges faster.
They compare Weight Normalization with Batch Normalization. The main disadvantage of BN that they point out is its stochasticity due to the varying data batches; an additional difference is that WN has a lower computational burden than BN.
The second perk is the data-dependent initialization of the network. They first feed an initial minibatch to the network and compute the mean activation and std per layer. Then, given initial weight values sampled with mean 0 and std 0.05, they set g = 1 / std and b = -mean / std.
One downside is that, since this scheme is batch dependent, it might suffer on forthcoming batches with different data statistics. However, they say that the scheme works well in practice.
The  third perk is Mean Only Batch Normalization.
This is a lighter operation, owing to the avoidance of variance normalization. We can skip variance normalization because the initialization scheme has already taken care of it. Another upside is that avoiding variance normalization gives less distracted gradient feedback and therefore better learning.
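A small NumPy sketch of how I read these three pieces for a single dense layer (layer sizes and variable names are my assumptions; this is not the authors' code):

```python
import numpy as np

rng = np.random.RandomState(0)
n_in, n_out = 64, 32
v = 0.05 * rng.randn(n_in, n_out)                # direction parameters, init with std 0.05
g = np.ones(n_out)                               # per-unit norm parameters
b = np.zeros(n_out)                              # biases

def forward(x, v, g, b, mean_only_bn=True):
    w = g * v / np.linalg.norm(v, axis=0)        # weight normalization: w = (g / ||v||) * v
    pre = x @ w
    if mean_only_bn:
        pre = pre - pre.mean(axis=0)             # mean-only batch normalization
    return pre + b

# Data-dependent initialization from one initial minibatch:
x0 = rng.randn(128, n_in)
t = x0 @ (v / np.linalg.norm(v, axis=0))
mu, sigma = t.mean(axis=0), t.std(axis=0)
g = 1.0 / sigma                                  # g = 1 / std
b = -mu / sigma                                  # b = -mean / std

h = forward(x0, v, g, b)                         # roughly standardized activations
```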
On the experimental side, they note that batch normalization is 16% slower than weight normalization, whereas BN yields better progress, especially in the initial iterations. As a final remark, they report 7.31% error on CIFAR-10, which is the state of the art up to my knowledge (not better than my best network :)) in terms of published works. They also experiment with different settings like RNNs, reinforcement learning and others, but please refer to the paper for more.




NegOut: Substitute for MaxOut units

Maxout [1] units are well-known and frequently used tools in Deep Neural Networks. For those who do not know, as a basic explanation: a Maxout unit is a set of internal activation units competing with each other for each instance; the activation of the winner is propagated as the output and the losers are kept silent. In the backpropagation phase, this means we update only the winner unit. That also means, implicitly, that we always prefer to back-propagate the gradient signal through the strongest path. This is an important property of Maxout units, especially for very deep models which are prone to gradient instability.

Although Maxout units have very good properties like the ones I described (please refer to the paper for more details), I am sceptical of their ability to encode the underlying information and pass it to the next layer. Here is a very simple example. Suppose we have two competing functions (filters) in a Maxout unit: one is receptive to edge structures, the other to corners. For one instance, the first filter might be the winner with a value of, say, ~3, which means the Maxout output is also ~3. For another instance, the other function is the winner with approximately the same value ~3. If we assume that each NN layer is a classifier which takes the previous layer's output as its feature vector (not a very wrong assumption, I guess), then we basically give the same value for two different detections on a particular feature dimension (the one corresponding to our Maxout unit). Eventually, we cannot expect the next layer to discern this signal.

Edge detector is the winner

Corner detector is the winner but the result is same

One can argue that we should evaluate the Maxout unit as a whole, and that it is reminiscent of an OR function over multiple filters. This is a valid argument which I cannot refute directly, but the problem I indicated above still floats in the air. Besides, why would we waste our expensive NN parameters if we could come up with a better encoding scheme for Maxout units?

Here is one alternative approach for a better encoding of the competing functions, which we call NegOut. Let's assume we fix an ordering of the two competing functions as 1st and 2nd. If the winner is the 1st function, NegOut outputs the 1st function's value; otherwise it outputs the 2nd function's value, but negated. NegOut makes two assumptions: first, the competing functions are always positive (like ReLU functions); second, there are exactly 2 competing functions.
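As a quick sketch of the forward rule described above (a NumPy toy of my own, not the exact implementation used in the experiments):

```python
import numpy as np

def maxout(f1, f2):
    """Standard Maxout over two competing (non-negative) activations."""
    return np.maximum(f1, f2)

def negout(f1, f2):
    """NegOut: output f1 if it wins, otherwise output -f2.
    Assumes f1, f2 >= 0 (e.g. ReLU outputs) and exactly two competitors."""
    return np.where(f1 >= f2, f1, -f2)

f_edge   = np.array([3.0, 0.5])   # edge filter activations for two instances
f_corner = np.array([0.4, 3.0])   # corner filter activations for the same instances

print(maxout(f_edge, f_corner))   # [3.  3.] -> same output for two different winners
print(negout(f_edge, f_corner))   # [3. -3.] -> the sign encodes which filter won
```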

NegOut activation with different winners.

If we consider the backpropagation signal, the only difference from the Maxout unit is to take the negative of the gradient signal for the 2nd competing unit when it is the winner.

As you can see from the figure, the inherent property here is that the unit outputs different values for different winner detectors, where the value captures both the structural difference and the strength of the winning activation.

I performed some experiments on CIFAR-10 and MNIST, comparing a Maxout network with a NegOut network using exactly the same architectures explained in the Maxout paper [1]. The table below summarizes the results I observed from the initial runs, without any fine-tuning or hyper-parameter optimization yet. More comparisons on larger datasets are still in progress.

Results on CIFAR-10 and MNIST after average of 5 different runs.

NegOut gives better results on CIFAR, although it is slightly worse on MNIST. Again, notice that no tuning has taken place for the NegOut network, whereas the Maxout network is optimized as described in the paper [1]. In addition, the NegOut network uses 2 competing units (as constrained by its nature) for the last FC layer, compared to the Maxout net which uses 5 competing units. My expectation is to see a larger difference as we move to larger models and datasets, since as we scale up, representational power matters more for better results.

Here I tried to give a basic sketch of my recent work, by no means complete. Different observations and experiments are still running. I also need to include LWTA [2] to be fairer and to grasp a wider view of competing units. Please feel free to share your thoughts as well; any contribution is appreciated.

PS: Lately, I have devoted myself to analyzing the internal dynamics of Neural Networks with different architectures, layers and activation functions. The aim is to check under the hood and analyze any intuitively well-functioning ideas applied to Deep Neural Networks. I expect to share more of my findings on this blog.

[1] Goodfellow, I., Warde-Farley, D., Mirza, M., Courville, A., Bengio, Y. Maxout Networks. arXiv preprint arXiv:1302.4389.

[2] Srivastava, R. K., Masci, J., Gomez, F., Schmidhuber, J. Understanding Locally Competitive Networks. http://arxiv.org/abs/1410.1165


Recent Advances in Deep Learning #2

  1. Teacher – Student paradigm:
    • The idea was sparked (to my best knowledge) by Caruana et al. 2006. Basically, the idea is to train an ensemble of networks and use their outputs on a held-out set to distill the knowledge into a smaller network. This idea was recently rehashed by G. Hinton's work, which trains a larger network and then uses this network's output, mixed with the original training data, to train a smaller network. One important trick is to use higher temperature values on the softmax layer of the teacher network, so that the class probabilities are smoothly distributed over the classes (see the sketch after this list). The student network is then able to learn the class relations induced by the teacher network, besides the true classes of the instances, as it is supposed to. Eventually, we are able to compress the knowledge of the teacher net into a smaller network with fewer parameters and faster execution time. Bengio also has a similar work called FitNets, which benefits from the same idea from a wider angle: they do not only use the outputs of the teacher net, but also carry the representational power of the teacher's hidden layers to the student net, via a regression loss that approximates the teacher's hidden representations from the student's.
  2. Bayesian Breezes :
    • We are finally able to see some Bayesian arguments applied to deep models. One of the prevailing works belongs to Max Welling, "Bayesian Dark Knowledge". Again we have the previous idea, but with a very simple trick in mind: we introduce Gaussian noise, scaled by the decaying learning rate, into the gradient signals. This noise induces MCMC dynamics in the network, and it implicitly learns an ensemble of networks. The teacher trained in that fashion is then used to train student nets with an approach similar to the one proposed by G. Hinton. I won't go into the mathematical details here. I guess this is one of the rare Bayesian approaches that is close to being applicable to real-time problems, with a simple trick that is enough to do all the Bayesian magic.
    • The Variational Auto-Encoder is not a new work, but it recently drew my attention. The difference between a VAE and a conventional AE is that, given a probability distribution, the VAE learns the best possible representation parametrized by the defined distribution. Let's say we want to fit a Gaussian distribution to the data. Then the VAE is able to learn the means and standard deviations of multiple Gaussians (corresponding to the VAE latent units) with backpropagation, thanks to a simple reparametrization trick (also sketched after this list). Eventually, you obtain multiple Gaussians with different means and stds on the latent units of the VAE, and you can sample new instances from them. You can learn more from this great tutorial.
  3. Recurrent Models for Visual Recognition:
    • ReNet is a paper from the Montreal group. They explain an alternative approach to convolutional neural networks for learning spatial structure over visual data. Their idea relies on recurrent neural networks which scan the image in a sequence of horizontal and then vertical directions. In the end, the RNN is able to learn the structure over the whole image (or image patch). Although their results are not better than the state of the art, spotting a new alternative to old-fashioned convolution is an exciting effort.
  4. Model Accelerator and Compression Methods:
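For the teacher-student trick mentioned in the first item, here is a minimal sketch of temperature-softened targets and the mixed student loss. The two-term weighting, the temperature value and all names are my own illustrative choices, not taken from any of the cited papers.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T spreads probability mass over classes."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def student_loss(student_logits, true_onehot, teacher_logits, T=4.0, alpha=0.5, eps=1e-8):
    """Mix cross-entropy on true labels with cross-entropy on the teacher's soft targets."""
    soft_targets = softmax(teacher_logits, T=T)          # smoothed teacher predictions
    p_hard = softmax(student_logits, T=1.0)
    p_soft = softmax(student_logits, T=T)
    ce_hard = -np.mean(np.sum(true_onehot * np.log(p_hard + eps), axis=1))
    ce_soft = -np.mean(np.sum(soft_targets * np.log(p_soft + eps), axis=1))
    return (1 - alpha) * ce_hard + alpha * ce_soft

# Toy usage with random logits for a batch of 4 instances and 3 classes.
rng = np.random.RandomState(0)
teacher_logits = rng.randn(4, 3) * 3
student_logits = rng.randn(4, 3)
y = np.eye(3)[[0, 1, 2, 0]]
print(student_loss(student_logits, y, teacher_logits))
```

And for the reparametrization trick mentioned in the VAE item, the core idea fits in a few lines (again just an illustrative sketch with made-up sizes):

```python
import numpy as np

rng = np.random.RandomState(0)
batch, latent_dim = 8, 2
mu = rng.randn(batch, latent_dim)        # encoder output: means of the latent Gaussians
log_var = rng.randn(batch, latent_dim)   # encoder output: log-variances

# Reparametrization: z = mu + sigma * eps keeps the sampling differentiable w.r.t. mu, sigma.
eps = rng.randn(batch, latent_dim)
z = mu + np.exp(0.5 * log_var) * eps     # latent sample that would be fed to the decoder
```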