
Important Nuances for Training Deep Learning Models.


A crucial problem in real DL system design is getting a model that only sees the training data distribution to generalize to the test data distribution. Therefore, it is always important to find a data-splitting scheme that at least measures this divergence correctly.

It is a waste to spend all your time fine-tuning your model against a validation set drawn only from the training data, because when you deploy the model it faces new instances sampled from a dynamically shifting data distribution. If you have a chance to see samples from this dynamic environment, use them to test your model on real instances; this keeps your model coherent and prevents your training flow from being misled.

That being said, in the figure above, the second row depicts the right way to choose your data split, and the third row shows the smoothed version suggested in practice.
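The splitting scheme described above can be sketched in a few lines. This is a minimal sketch with illustrative names: a slice of the training distribution is held out as "train-dev", while dev and test both come from the deployment distribution.

```python
import numpy as np

def split_for_deployment(train_pool, deploy_pool, seed=0):
    """Split so that dev/test reflect the deployment distribution.

    train_pool:  samples from the (large) training distribution
    deploy_pool: samples from the real, shifted deployment distribution
    """
    rng = np.random.default_rng(seed)
    train_pool = rng.permutation(train_pool)
    deploy_pool = rng.permutation(deploy_pool)

    # Hold out a slice of the training distribution ("train-dev")
    # so variance can later be separated from train-test mismatch.
    n_train_dev = len(train_pool) // 10
    train_dev = train_pool[:n_train_dev]
    train = train_pool[n_train_dev:]

    # Dev and test both come from the deployment distribution.
    n_dev = len(deploy_pool) // 2
    dev, test = deploy_pool[:n_dev], deploy_pool[n_dev:]
    return train, train_dev, dev, test
```

The key design choice is that the model is never tuned against data from the training distribution alone; every decision about hyperparameters is checked against deployment-like samples.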


The figure above relates common machine learning problems to the different components of your workflow. It is important to understand what each of these problems means.

Bias is the quality of your model on the training data: if it predicts poorly on training data, it has a "bias" problem. Good performance on training data but not on validation data indicates a "variance" problem. If performance differs between validation data drawn from the training set and validation data drawn from the test set, you have a "train-test mismatch". If performance suffers from distribution shift at test time, you are "overfitting".

Bias calls for a better architecture and longer training. Variance needs more data and regularization. Train-test mismatch needs more training data drawn from a distribution similar to your test data. Overfitting needs regularization, more data, and data-synthesis effort.
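The diagnosis above reduces to comparing error gaps in a fixed order. A minimal sketch, with an illustrative tolerance and function name:

```python
def diagnose(train_err, train_dev_err, dev_err, test_err, human_err, tol=0.02):
    """Map error gaps to the four problems described above.

    The checks follow the order of the workflow: bias first, then
    variance, then train-test mismatch, then overfitting.
    """
    if train_err - human_err > tol:
        return "bias: try a bigger architecture and longer training"
    if train_dev_err - train_err > tol:
        return "variance: add more data and regularization"
    if dev_err - train_dev_err > tol:
        return "train-test mismatch: gather training data closer to the test distribution"
    if test_err - dev_err > tol:
        return "overfitting: regularize, add data, synthesize data"
    return "looks fine"
```

For example, a model with 10% training error against 1% human-level error is flagged for bias before anything else, no matter how the other gaps look.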


The chart above shows a sound way to conduct DL system development. Follow these decisions with empirical evidence and don't skip any of them, or you may be disappointed in the end. (I say this after many disappointments 🙂)


When training and validation errors are both close to human-level performance, the remaining problem is mostly variance, and we need to collect more data similar to the test portion and invest in more data-synthesis work. Training and validation errors far from human-level performance signal a bias problem, which requires larger models and more training time. Keep in mind that human performance is not the limit of what your model is theoretically capable of.

Disclaimer: The figures are taken from a post which summarizes Andrew Ng's talk.


From NIPS 2016:


Update all Python modules with a simple command-line tool

If you use many modules together, it is hard to keep track of the latest versions and the required updates. Therefore, running a little command like this regularly can be useful.


$ pip install pip-tools
$ pip-review --interactive


After a short while, you will see all your packages being updated.


Run MATLAB scripts from the terminal.

Sometimes you need to run your MATLAB scripts from the terminal, for example when you are working over a remote connection to your workstation. You may also need to run a couple of MATLAB instances from the same terminal by appending the & sign to the command. Here is a basic command to run your *.m script from the terminal.

On the terminal, go to the directory where the MATLAB binary is located and type the following command.

./matlab -nodesktop -nosplash -r "run path/to/your/script.m"


Everything you write between the quote signs after the -r flag is interpreted as normal MATLAB code, so you can append any other execution sequence you want.
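The same idea can be driven from Python when you want to launch several headless instances programmatically. A minimal sketch, assuming the matlab binary is on your PATH; the function names are mine, and the appended exit makes each instance quit once its script finishes:

```python
import subprocess

def matlab_batch_cmd(script_path, matlab_bin="matlab"):
    """Build a headless MATLAB invocation; run() plus exit ensures
    the process terminates when the script is done."""
    return [matlab_bin, "-nodesktop", "-nosplash",
            "-r", f"run('{script_path}'); exit"]

def launch(script_path):
    # Popen returns immediately, like appending & in the shell,
    # so several instances can run side by side.
    return subprocess.Popen(matlab_batch_cmd(script_path))
```

Calling launch() for each script gives you the same effect as backgrounding multiple terminal commands with &.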