Posts tagged with: machine learning

How does Feature Extraction work on Images?

Here I share an enhanced version of one of my Quora answers to a similar question ...

There is no single answer to this question, since there is a diverse set of methods for extracting features from an image.

First, what is a feature? "A distinctive attribute or aspect of something." So the point is to have a set of values for a particular instance that distinguishes that instance from its counterparts. In the field of images, features might be raw pixels for simple problems like digit recognition on the well-known MNIST dataset. However, in natural images, simple image pixels are not descriptive enough. Instead, there are two main streams to follow. One is to use hand-engineered feature extraction methods (e.g. SIFT, VLAD, HOG, GIST, LBP), and the other stream is to learn features that are discriminative in the given context (e.g. Sparse Coding, Auto-Encoders, Restricted Boltzmann Machines, PCA, ICA, K-means). Note that the second alternative ... Continue Reading
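To make the first stream concrete, here is a minimal sketch of computing one hand-engineered descriptor named above (HOG) with scikit-image; the sample image and parameter values are my own illustrative choices, not from the original post.

```python
# A minimal HOG sketch with scikit-image (assumed installed).
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import hog

image = rgb2gray(data.astronaut())   # a sample natural image, grayscale
features = hog(
    image,
    orientations=9,                  # gradient-direction histogram bins
    pixels_per_cell=(8, 8),          # local cell size
    cells_per_block=(2, 2),          # cells grouped for block normalization
)
print(features.shape)                # one fixed-length descriptor vector
```

The resulting vector can then be fed to any standard classifier, which is the usual role of such hand-engineered features.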


A Large Set of Machine Learning Resources, from Beginners to Mavens

Note: I regularly update this list.

Machine Learning 101:

I. Introduction to Machine Learning

II. Linear Regression

Continue Reading


Best way to validate your machine learning model.

Selecting your final machine learning model is a vital part of your project. Using the right metric and selection paradigm can give very good results even if you use a very simple, or even the wrong, learning algorithm. Here, I explain a very parsimonious and plain way.

The metric you choose depends on your problem and your expectations. Some common alternatives are the F1 score (a combination of precision and recall), accuracy (the ratio of correctly classified instances to all instances), the ROC curve, or the error rate (1 - accuracy).
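For concreteness, here is a minimal sketch of computing those metrics with scikit-learn (my choice of library, not named in the post; the labels are a toy example).

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)  # correctly classified / all instances
f1 = f1_score(y_true, y_pred)         # harmonic mean of precision and recall
err = 1 - acc                         # error rate = 1 - accuracy
print(acc, f1, err)                   # 0.6 0.666... 0.4
```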

As an example, I use the error rate (in the figure below). First, divide the data into three parts: a training set, a held-out set, and a test set. We will use the held-out set as objective guidance for the hyper-parameters of your algorithm. You might also prefer K-fold cross-validation, but my choice is to keep a held-out set if I have a sufficient number of instances.
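A minimal sketch of that three-way split, again using scikit-learn (an assumption on my part; the ratios are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)  # toy data

# Carve out the test set first (20% here) ...
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
# ... then split the remainder into train and held-out sets
# (0.25 of the remaining 80% gives a 60/20/20 split overall).
X_train, X_held, y_train, y_held = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_held), len(X_test))  # 600 200 200
```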

The following procedure can be used for parameter selection and for selecting the final model. The idea is to plot the performance of the model as two curves: the held-out set score and the training set score. The curves should meet at a certain point where both are consistent in some sense (training and held-out scores are at reasonable levels), and after a slight further step they start to stray away from each other (the training score keeps improving while the held-out score starts to drop). This straying effect indicates underfitting early on or, after numerous learning iterations, likely overfitting. Choose the best trade-off point on the plot as the correct model.


(Figure: training and held-out error-rate curves.) The example uses error rate, so do not be confused by the decreasing values; lower is better in this sense. The marked point is the saturation point where the model starts to over-fit.
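A sketch of this curve-plotting procedure (my own construction, not the post's code) using scikit-learn's validation_curve; note it uses cross-validation folds in place of a single held-out set, and the classifier, data, and parameter range are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)
depths = np.arange(1, 21)  # the hyper-parameter we tune

train_scores, val_scores = validation_curve(
    DecisionTreeClassifier(random_state=42), X, y,
    param_name="max_depth", param_range=depths, cv=5)

# Convert accuracy to error rate (1 - accuracy), so lower is better.
train_err = 1 - train_scores.mean(axis=1)
val_err = 1 - val_scores.mean(axis=1)

plt.plot(depths, train_err, label="train error")
plt.plot(depths, val_err, label="held-out error")
plt.xlabel("max_depth")
plt.ylabel("error rate")
plt.legend()
plt.show()
# Pick the depth where held-out error bottoms out, just before the
# two curves stray away from each other.
```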


Another caveat: do not use too many folds for cross-validation. Some papers (whose names I cannot recall right now :( ) argue that the asymptotic behaviour of cross-validation is likely to favour over-fitting, so prefer a leave-multiple-out procedure instead of leave-one-out if you propose to use a large number of folds.
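To illustrate the difference (ShuffleSplit is my stand-in here for a leave-multiple-out procedure; the post does not prescribe a specific tool):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, ShuffleSplit

X = np.arange(100).reshape(-1, 1)  # 100 toy instances

# Leave-one-out: as many folds as instances, each testing one instance.
loo = LeaveOneOut()
print(loo.get_n_splits(X))  # 100

# Leave-multiple-out style: 10 random splits, each holding out 20 instances.
sms = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
print(sms.get_n_splits(X))  # 10
```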


Randomness and Random Forests

One of the most empowering use cases of randomness in machine learning is Random Forests. If you are familiar with Decision Trees, which are used in a vast range of data analysis and machine learning problems, Random Forests are simple to grasp.

For beginners: a decision tree is a simple, deterministic data structure for modelling decision rules for a specific classification problem (theoretically, the shortest possible message length, in information-theory jargon). At each node, one feature is selected to make the instance-separating decision. That is, we select the feature that separates the instances into classes with the best possible "purity". This "purity" is measured by entropy, the Gini index, or information gain. Descending toward the leaves, the tree branches so as to disperse instances of different classes onto different root-to-leaf paths. Therefore, at the leaves of the tree we are able to assign the items to their classes. Continue Reading
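As a side note, here is a minimal sketch (my own illustration, not from the post) of the two "purity" measures named above, computed for a vector of class labels.

```python
import numpy as np

def entropy(labels):
    # H = -sum_c p_c * log2(p_c); lower means purer.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    # G = 1 - sum_c p_c^2; lower means purer.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

mixed = np.array([0, 0, 1, 1])  # maximally impure two-class node
pure = np.array([1, 1, 1, 1])   # pure node
print(entropy(mixed), gini(mixed))  # 1.0 0.5
print(entropy(pure), gini(pure))    # 0.0 (printed as -0.0) 0.0
```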


Kohonen Learning Procedure K-means vs. Lloyd's K-means

K-means is maybe the most common data quantization method, used widely across many different problem domains. Even though it relies on a very simple idea, it yields satisfying results in a computationally efficient manner.

Underneath the K-means formulation, the objective is to minimize the distance between each data point and its closest centroid (cluster center). Here we can write the objective as

$$J = \sum_{i=1}^{N} \lVert x_i - c_{k(i)} \rVert^2,$$

where $c_{k(i)}$ is the closest centroid to instance $x_i$.
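As an illustration of this objective (my own sketch in NumPy, not code from the post), here is Lloyd's iteration: assign every point to its nearest centroid, then move each centroid to the mean of its assigned points.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # toy data points x_i
C = X[rng.choice(len(X), 3, replace=False)]   # 3 initial centroids c_j

for step in range(10):
    # Distance from every x_i to every centroid, shape (200, 3).
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    k = d.argmin(axis=1)                      # k(i): closest centroid index
    # The objective J = sum_i ||x_i - c_{k(i)}||^2.
    J = (d[np.arange(len(X)), k] ** 2).sum()
    # Lloyd's update: each centroid moves to the mean of its cluster
    # (kept in place if the cluster happens to be empty).
    C = np.array([X[k == j].mean(axis=0) if np.any(k == j) else C[j]
                  for j in range(3)])
    print(f"step {step}: J = {J:.2f}")
```

Kohonen's procedure, by contrast, updates a single winning centroid toward each instance as it arrives, which is the online alternative the title refers to.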

Continue Reading


Some Useful Machine Learning Libraries.

With the advent of many different and intricate machine learning algorithms, it is very hard to write your own code for every problem. Therefore, choosing a library is an essential step before you start a project. However, there are many different libraries, with different quirks and rigs, in different languages (some even spanning multiple languages), so the choice is not as straightforward as it seems.

Before you start, I strongly recommend that you experiment with the library of your interest so as not to say "Ohh Buda!" at the end. As a simple guide, I will point out some possible libraries and flag some of them as my choices, with the reasoning behind them.

Continue Reading


Deep Learning Resources

Here is a collection of resources about deep learning and neural networks. I have not read all of the papers or watched all of the videos, so I cannot vouch for them.

Websites:

  • Deep Learning:

    Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence.

This website is intended to host a variety of resources and pointers to information about Deep Learning. In these pages you will find ...

For the latest additions, including papers and software announcements, be sure to visit the Blog section of the website. Contact us if you have any comments or suggestions!

  • Geoffrey E. Hinton: Papers, including tutorials, and videos.
  • Jürgen Schmidhuber: Papers and links.