Month: December 2018

My Tech World

DAY 32-100 DAYS MLCODE: TensorFlow

TensorFlow is a very popular machine learning library backed by Google. TensorFlow computations are expressed as stateful dataflow graphs: data flows through a series of nodes, where each node is a mathematical operation and each connection between nodes is a multidimensional array, also called a tensor. A TensorFlow program typically…
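As a toy sketch of the dataflow-graph idea described above (plain Python, not the TensorFlow API itself): the graph is built first, and nothing is computed until it is run.

```python
# Toy illustration of the dataflow-graph concept behind TensorFlow 1.x:
# nodes are operations, edges carry (multidimensional) values.
# This is a plain-Python sketch of the idea, not TensorFlow code.

class Node:
    """A graph node: an operation applied to the outputs of its input nodes."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Evaluate lazily, like Session.run() in TensorFlow 1.x.
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

# Build the graph first (nothing is computed yet)...
a = constant([1.0, 2.0, 3.0])
b = constant([4.0, 5.0, 6.0])
add = Node(lambda x, y: [u + v for u, v in zip(x, y)], a, b)

# ...then execute it.
print(add.run())  # [5.0, 7.0, 9.0]
```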
December 12, 2018

DAY 31-100 DAYS MLCODE: t-SNE

In the previous blog (day 29), we discussed t-SNE. In this blog, we’ll work on the below problem, which is from the book Hands-On Machine Learning with Scikit-Learn. First, what is t-SNE? t-SNE: t-Distributed Stochastic Neighbor Embedding (t-SNE) is a non-linear dimensionality reduction technique. It reduces the dimensionality while trying to keep similar…
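A minimal t-SNE sketch with scikit-learn (assumed available); the book exercise uses MNIST, but the small digits dataset stands in here so it runs quickly.

```python
# Reduce 64-dimensional digit images to 2-D with t-SNE,
# preserving local neighborhoods between similar instances.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
X, y = X[:200], y[:200]  # a small subset, to keep the example fast

tsne = TSNE(n_components=2, perplexity=30, random_state=42)
X_2d = tsne.fit_transform(X)
print(X_2d.shape)  # (200, 2)
```

The 2-D result can then be scatter-plotted and colored by digit class to see the clusters.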
December 10, 2018

DAY 30-100 DAYS MLCODE: PCA example

In the previous blogs, we discussed PCA; in this blog, we are going to work on the below problem as a PCA example. Problem Statement: Load the MNIST dataset and split it into a training set and a test set (take the first 60,000 instances for training and the remaining 10,000 for testing). Train a Random Forest classifier…
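A small-scale sketch of the exercise: the problem statement uses MNIST with a 60,000/10,000 split, but the tiny digits dataset is substituted here so the example runs in seconds.

```python
# Compare a Random Forest on raw features vs. after PCA (95% variance kept).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Baseline Random Forest on the raw 64 features.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
base_acc = accuracy_score(y_test, rf.predict(X_test))

# Same classifier after PCA keeps 95% of the variance.
pca = PCA(n_components=0.95)           # float = target explained variance
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)
rf_pca = RandomForestClassifier(n_estimators=100, random_state=42)
rf_pca.fit(X_train_pca, y_train)
pca_acc = accuracy_score(y_test, rf_pca.predict(X_test_pca))

print(base_acc, pca_acc, pca.n_components_)
```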
December 9, 2018

DAY 29-100 DAYS MLCODE: Kernel PCA & LLE

In the previous blog, we discussed Incremental PCA and Randomized PCA; in this blog we will discuss Kernel PCA, LLE, and other dimensionality reduction techniques. Kernel PCA: In the past we have seen how the kernel trick (a mathematical technique) maps instances into a high-dimensional space, allowing us to perform the…
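Minimal sketches of both techniques with scikit-learn (assumed available), applied to the classic Swiss-roll dataset; the hyperparameter values here are illustrative, not tuned.

```python
# Kernel PCA and LLE, each unrolling a 3-D Swiss roll into 2-D.
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import KernelPCA
from sklearn.manifold import LocallyLinearEmbedding

X, t = make_swiss_roll(n_samples=500, random_state=42)

# Kernel PCA: the kernel trick implicitly maps instances into a
# high-dimensional space, where linear PCA is then performed.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.04)
X_kpca = kpca.fit_transform(X)

# LLE: preserves the local linear relationships between each
# instance and its nearest neighbors.
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10, random_state=42)
X_lle = lle.fit_transform(X)

print(X_kpca.shape, X_lle.shape)  # (500, 2) (500, 2)
```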
December 8, 2018

DAY 28-100 DAYS MLCODE: PCA – Part 2

In the previous blog, we used a Scikit-Learn class to apply PCA and reduce the dimensionality of the training data. In this blog, let’s discuss PCA in more detail. A simple piece of code to implement PCA looks like the one below. Principal Components: You can access the principal components of the above code using the components_ attribute. Output: array([[-0.93636116,…
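A sketch of the kind of code the excerpt describes, on synthetic data (the original post's dataset is not shown here): fit PCA and read the components off `components_`.

```python
# Fit PCA on roughly planar 3-D data and inspect the principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(42)
# Synthetic data: large variance on two axes, tiny on the third.
X = rng.randn(100, 3) * np.array([2.0, 1.0, 0.1])

pca = PCA(n_components=2)
X2d = pca.fit_transform(X)

# Each row of components_ is one principal component (a unit vector).
print(pca.components_)
print(pca.explained_variance_ratio_)
```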
December 7, 2018

DAY 27-100 DAYS MLCODE: PCA

In the previous blog, we discussed Gradient Boosting; in this blog, we will discuss PCA and the curse of dimensionality. We live in a world where we see things in three-dimensional space, so they are easy to visualize, but in machine learning we may have thousands of features, i.e. thousands of dimensions.…
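A small numeric illustration of the curse of dimensionality: as the number of dimensions grows, random points become almost equally far apart, which is one reason distance-based intuition breaks down in high dimensions.

```python
# Measure how the relative spread of pairwise distances shrinks
# as the number of dimensions grows.
import numpy as np

rng = np.random.RandomState(42)
for d in (3, 100, 10_000):
    X = rng.rand(100, d)                      # 100 random points in the unit cube
    dists = np.linalg.norm(X[0] - X[1:], axis=1)
    spread = (dists.max() - dists.min()) / dists.mean()
    print(d, round(spread, 3))                # relative spread shrinks with d
```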
December 6, 2018

DAY 26-100 DAYS MLCODE: Gradient Boosting

In the previous blog, we discussed AdaBoost; in this blog, we’ll discuss Gradient Boosting. Like AdaBoost, Gradient Boosting also tries to correct the errors of its predecessor predictors. But instead of adjusting the training-instance weights at each iteration, Gradient Boosting fits each new predictor to the residual errors made by the previous predictor.…
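The residual-fitting idea can be sketched by hand with scikit-learn regression trees (assumed available); this is the concept only, not a full gradient boosting implementation.

```python
# Hand-rolled gradient boosting sketch: each new tree is fit to the
# residual errors of the ensemble built so far.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(42)
X = rng.rand(200, 1)
y = 3 * X[:, 0] ** 2 + 0.05 * rng.randn(200)   # noisy quadratic target

trees, residual = [], y.copy()
for _ in range(3):
    tree = DecisionTreeRegressor(max_depth=2, random_state=42)
    tree.fit(X, residual)            # fit the *residuals*, not y itself
    residual -= tree.predict(X)      # update what is left to explain
    trees.append(tree)

# The ensemble prediction is the sum of all the trees' predictions.
y_pred = sum(tree.predict(X) for tree in trees)
print(np.mean((y - y_pred) ** 2))    # training MSE after three trees
```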
December 5, 2018

DAY 25-100 DAYS MLCODE: Boosting

Boosting enables a set of weak learners to form a strong learner. Boosting is an ensemble method that reduces bias and variance in supervised learning. You can find the previous blog here. In boosting, we train the models sequentially, each one trying to reduce the errors of its predecessor. There are several methods of…
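A minimal boosting sketch using scikit-learn's AdaBoost (assumed available): by default its weak learner is a depth-1 decision tree (a "stump"), and many such stumps trained sequentially combine into a strong learner.

```python
# AdaBoost: sequentially trained weak learners (decision stumps by default)
# combined into a strong classifier.
from sklearn.datasets import make_moons
from sklearn.ensemble import AdaBoostClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=42)

ada = AdaBoostClassifier(n_estimators=100, random_state=42)
ada.fit(X, y)
print(ada.score(X, y))  # training accuracy of the boosted ensemble
```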
December 4, 2018

DAY 24-100 DAYS MLCODE: Features Importance

Feature Importance: In this blog, we are going to discuss feature importance. In the previous blog, we saw the bagging and pasting techniques; these are ensemble techniques, and one of the most famous ensemble techniques is the Random Forest. A Random Forest not only helps us build a better model using ensemble learning techniques on…
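A short sketch of reading feature importances from a Random Forest with scikit-learn (assumed available), using the iris dataset for illustration.

```python
# Train a Random Forest and inspect its feature_importances_ attribute.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X, y)

names = ["sepal length", "sepal width", "petal length", "petal width"]
for name, score in zip(names, rf.feature_importances_):
    print(f"{name}: {score:.3f}")   # the importances sum to 1.0
```

On iris, the petal measurements come out far more important than the sepal ones.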
December 4, 2018

DAY 23-100 DAYS MLCODE: Bagging and Pasting

In the previous blog, we discussed ensemble learning, and in this blog, we discuss Bagging and Pasting. In ensemble learning, different classifiers are trained and the best prediction is selected as the final prediction. However, there is another technique where the same algorithm is trained on different random…
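A minimal bagging sketch with scikit-learn (assumed available): the same algorithm (a decision tree) is trained on different random subsets of the training set, sampled with replacement; sampling without replacement would be pasting.

```python
# Bagging: many decision trees, each trained on a random bootstrap sample.
from sklearn.datasets import make_moons
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=42)

bag = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=100,
    max_samples=100,
    bootstrap=True,      # with replacement = bagging; False = pasting
    random_state=42,
)
bag.fit(X, y)
print(bag.score(X, y))   # training accuracy of the bagged ensemble
```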
December 2, 2018