Month: December 2018

My Tech World

DAY 42-100 DAYS MLCODE: Max-Norm Regularization

In the previous blog we discussed Dropout; in this blog, we’ll discuss Max-Norm Regularization. This is another popular technique used when training a deep neural network. As per the CS231n course, max-norm is another form of regularization that enforces an absolute upper bound on the magnitude of the weight vector for every neuron…
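A minimal sketch of how this clipping might look in TensorFlow 1.x (the layer name, sizes, and threshold are assumptions, not the post's code): after every training step, each neuron's incoming weight vector is clipped so its norm never exceeds the threshold.

```python
import tensorflow as tf

threshold = 1.0
X = tf.placeholder(tf.float32, shape=(None, 784), name="X")
hidden = tf.layers.dense(X, 100, activation=tf.nn.relu, name="hidden")

# Grab the layer's kernel and build an op that clips each neuron's incoming
# weight vector (a column of the kernel) to the threshold; axes=0 takes the
# norm over the input dimension, i.e. one norm per neuron.
weights = tf.get_default_graph().get_tensor_by_name("hidden/kernel:0")
clipped_weights = tf.clip_by_norm(weights, clip_norm=threshold, axes=0)
clip_weights = tf.assign(weights, clipped_weights)

# In the training loop, run the optimizer step first, then sess.run(clip_weights).
```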
Read more


December 22, 2018

DAY 41-100 DAYS MLCODE: Dropout

Dropout is the most famous regularization technique for deep neural networks. Even state-of-the-art networks get a 1-2% boost using dropout. It was first proposed in the paper by G. E. Hinton and described in more detail in this paper. As per Wikipedia, dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very…
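As a minimal sketch (layer sizes and the drop rate are assumptions, not the post's code), dropout in TensorFlow 1.x can be inserted between layers and switched on only during training:

```python
import tensorflow as tf

# A `training` flag lets the same graph run with dropout on (training) or off (testing).
training = tf.placeholder_with_default(False, shape=(), name="training")
X = tf.placeholder(tf.float32, shape=(None, 784), name="X")

drop_rate = 0.5  # fraction of units randomly dropped at each training step
X_drop = tf.layers.dropout(X, rate=drop_rate, training=training)
hidden = tf.layers.dense(X_drop, 300, activation=tf.nn.relu)
hidden_drop = tf.layers.dropout(hidden, rate=drop_rate, training=training)
logits = tf.layers.dense(hidden_drop, 10)

# Feed {training: True} while training; the default False disables dropout at test time.
```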
Read more


December 22, 2018

DAY 40-100 DAYS MLCODE: Regularization using Tensorflow

As per Wikipedia, regularization is a technique that applies to objective functions in ill-posed optimization problems. With hundreds of thousands of parameters and huge amounts of data, a model generally has a tendency to overfit. Regularization techniques are a way to avoid this overfitting. The most famous regularization techniques are early stopping, L1 and L2 regularization, dropout, and max-norm regularization…
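For instance, a minimal sketch of L2 regularization in TensorFlow 1.x (layer sizes and the scale are assumptions, not the post's code): each layer registers a penalty term, and those terms are added to the base loss.

```python
import tensorflow as tf

scale = 0.001  # regularization strength
X = tf.placeholder(tf.float32, shape=(None, 784), name="X")
y = tf.placeholder(tf.int32, shape=(None,), name="y")

l2_reg = tf.contrib.layers.l2_regularizer(scale)
hidden = tf.layers.dense(X, 300, activation=tf.nn.relu, kernel_regularizer=l2_reg)
logits = tf.layers.dense(hidden, 10, kernel_regularizer=l2_reg)

base_loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
# The regularizer stored its penalty terms in this collection; add them to the loss.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([base_loss] + reg_losses, name="loss")
```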
Read more


December 20, 2018

DAY 39-100 DAYS MLCODE: Adam Optimization

Training a deep neural network with lots of training data is always difficult and slow; a better and faster optimization technique can help expedite the process. Adam optimization is one of the optimization techniques that converges faster than classical Gradient Descent. Generally, to speed up the process we start from a good initialization strategy…
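As a small self-contained sketch (toy data and hyperparameters are assumptions, not the post's example), switching from plain Gradient Descent to Adam in TensorFlow 1.x is just a different optimizer object:

```python
import numpy as np
import tensorflow as tf

# Toy linear-regression data: y = 3x + 4
X_data = np.random.rand(100, 1).astype(np.float32)
y_data = 3.0 * X_data + 4.0

w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
y_pred = tf.matmul(X_data, w) + b
loss = tf.reduce_mean(tf.square(y_pred - y_data))

# tf.train.GradientDescentOptimizer(0.01) would also work, but usually converges more slowly.
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
training_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        sess.run(training_op)
    print(sess.run([w, b]))  # should approach [[3.0]], [4.0]
```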
Read more


December 19, 2018

DAY 38-100 DAYS MLCODE: Transfer Learning

Transfer learning is a technique in machine learning where a pre-trained model is shared or reused to solve a similar new task. For example, suppose you are trying to train a dog-breed classifier. We can take millions of pictures of dogs of different breeds and train our model to detect the dog’s breed. This…
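A minimal sketch of the idea (the base model, input size, and number of breeds are assumptions, not the post's code): reuse a network pre-trained on ImageNet, freeze its layers, and train only a new classification head for the breed task.

```python
import tensorflow as tf

# Pre-trained convolutional base; include_top=False drops the original ImageNet classifier.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained layers

n_breeds = 120  # hypothetical number of breed classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(n_breeds, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would then train only the new head on the dog-breed images.
```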
Read more


December 18, 2018

DAY 37-100 DAYS MLCODE: Batch Normalization

In this blog, we’ll implement Batch Normalization using TensorFlow. In the previous blog, we trained the model with two layers, but in a real scenario we may have many more layers when dealing with pictures or a 100-class classification problem. The model may have thousands of parameters and it’s not easy to train the…
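A minimal sketch of batch normalization in TensorFlow 1.x (layer sizes and momentum are assumptions, not the post's code): a `training` flag switches between batch statistics and moving averages, and the update ops that maintain those averages must run alongside the training step.

```python
import tensorflow as tf

training = tf.placeholder_with_default(False, shape=(), name="training")
X = tf.placeholder(tf.float32, shape=(None, 784), name="X")

# Dense layer -> batch norm -> activation
hidden = tf.layers.dense(X, 300)
bn = tf.layers.batch_normalization(hidden, training=training, momentum=0.9)
hidden_act = tf.nn.relu(bn)
logits = tf.layers.dense(hidden_act, 10)

# These ops update the moving mean/variance used at test time; run them together with
# the training op: sess.run([training_op, extra_update_ops], feed_dict={..., training: True})
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
```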
Read more


December 17, 2018

DAY 36-100 DAYS MLCODE: MLP with Tensorflow

In the previous blog, we completed a simple example using TensorFlow; in this blog, let’s create an example of an MLP with TensorFlow. Perceptron: a Perceptron is a simple artificial neural network composed of a single linear threshold unit (LTU) which is connected to all the inputs. The Perceptron was able to classify the training instances to more…
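A minimal sketch of such an MLP in TensorFlow 1.x (layer sizes are assumptions, not the post's code): two fully connected hidden layers followed by a 10-class output layer.

```python
import tensorflow as tf

n_inputs, n_hidden1, n_hidden2, n_outputs = 784, 300, 100, 10

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None,), name="y")

# Two hidden layers with ReLU activations, then a linear output layer (logits).
hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2")
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
training_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```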
Read more


December 16, 2018

DAY 35-100 DAYS MLCODE: Mini-Batch Gradient descent part 3

In the previous blog, we tried to implement Mini-Batch Gradient Descent. We’ll continue the same example, but in this blog we’ll use the TensorBoardColab library to work with TensorBoard. Let’s continue to use the same dataset; the only thing we are going to change is to increase the degree of the polynomial. Reset the…
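A minimal sketch of the logging side in TensorFlow 1.x (names, the log directory, and the dummy loss values are assumptions, not the post's code); in Colab, TensorBoardColab's role is simply to expose such a log directory through a public TensorBoard URL.

```python
import tensorflow as tf

# Scalar summary for the training loss, written to a log directory TensorBoard can read.
loss_value = tf.placeholder(tf.float32, shape=(), name="loss_value")
loss_summary = tf.summary.scalar("loss", loss_value)
writer = tf.summary.FileWriter("./Graph", tf.get_default_graph())

with tf.Session() as sess:
    for step in range(100):
        current_loss = 1.0 / (step + 1)  # stand-in for the real training loss
        summary = sess.run(loss_summary, feed_dict={loss_value: current_loss})
        writer.add_summary(summary, step)
writer.close()
```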
Read more


December 14, 2018

DAY 34-100 DAYS MLCODE: Implement Mini-Batch Gradient descent part 2

In the previous blog we created a simple Mini-Batch Gradient Descent, and in this blog we are going to make changes to improve the performance and use TensorBoard to view the graph. To improve the performance of the model, instead of using a linear decision boundary, let’s increase the degree of the polynomial. Let’s add…
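As a minimal sketch of that feature step (toy data and degree are assumptions, not the post's code), the inputs can be expanded with polynomial columns before the same mini-batch training code is reused:

```python
import numpy as np

np.random.seed(42)
X = np.random.rand(200, 1)
y = 0.5 * X**2 + X + 2 + 0.1 * np.random.randn(200, 1)

degree = 2
X_poly = np.hstack([X**d for d in range(1, degree + 1)])  # columns: x, x^2
X_poly = np.hstack([np.ones((len(X), 1)), X_poly])        # prepend the bias column

# X_poly can now be fed, batch by batch, to the same gradient-descent graph as before.
```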
Read more


December 14, 2018

DAY 33-100 DAYS MLCODE: Implement Mini-Batch Gradient using Tensorflow

Mini-Batch Gradient Descent is a form of gradient descent where the algorithm splits the training dataset into small batches and uses these batches to calculate the loss and update the coefficients based on the outcome. Mini-Batch Gradient Descent lies between the very robust stochastic gradient descent and the very efficient batch gradient descent. In…
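A minimal, self-contained sketch in TensorFlow 1.x (toy linear-regression data, batch size, and learning rate are assumptions, not the post's code): the data is reshuffled each epoch, split into small batches, and each batch drives one parameter update.

```python
import numpy as np
import tensorflow as tf

np.random.seed(42)
X_data = np.random.rand(1000, 1).astype(np.float32)
y_data = 3.0 * X_data + 4.0 + 0.1 * np.random.randn(1000, 1).astype(np.float32)

X = tf.placeholder(tf.float32, shape=(None, 1))
y = tf.placeholder(tf.float32, shape=(None, 1))
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(X, w) + b - y))
training_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

batch_size, n_epochs = 50, 20
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(n_epochs):
        indices = np.random.permutation(len(X_data))        # reshuffle every epoch
        for start in range(0, len(X_data), batch_size):
            batch = indices[start:start + batch_size]
            sess.run(training_op,
                     feed_dict={X: X_data[batch], y: y_data[batch]})
    print(sess.run([w, b]))  # should approach [[3.0]], [4.0]
```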
Read more


December 12, 2018