Neural Network: Part 2

Neural Network: Logistic Regression

In this post, I'll deal with logistic regression and its implementation as a single-neuron network. I hope you have read my previous post on matrices and their operations using the NumPy package. You can click here to go to my previous post.

Logistic regression basically computes the probability that the output is one. For example, if we train our logistic model to recognize the image of a dog, then for any new image the model tries to compute the probability that the image is of a dog. The higher the probability, the more likely it is that the given image is of a dog.

A neuron is the primary and fundamental unit of computation in any neural network. A neuron receives a vector containing the input features. Each feature is multiplied by its corresponding weight, the weighted values are summed, and a bias is added, which produces a linear output. This linear output then has to pass through the activation function.
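
In equation form, for an input vector x = (x1, ..., xn) with weights w = (w1, ..., wn) and a bias b, the linear output is

   z = w1*x1 + w2*x2 + ... + wn*xn + b = w · x + b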

Activation Function

An activation function is a non-linear transformation that maps the linear output to a probability. Its output lies between 0 and 1 or between -1 and +1. This function helps in mapping values to a certain range, which for us is between 0 and 1. For this post, I will use the sigmoid function as the activation function. The sigmoid function can be represented by the following equation:
   sigmoid(z) = 1 / (1 + e^(-z))

The sigmoid function produces an S-shaped curve spanning the plane from left to right. As per Wikipedia, a sigmoid function is a bounded, differentiable, real function that has a non-negative derivative at each point.
Sigmoid graph (Source: Wikipedia)
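
To make this concrete, here is a minimal NumPy sketch of the sigmoid function (the test values are just for illustration):

   import numpy as np

   def sigmoid(z):
       # maps any real number into the range (0, 1)
       return 1 / (1 + np.exp(-z))

   print(sigmoid(0))     # 0.5
   print(sigmoid(10))    # close to 1
   print(sigmoid(-10))   # close to 0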

Weights and Biases

Weights and biases are responsible for adjusting the input function, which consists of the features, in order to bring the final output close to the actual output. The weights are multiplied with the features, then the bias component is added, and the result is passed to the activation function. I recently came across an interesting point about the bias on StackOverflow. As the name suggests, bias means to favour something irrespective of whether that thing is right or wrong. Similarly, the bias favours the actual output by adding itself to the weighted sum:

   z = w · x + b


The output z will now act as an input to our sigmoid function. One thing readers must always remember is that the weight is a column vector whose size is equal to the number of features in the input, and the bias is a scalar number. The sigmoid function will now map the value of z between 0 and 1. The output of the sigmoid function indicates the probability of how close the input is to the actual output.
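
Putting the pieces together, here is a minimal sketch of a single-neuron forward pass for one input sample (the feature, weight, and bias values below are made up for illustration):

   import numpy as np

   def sigmoid(z):
       return 1 / (1 + np.exp(-z))

   x = np.array([0.5, 1.2, -0.3])   # input features (n = 3)
   w = np.array([0.4, -0.6, 0.9])   # one weight per feature
   b = 0.1                          # scalar bias

   z = np.dot(w, x) + b             # linear output: weighted sum plus bias
   a = sigmoid(z)                   # probability between 0 and 1
   print(a)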

Clearly, the above example was for a single input sample. What if we have multiple input samples? Well, logistic regression also works on multiple training samples. Suppose there are m training samples and each sample has n features; then the input can be stated as an m × n matrix. Let us call this matrix X. Now we can easily perform a dot product between X and the weight vector to get a linear output vector. This vector is then passed to the activation function, which results in an output vector A containing the probability for each training sample.
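
A vectorized version of the same computation might look like this (the shapes and random values are illustrative):

   import numpy as np

   def sigmoid(z):
       return 1 / (1 + np.exp(-z))

   m, n = 4, 3
   X = np.random.rand(m, n)     # m training samples, n features each
   w = np.random.rand(n, 1)     # weights as a column vector
   b = 0.1                      # scalar bias, broadcast to every sample

   Z = np.dot(X, w) + b         # linear outputs, shape (m, 1)
   A = sigmoid(Z)               # one probability per training sample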


In the next post, I'll deal with the loss function, the cost function, and the implementation of an SNN (Single Neuron Network).

Neural Network: Part 1


Basics of Neurons


Hey everyone, long time no see. I have been working on some stuff, so I was busy and away from my blog. Well, as for my interest in AI, I have been working on neural networks and their implementation, which I'll discuss in this post.

In this post, you will learn:
  • The basic concepts used in building a neural network
  • Implementing logistic regression
  • Single Neuron Network
  • Forward propagation and Backward propagation
  • Building a small neural network using TensorFlow
  • Building a deep neural network using TensorFlow
As we know, we humans learn from our experiences and through training. Our brain runs at a very high speed and processes any activity quickly. Well, it actually depends on the type of activity the person is working on. But as we all know, humans have a high grasping power and can easily learn, understand, classify, remember and do various other things that machines take a hell of a lot of time to compute. All these computations are processed by our mighty brain. So to understand the technical neural network, we must understand the biological neural network.

Source: Wikipedia


The brain consists of a huge number of neurons. These neurons are the fundamental units of the brain. They receive electrical signals as input from the external world (Hello World!!), which are processed and sent on to other neurons. A basic neuron consists of an axon, dendrites, action potentials, synapses, etc.


Axons are thin structures and are called the transmitting part of the neuron. Dendrites are the receiving part of the neuron, collecting input from synapses and summing the total inputs. The action potential helps the neurons communicate with each other. So basically, neurons receive electrical impulses via the dendrites, which inform the neuron about the outcome of an incident. If the outcome is not in favor, these impulses are altered by chemical and electrical reactions and sent to other neurons by neurotransmission. This keeps on happening until the brain achieves perfection.

In a similar way, we will work on an artificial neuron.
Matrices and their manipulation are a basic requirement for this post. To keep things simple, we will work with the dot product, vectors, and broadcasting in Python.

Dot Product
  If A is a C × D matrix and B is a D × E matrix, then the dot product of A and B is a C × E matrix. The dot product reduces our computation time whenever we have quite a lot of equations. Note that the number of columns in the first matrix should always be equal to the number of rows in the second matrix. In Python, we will use the dot() function from the NumPy package.

A typical example describing the dot() function in the NumPy package

   import numpy as np

   # two matrices: a has shape (2, 2), b has shape (2, 3)
   a = np.array([[7, 6], [2, 5]])
   b = np.array([[10, 6, 70], [18, 9, 11]])

   # dot product: (2, 2) . (2, 3) -> (2, 3)
   c = np.dot(a, b)

   print("Shape of A ", a.shape)
   print("Shape of B ", b.shape)
   print("Shape of C ", c.shape)
   print("The dot product of A and B is : ", c)

And the result is:

   Shape of A  (2, 2)
   Shape of B  (2, 3)
   Shape of C  (2, 3)
   The dot product of A and B is :  [[178  96 556]
    [110  57 195]]

Broadcasting
   Broadcasting in Python helps us perform element-wise calculations between a matrix and a vector or scalar. A scalar is just a single number, whereas a vector is a rank-1 array. The advantage of the NumPy package is that it provides all these operations on matrices and vectors: reshaping, transposing, etc. A vector can be represented as a numpy.ndarray (n-dimensional array) object using NumPy's array() function. A column vector is of shape (A × 1) and a row vector is of shape (1 × B).

To reshape an array we use the syntax array_name.reshape(rows, columns), as shown in the sketch below.
To find the rank of a matrix we use the function numpy.linalg.matrix_rank().
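
To illustrate broadcasting, reshaping, and the rank function together, here is a small sketch (the matrix values are arbitrary):

   import numpy as np

   M = np.array([[1, 2, 3], [4, 5, 6]])   # a 2 x 3 matrix
   v = np.array([10, 20, 30])             # a vector of shape (3,)

   # broadcasting: v is stretched to match M and added to every row
   print(M + v)          # [[11 22 33] [14 25 36]]

   # scalar broadcasting: every element is multiplied by 2
   print(M * 2)

   # reshape the vector into a (3, 1) column vector
   col = v.reshape(3, 1)
   print(col.shape)      # (3, 1)

   # rank of the matrix: its two rows are linearly independent
   print(np.linalg.matrix_rank(M))        # 2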

These were the basics required to start a simple program or project on neural networks. In the next post, I'll deal with logistic regression with a single neuron, which builds on the basics I have described above.

Update: Click here to jump to my next post