Build a Neural Network Classifier in 5 minutes


Hola Amigos,

I faced a lot of issues while building my own neural network classifier. Step-by-step explanations are hard to find because most tutorials assume you already know roughly 30% of the subject, but what about a beginner? Let's face it: the available examples are very difficult to understand, especially for new fledglings. So I decided to follow a different way of learning, and that is reverse engineering. Take one example, crack it down, and then use that example as a reference to crack every other example. I did a similar thing a long time ago to learn to program, and I did it again to make my own classifier that can identify a man and a woman.

To learn about CNNs, click here.

The blueprint of a neural network classifier is as follows:


  1. Specify a directory of your images for training.
  2. Specify a directory of your images for validation.
  3. Make a convolutional neural network with input dimensions matching the image dimensions.
  4. Add two hidden layers. Actually, even 1 will work to some extent.
  5. Convert the image input to a format readable by the neural network.
  6. Convert the validation input to a format readable by the neural network.
  7. Set a learning rate, the number of epochs, and the steps per epoch.
  8. Save your model and retrain it on different sets of examples and datasets.
Ok enough said. 

I am using Keras to build this classifier. Keras is damn easy, believe it. I am pretty sure you have heard about the above steps, but when the coding part comes, our minds start cracking: what is this? "Google it." What is that? "Google it."

I am using Spyder 3.x from Anaconda as my coding platform. Feel free to choose your own.

Do open the Anaconda Prompt first. Then type

pip install keras



The first step is to import the libraries which shouldn't be hard.

from keras.models import Sequential

We need Sequential to build the neural network.

from keras.preprocessing.image import ImageDataGenerator

ImageDataGenerator converts our directory data into a format the Keras neural network can read. It is like when you eat Doritos: your stomach breaks them down with hydrochloric acid in order to process them. The intestines can only extract energy from that broken-down food, not directly from the Doritos. I hope it is clear now :)

from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Activation

Next, we need Conv2D to convert the image into arrays of data, and MaxPooling2D to downsample the converted data. So the question is, why do we need MaxPooling? How fast can you solve a 10-variable equation? It will certainly take a long time. Similarly, the greater the number of parameters, the longer it takes to process them. I hope this makes it clear that the number of parameters is directly proportional to the complexity.

Then we need a dense layer to connect the layers. "Dense" and "fully connected" are different names for the same thing; this is what I realized. A dense layer connects every input to every output, joining them with a set of weights.

Click here to know about dense & fully connected layers.


Now specify the directory where you have stored the images for training and also the location for validation. Remember you just have to specify the directory containing the folders of your classes.

I had two folders, namely men and women. They were located in the train folder, and the train folder was located in the gathered_data folder. So just provide the location of the parent directory holding your classes.

Similarly, specify your validation data directory. The images for validation are kept separate from the training data in order to check the accuracy of our NN.
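As a sketch, the two directory variables could look like this. The folder names follow the layout described above, but the exact paths are assumptions on my part:

```python
# Hypothetical paths, assuming the layout described above:
# gathered_data/train/men, gathered_data/train/women,
# gathered_data/validation/men, gathered_data/validation/women
train_dir = 'gathered_data/train'
validation_dir = 'gathered_data/validation'
```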

Now specify your epochs and batch size. One epoch is one forward and one backward pass over every training sample. The batch size is the number of samples sent to the network at a time.

Now specify the height and width of the image. You can relax here, as Keras helps us resize every image to the required size. Basically, smaller images have fewer parameters, thereby reducing the complexity of the neural network.
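For example, the hyperparameters could be declared like this. The specific values here are illustrative assumptions, not taken from the original code:

```python
# Illustrative hyperparameters (assumed values, not the author's exact ones)
epochs = 10                        # full passes over the training set
batch_size = 16                    # samples fed to the network at a time
img_width, img_height = 150, 150   # Keras will resize every image to this
```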

Now we will build our neural network. The line model = Sequential() creates a Sequential model, to which we can add convolution and pooling layers one by one.

model.add(Conv2D(32, (3, 3), input_shape = (width, height, 3)))

model.add adds a layer to the network. Conv2D is the convolution layer method, so model.add(Conv2D(...)) adds a convolution layer. In Conv2D(32, (3, 3)), 32 is the number of output filters and (3, 3) is the kernel size of each filter (the stride defaults to 1 x 1).

Click here to know about filters.

input_shape is the shape of the image that we are providing as input. Do remember that only this first convolution layer receives the image directly; every other layer takes its input from the output of the previous layer. In (width, height, 3) we know about width and height, but what about the 3? It is the number of channels. Every image here is in the RGB model, so we have three colors, and hence the third dimension is 3.

Keras supports two data formats. The first is channels_first, which means the input_shape will be (3, width, height).
The second is channels_last, which means the input_shape will be (width, height, 3). The names are self-explanatory. We are following the channels_last format, which is the Keras default.
Next, we will choose our activation function. We all started with sigmoid, but we can choose other functions provided by Keras too. I am choosing relu as my activation function. So after the weighted summation of the inputs, the result is passed through the activation function. After the activation, we reduce the number of parameters by pooling; I have used max pooling here. Average pooling is also available, but max pooling generally gives better accuracy here than average pooling.


The size of the pooling window is 2 x 2.
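To see how pooling shrinks the data, here is a quick arithmetic sketch, assuming a 150 x 150 input, a 3 x 3 convolution with 'valid' padding and stride 1, and a 2 x 2 max pooling window:

```python
def conv_output_size(n, kernel=3, stride=1):
    # 'valid' padding: output = floor((n - kernel) / stride) + 1
    return (n - kernel) // stride + 1

def pool_output_size(n, pool=2):
    # non-overlapping 2 x 2 pooling halves each spatial dimension
    return n // pool

size = 150
size = conv_output_size(size)   # 148 after the 3 x 3 convolution
size = pool_output_size(size)   # 74 after 2 x 2 max pooling
print(size)  # prints 74
```

One round of pooling already cuts each spatial dimension in half, which is exactly the parameter reduction discussed above.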

Click here to know about Pooling

Similarly, I have added a third convolution layer too. A quick overview looks like this


Image Credits - Mathworks

So it goes like this


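In case the code screenshot does not render here, the model construction can be sketched roughly like this. This is my reconstruction from the description above, not the author's exact code; the layer sizes and the final dense layer width are assumptions:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Activation

width, height = 150, 150  # assumed image dimensions

model = Sequential()
# First convolution block: 32 filters, each with a 3 x 3 kernel
model.add(Conv2D(32, (3, 3), input_shape=(width, height, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Second convolution block
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Third convolution block
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Flatten and classify: one sigmoid output for man vs woman
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# Package the model with the loss, optimizer, and metrics
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```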
The last line compiles the neural network model into a single package with extra parameters: the loss, the optimizer, and the metrics.
Summary of what we did till now -

  • We imported the libraries of Keras for convolution, sequential and image generator
  • We specified the whereabouts, i.e. the directories, of our images for training as well as validation
  • We made our neural network design.
Coming to the term loss = 'binary_crossentropy'. Binary cross-entropy is the loss function. Let y be the actual answer (the true label, 0 or 1) and p the predicted probability of class 1. The loss is given by the formula
Loss = -[y * log(p) + (1 - y) * log(1 - p)]
Clearly, the lower the loss, the better our network gets. So in order to minimize the loss, training adjusts the network's weights step by step. Smaller values of the loss function point to a better network fit and vice versa. Binary cross-entropy is modeled on a variable which can have only two values, 0 and 1. So if the probability of 1 is 0.4, then the probability of 0 will certainly be 0.6. The binary cross-entropy loss follows a graph like the one below.

Fig - Binary Cross-Entropy Loss Function

We are using binary cross-entropy here because we have two classes. The result has to be either a man or a woman, so the probability flows either towards man or towards woman. One must remember that binary cross-entropy is a special case of categorical cross-entropy: when you have exactly two classes, categorical cross-entropy reduces to the binary form.
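As a quick numerical sketch of this loss, using the standard binary cross-entropy formula with y the true label and p the predicted probability of class 1:

```python
import math

def binary_crossentropy(y, p):
    # standard formula: -[y*log(p) + (1 - y)*log(1 - p)]
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident correct prediction gives a small loss...
print(binary_crossentropy(1, 0.9))   # about 0.105
# ...while a confident wrong prediction gives a large loss.
print(binary_crossentropy(1, 0.1))   # about 2.303
```

This is why the graph above blows up as the prediction drifts towards the wrong class.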

The choice of optimizer depends on the depth of the neural network. For deep networks, Adam or RMSprop (root mean square propagation) is commonly used. Since we have 2 hidden layers with a good number of neurons, I will stay with Adam.

Coming to the final part of the code.


inp_data = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2)

ImageDataGenerator will rescale the image pixel values from [0, 255] to [0, 1]. shear_range shears the image by up to a factor of 0.2, and zoom_range zooms it by up to a factor of 0.2, in order to provide varied data and prevent overfitting.
Similarly, the validation data is also rescaled, and its images are randomly flipped horizontally.
You can use various other augmentation features too.

Now, to process the images from the directory and pass them through the data generator, we use flow_from_directory. It grabs the images from the directory and applies the inp_data transformations to them.
Look carefully at line 4 of the above image: the flow_from_directory call requires the input image directory, the target size to resize the images to, the batch size, and the class mode. After conversion, the input images are stored in input_data. Similarly, valid_data stores the validation data.
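Putting the generator steps together, a sketch could look like this. The directory names and target size are assumptions, and the flow_from_directory calls are guarded with a directory check so the snippet runs even where the image folders do not exist:

```python
import os
from keras.preprocessing.image import ImageDataGenerator

train_dir = 'gathered_data/train'            # assumed paths
validation_dir = 'gathered_data/validation'

# Training generator: rescale plus shear/zoom augmentation
inp_data = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2)
# Validation generator: rescale and random horizontal flips
val_data = ImageDataGenerator(rescale=1./255, horizontal_flip=True)

# Only call flow_from_directory if the folders actually exist
if os.path.isdir(train_dir) and os.path.isdir(validation_dir):
    input_data = inp_data.flow_from_directory(
        train_dir, target_size=(150, 150), batch_size=16, class_mode='binary')
    valid_data = val_data.flow_from_directory(
        validation_dir, target_size=(150, 150), batch_size=16, class_mode='binary')
```

class_mode='binary' matches the two-class (man/woman) setup and the binary cross-entropy loss.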

Finally, the CNN model is trained by passing the training data, the number of epochs, the validation data, and the steps per epoch.
steps_per_epoch equals the number of training samples divided by the batch size. Similarly, validation_steps equals the number of validation samples divided by the batch size.
The batch size is the number of samples fed into the NN at once.
One forward propagation and one backward propagation over every training example equals one epoch.
Iterations are the number of batches required to complete 1 epoch.

Example: take 100 images with a batch size of 50. It will take 2 iterations to get through all 100 images, which equals 1 epoch.
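The same arithmetic can be sketched in a couple of lines (using math.ceil so a final partial batch still counts as one iteration):

```python
import math

num_train_samples = 100   # assumed counts, matching the example above
batch_size = 50

# iterations per epoch = samples / batch size, rounded up
steps_per_epoch = math.ceil(num_train_samples / batch_size)
print(steps_per_epoch)  # prints 2: two iterations make up one epoch
```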