In my previous post we talked about AlexNet, which was a revolutionary advancement in CNNs and became the best model for image classification. Then VGG came along and changed the whole scenario.

The full name of VGG is the Visual Geometry Group. It was proposed by Karen Simonyan and Andrew Zisserman of the Visual Geometry Group at the University of Oxford in the year 2014. The original purpose of VGG’s research on the depth of convolutional networks was to understand how depth affects the accuracy of large-scale image classification and recognition. …

What is an optimizer?

Optimizers are algorithms or methods used to minimize an error function (loss function) or to maximize the efficiency of production. Optimizers are mathematical functions that depend on the model’s learnable parameters, i.e. weights and biases. Optimizers tell us how to change the weights and learning rate of a neural network in order to reduce the losses.
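As a rough, generic illustration (my own sketch, not taken from the post): most of the optimizers discussed below are variations of the same update rule, in which a weight $w$ is nudged against the gradient of the loss $L$ using a learning rate $\eta$:

$$w_{t+1} = w_t - \eta \,\frac{\partial L}{\partial w_t}$$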

This post will walk you through optimizers and some of the popular approaches.

Types of optimizers

Let’s learn about the different types of optimizers and how exactly they work to minimize the loss function.

Gradient Descent

Gradient descent is an optimization algorithm that, applied to a convex function, tweaks its parameters iteratively to minimize a given…
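To make this concrete, here is a minimal gradient-descent sketch in plain Python; the loss function, starting point, and learning rate are illustrative choices of mine, not values from the post:

```python
# Minimal gradient descent sketch on a simple convex function:
# f(w) = (w - 3)**2, whose gradient is f'(w) = 2 * (w - 3).

def grad(w):
    """Gradient of the example loss f(w) = (w - 3)**2."""
    return 2 * (w - 3)

w = 0.0              # initial guess for the parameter
learning_rate = 0.1  # step size (eta)

for step in range(100):
    w = w - learning_rate * grad(w)  # move against the gradient

print(w)  # converges towards the minimum at w = 3
```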

AlexNet was designed by Geoffrey Hinton’s student Alex Krizhevsky, together with Ilya Sutskever and Hinton himself; it won the 2012 ImageNet competition and was the first architecture after LeNet to bring a revolution to the deep learning industry. It achieved a top-5 error of 15.3% in the ImageNet Challenge, 10.8 percentage points lower than that of the runner-up.

AlexNet Architecture:

AlexNet consists of eight learnable layers: five convolutional layers and three fully connected layers, the last of which feeds a softmax output. In between sit three max-pooling layers and two normalization layers, which are not counted among the eight.

AlexNet architecture

AlexNet uses large kernels: the first convolutional layer applies 96 kernels of size 11×11, which extract the important low-level features from the image. Then the next two…
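If you prefer to see the layer stack in code, here is a rough AlexNet-style sketch in PyTorch; it is my own illustration of the commonly cited configuration (for 227×227 RGB inputs), not the original implementation:

```python
import torch.nn as nn

# Rough AlexNet-style sketch: 5 conv layers + 3 fully connected layers,
# with max-pooling after conv1, conv2 and conv5. Sizes assume 3x227x227 inputs.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),     # conv1: 96 large 11x11 kernels
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),   # conv2
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),  # conv3
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),  # conv4
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),  # conv5
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),                   # fc6
    nn.Linear(4096, 4096), nn.ReLU(),                          # fc7
    nn.Linear(4096, 1000),                                     # fc8 -> softmax over 1000 classes
)
```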

LeNet was the first architecture of the modern CNN era, introduced in 1998 by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner in their paper, Gradient-Based Learning Applied to Document Recognition. It is a very efficient convolutional neural network for handwritten character recognition.

Architecture of LeNet:

It has a seven-layer architecture: three convolutional layers (C1, C3, and C5), two sub-sampling (pooling) layers (S2 and S4), and one fully connected layer (F6), followed by the output layer.
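Below is a rough LeNet-5-style sketch in PyTorch for the classic 32×32 grayscale input; it is my own illustration of the layer ordering above, not the authors’ original code:

```python
import torch.nn as nn

# LeNet-5-style sketch for 1x32x32 grayscale inputs:
# C1 -> S2 -> C3 -> S4 -> C5 -> F6 -> output layer.
lenet_like = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),     # C1: 6 feature maps, 28x28
    nn.AvgPool2d(kernel_size=2, stride=2),         # S2: sub-sampling to 14x14
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),    # C3: 16 feature maps, 10x10
    nn.AvgPool2d(kernel_size=2, stride=2),         # S4: sub-sampling to 5x5
    nn.Conv2d(16, 120, kernel_size=5), nn.Tanh(),  # C5: 120 feature maps, 1x1
    nn.Flatten(),
    nn.Linear(120, 84), nn.Tanh(),                 # F6: fully connected layer
    nn.Linear(84, 10),                             # output layer (e.g. 10 digit classes)
)
```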

