
Deep Learning Techniques You Should Know in 2022

Over the years, Deep Learning has really taken off. This is because we have access to far more data and computational power, allowing us to produce more accurate outputs and build models more efficiently. Deep Learning has been applied across different sectors such as speech recognition, image recognition, online advertising, and more.

Deep Learning has recently outperformed humans on particular tasks, for example image recognition. The level of accuracy it achieves has made Deep Learning extremely popular, and everybody is figuring out ways to implement it into their business.

If you would like to know more about Deep Learning before knowing about the different techniques, check out this article I wrote: Deep Learning: How it works.

Deep Learning Techniques

There are various Deep Learning models which are used to solve complicated tasks.

Multilayer Perceptrons (MLPs)

A Multilayer Perceptron is a feedforward artificial neural network, where a set of inputs is fed into the network to generate a set of outputs. MLPs are made up of an input layer, one or more hidden layers, and an output layer, with each layer fully connected to the next.

How does MLP work?

  1. The MLP network feeds the data into the input layer.
  2. The layers of neurons are connected so that the signal passes through in one direction.
  3. The MLP computes the output from the input using the existing weights between the input layer and the hidden layer.
  4. Activation functions, such as sigmoid and tanh, are used to determine which nodes fire.
  5. Using the training dataset, the MLP learns the correlations and dependencies between the independent variables and the target variable (a minimal code sketch follows below).
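
To make this concrete, here is a minimal sketch of an MLP in Keras. The layer sizes, activations, and the synthetic tabular data are arbitrary choices for illustration, not a prescription:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic tabular data: 1,000 rows, 20 feature columns, binary target.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# Input layer -> two fully connected hidden layers -> output layer.
model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="tanh"),     # hidden layer, tanh activation
    layers.Dense(32, activation="sigmoid"),  # hidden layer, sigmoid activation
    layers.Dense(1, activation="sigmoid"),   # output layer for binary classification
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```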

Why do we need to know Multilayer Perceptrons?

  1. Adaptive learning: Multilayer Perceptrons have the ability to learn from data effectively and perform well.
  2. Very popular: Multilayer Perceptrons are a preferred technique for image identification, spam detection, and stock analysis.
  3. Accuracy: Multilayer Perceptrons make no assumptions about underlying probability distributions, in contrast to probability-based models.
  4. Decision making: Multilayer Perceptrons learn the required decision function through training.
  5. Universal approximation: a Multilayer Perceptron with one or more hidden layers, trained with backpropagation, has been proven able to approximate any continuous mathematical function, mapping attributes to outputs.

Convolutional Neural Network

A Convolutional Neural Network (CNN), also known as a ConvNet, is a feed-forward Neural Network. It is typically used in image recognition, processing pixel data to detect and classify objects in an image. The architecture is built to solve complex tasks while reducing the amount of preprocessing and data preparation required.

How do CNNs work?

CNNs consist of multiple layers:

1. Convolution Layer

This layer extracts features from the input image by transforming it. During the transformation, the image is convolved with a kernel: a small matrix that moves over the input data, also known as a convolution matrix or convolution mask.

2. Rectified Linear Unit (ReLU)

This is a non-linear activation function applied to the output of the convolution. It performs a non-linear transformation of the data, with the hope that the transformed features become linearly separable.
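
In code, ReLU is simply max(0, x) applied element-wise; a minimal NumPy sketch:

```python
import numpy as np

def relu(x):
    # ReLU keeps positive values and zeroes out negatives.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # [0.  0.  0.  1.5 3. ]
```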

3. Pooling Layer

This layer reduces the dimensions of the feature maps, which cuts down the number of parameters the model has to learn and the computational power used in the network.

After pooling, the two-dimensional pooled feature maps are converted into a single, long, continuous, linear vector via flattening.
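
As a rough illustration, here is 2x2 max pooling followed by flattening in NumPy (the 4x4 feature map is a made-up example):

```python
import numpy as np

def max_pool_2x2(feature_map):
    # Downsample a 2D feature map by keeping the max of each 2x2 window.
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)  # a made-up 4x4 feature map
pooled = max_pool_2x2(fmap)           # shape (2, 2): dimensions reduced 4x
flattened = pooled.flatten()          # one long linear vector for the dense layers
```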

4. Fully Connected Layer

A Fully Connected Layer is made up of a series of connected layers that link every neuron in one layer to every neuron in another layer.

It is formed when the flattened vector from the pooling stage is fed in as input, and it is used to classify and identify the images.
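
Putting the four layer types together, here is a minimal Keras sketch of a small image classifier. The 28x28 grayscale input and 10 output classes are assumptions (an MNIST-style setup), not requirements:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                     # 28x28 grayscale image (assumed)
    layers.Conv2D(32, kernel_size=3, activation="relu"), # convolution layer + ReLU
    layers.MaxPooling2D(pool_size=2),                    # pooling layer shrinks the feature maps
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),                                    # flatten pooled maps into one vector
    layers.Dense(10, activation="softmax"),              # fully connected classification layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```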


Why do we need to know Convolutional Neural Networks?

  1. Feature learning: Convolutional Neural Networks automatically learn which features matter, without any human supervision.
  2. Computationally effective: Convolutional Neural Networks use convolution, pooling, parameter sharing, and dimensionality reduction, making the models easy and quick to deploy.
  3. Accuracy: Convolutional Neural Networks are powerful and efficient models that have outperformed humans on particular tasks.

Recurrent Neural Networks (RNNs)

A Recurrent Neural Network is designed for time-series data or data that involves sequences. An RNN takes the previous state's knowledge and uses it as an input value for the current prediction.

An RNN can therefore memorize previous inputs using its internal memory. RNNs are used for time-series analysis, handwriting recognition, Natural Language Processing, and more.
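
Here is a minimal Keras sketch of a recurrent layer applied to sequence data; the sequence length of 30 steps and 8 features per step are arbitrary assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sequences of 30 time steps, each with 8 features (arbitrary sizes).
model = keras.Sequential([
    layers.Input(shape=(30, 8)),
    layers.SimpleRNN(32),  # the hidden state carries knowledge between time steps
    layers.Dense(1),       # e.g. predict the next value of a time series
])
model.compile(optimizer="adam", loss="mse")
```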

One well-known example of an RNN is the Long Short Term Memory network.

Long Short Term Memory Networks (LSTMs)

A Long Short Term Memory Network is a type of Recurrent Neural Network that can learn and memorize long-term dependencies. Its default behavior, and the aim of an LSTM, is to remember past information for long periods.

How Do LSTMs Work?

An LSTM uses a series of ‘gates’ that control how information is processed: how data comes in, how it is stored, and how it leaves the network.

An LSTM has three gates:

  1. Forget Gate — this is where the LSTM down-weights (forgets) irrelevant parts of the previous state.
  2. Input Gate — this decides whether new information should be added to the cell state, the network’s long-term memory, using the previous hidden state and the new input data.
  3. Output Gate — this decides the new hidden state, using the newly updated cell state, the previous hidden state, and the new input data (see the NumPy sketch after this list).
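
To make the gates concrete, here is a single LSTM cell step sketched in NumPy. The weights are random placeholders; a trained network would learn them:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    # All gates look at both the previous hidden state and the new input.
    z = np.concatenate([h_prev, x])
    f = sigmoid(W["f"] @ z + b["f"])  # forget gate: down-weights irrelevant past state
    i = sigmoid(W["i"] @ z + b["i"])  # input gate: decides what new info to store
    g = np.tanh(W["g"] @ z + b["g"])  # candidate values for the cell state
    c = f * c_prev + i * g            # updated cell state (long-term memory)
    o = sigmoid(W["o"] @ z + b["o"])  # output gate: decides the new hidden state
    h = o * np.tanh(c)
    return h, c

hidden, inputs = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(hidden, hidden + inputs)) for k in "figo"}
b = {k: np.zeros(hidden) for k in "figo"}
h, c = lstm_step(rng.normal(size=inputs), np.zeros(hidden), np.zeros(hidden), W, b)
```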

Why do we need to know Long Short Term Memory Networks?

  1. Memory: Long Short Term Memory Networks’ ability to learn and memorize long-term dependencies is highly beneficial and improves the overall performance of the model.
  2. Sequencing: Long Short Term Memory Networks are very popular in Natural Language Processing because of how they handle sequences. If you train the model on a piece of text, it can generate new sentences that mimic the style of that text.


Generative Adversarial Networks (GANs)

Generative Adversarial Networks use two neural networks which compete with one another, hence the “adversarial” in the name.

The two neural networks used to build a GAN are called ‘the Generator’ and ‘the Discriminator’. The Generator learns to generate fake data, while the Discriminator learns to tell that fake data apart from real samples. The competition between them drives the accuracy of the model’s outputs.

How Do GANs work?

  1. During the initial training phase, the Generator learns to produce fake data in the network.
  2. The Discriminator learns to distinguish between the real sample data and the fake data generated by the Generator.
  3. The GAN then feeds these results back to the Generator and the Discriminator so both models keep updating (see the sketch below).
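
A heavily simplified sketch of that loop in Keras: the network sizes and the stand-in 2-D “real data” distribution are arbitrary assumptions, used only to show the alternating updates.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 8

# The Generator maps random noise to fake samples (here, 2-D points).
generator = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(2),
])

# The Discriminator outputs the probability that a sample is real.
discriminator = keras.Sequential([
    layers.Input(shape=(2,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: train the Generator to fool the (frozen) Discriminator.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

for step in range(100):
    # 1. The Generator produces fake data from noise.
    noise = np.random.normal(size=(32, latent_dim))
    fake = generator.predict(noise, verbose=0)
    real = np.random.normal(loc=3.0, size=(32, 2))  # stand-in "real" distribution

    # 2. The Discriminator learns to separate real from fake.
    discriminator.train_on_batch(np.concatenate([real, fake]),
                                 np.concatenate([np.ones(32), np.zeros(32)]))

    # 3. The Generator is updated to make the Discriminator label fakes as real.
    gan.train_on_batch(noise, np.ones(32))
```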

Why do we need to know Generative Adversarial Networks?

  1. No data labelling: Generative Adversarial Networks are unsupervised, so no labeled data is needed to train them, which heavily reduces costs.
  2. Sharp images: Generative Adversarial Networks currently produce some of the sharpest images of any generative technique.
  3. Backpropagation: Generative Adversarial Networks can be trained using only backpropagation.

Restricted Boltzmann Machines (RBMs)

A Restricted Boltzmann Machine is a generative stochastic neural network in which the nodes make binary decisions with some bias. Invented by Paul Smolensky and popularized by Geoffrey Hinton, it is generally used for dimensionality reduction, classification, regression, feature learning, and topic modeling.

An RBM uses two layers:

  • Visible units
  • Hidden units

Each visible and hidden unit has its own bias. Every visible unit is connected to every hidden unit, but there are no connections within a layer (this is the ‘restricted’ part), and there are no separate output nodes.

How Do RBMs Work?

An RBM network has two phases: the forward pass and the backward pass.

  1. The inputs are fed into the RBM, which translates them into a set of numbers. This is the forward pass phase, which encodes the inputs.
  2. Every input is combined with an individual weight and one overall bias.
  3. The network then passes the output to the hidden layer.
  4. During the backward pass phase, the set of numbers from the forward pass is translated back to form the reconstructed inputs.
  5. Using activation functions, individual weights, and the overall bias, the RBM passes this output to the visible layer for reconstruction.
  6. At the visible layer, the RBM compares the reconstructed input with the original input (a NumPy sketch follows below).
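
Here is a minimal NumPy sketch of one forward/backward (reconstruction) pass with a single contrastive-divergence weight update. The layer sizes and learning rate are arbitrary assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)  # one bias per unit

v0 = rng.integers(0, 2, size=n_visible).astype(float)  # a binary input sample

# Forward pass: encode the visible units into hidden activations.
h_prob = sigmoid(v0 @ W + b_h)
h0 = (rng.random(n_hidden) < h_prob).astype(float)     # binary hidden decisions

# Backward pass: reconstruct the visible units from the hidden units.
v_recon = sigmoid(h0 @ W.T + b_v)
h_recon = sigmoid(v_recon @ W + b_h)

# Contrastive divergence (CD-1): nudge weights so reconstructions match inputs.
W += lr * (np.outer(v0, h_prob) - np.outer(v_recon, h_recon))
b_v += lr * (v0 - v_recon)
b_h += lr * (h_prob - h_recon)

print("reconstruction error:", np.mean((v0 - v_recon) ** 2))
```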

Why do we need to know Restricted Boltzmann Machines?

  1. Different uses: Restricted Boltzmann Machines can be used for classification, regression, topic modeling, and feature learning.
  2. SMOTE: to handle imbalanced data, Restricted Boltzmann Machines can be combined with SMOTE, which selects examples that are close together in feature space and draws a line between them, producing a new synthetic sample along that line.
  3. Gibbs sampling: Restricted Boltzmann Machines can estimate missing values using Gibbs sampling.
  4. Feature extractor: Restricted Boltzmann Machines can transform raw data into hidden units, helping to address the problem of unstructured data.

When to use these techniques?

Multilayer Perceptrons (MLPs)

  • When your dataset is in a tabular format consisting of rows and columns, typically CSV files.
  • For both classification and regression tasks where a set of ground-truth values is given as the input.

Convolutional Neural Network

  • This technique works very well with image datasets. An example is OCR document analysis, which recognizes text within a digital image.
  • Ideally, the input data is a 2-dimensional field, though it can also be converted into 1-dimensional form to make processing faster.
  • This technique is also a good fit when the model requires high complexity in calculating the output.

Recurrent Neural Networks

There are 4 different ways that you can use Recurrent Neural Networks. These are:

  1. One to one: a single input that produces a single output. An example of this is Image Classification
  2. One to many: a single input that produces a sequence of outputs. An example of this is Image captioning, where a variety of words are detected from a single image
  3. Many to one: a sequence of inputs that produces a single output. An example of this is Sentiment Analysis
  4. Many to many: a sequence of inputs that produces a sequence of outputs. An example of this is Video classification, where you split the video into frames and label each frame separately

Generative Adversarial Networks

Generative Adversarial Networks are widely used with images and other forms of media, for example to create or identify deepfakes. You can use them for:

  • Image inpainting — you can do this by restoring missing parts of images.
  • Image super-resolution — you can do this by upscaling low-resolution images to high resolution.
  • Generating data across modalities, for example creating images from text descriptions.

Restricted Boltzmann Machines

  • Because a Boltzmann Machine learns to model the regularities in its inputs, this technique is a good choice when monitoring a system for unusual behavior.
  • It is efficient when you are building a binary recommendation system.
  • It is also used when working with a very specific, domain-focused dataset.

Conclusion

Deep Learning is still evolving and has become very popular over the years. We can say that more and more people and businesses will incorporate Deep Learning into their methodologies.

There are many different techniques you can use for Deep Learning. Each of them is used for specific tasks, with certain processes and limitations.

If you are interested in becoming a Data Scientist or a Machine Learning Engineer, learning more about Deep Learning should be a part of your journey. Here are a few book recommendations:

  1. Deep Learning with Python by Francois Chollet (for beginner and intermediate Python programmers)
  2. Neural Networks and Deep Learning: A Textbook by Charu C. Aggarwal (explores classical and modern models used in deep learning)
  3. Deep Learning From Scratch: Building with Python from First Principles by Seth Weidman (for beginner and intermediate Python programmers)
Nisha Arya Ahmed
