{"id":4581,"date":"2022-11-10T17:50:05","date_gmt":"2022-11-11T01:50:05","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=4581"},"modified":"2025-04-24T17:16:33","modified_gmt":"2025-04-24T17:16:33","slug":"deep-learning-techniques-you-should-know-in-2022","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/","title":{"rendered":"Deep Learning Techniques You Should Know in 2022"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"heartbeat.comet.ml\/deep-learning-techniques-you-should-know-in-2022-94f33e62d922\">\n\n\n\n<div class=\"ir is it iu iv\">\n<p id=\"87b1\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">Over the years, Deep Learning has really taken off. This is because we now have access to far more data and computational power, allowing us to produce more accurate outputs and build models more efficiently. Deep Learning is now applied in sectors such as speech recognition, image recognition, online advertising, and more.<\/p>\n<p id=\"9c02\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">Deep Learning has recently outperformed humans in solving particular tasks, for example, Image Recognition. 
The level of accuracy Deep Learning can achieve has made it hugely popular, and organizations everywhere are looking for ways to apply it to their business.<\/p>\n<p id=\"ddad\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">If you would like to know more about Deep Learning before diving into the different techniques, check out this article I wrote: Deep Learning: How it works.<\/p>\n<h1 id=\"31ec\" class=\"ks kt iy bm ku kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp ga\" data-selectable-paragraph=\"\">Deep Learning Techniques<\/h1>\n<p id=\"e6a5\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">There are various Deep Learning models which are used to solve complicated tasks.<\/p>\n<h2 id=\"7575\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Multilayer Perceptrons (MLPs)<\/h2>\n<p id=\"45b2\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">A Multilayer Perceptron is a feedforward artificial neural network, in which a set of inputs is fed into the network to generate a set of outputs. 
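As a rough illustration, the forward pass of a small MLP can be sketched in a few lines of NumPy; the layer sizes and random weights below are made up for the example, not taken from any real model.

```python
import numpy as np

def sigmoid(z):
    # Squashes pre-activations into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    # Input layer -> hidden layer: weighted sum plus bias, then activation.
    h = sigmoid(x @ W1 + b1)
    # Hidden layer -> output layer (fully connected).
    return sigmoid(h @ W2 + b2)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                   # one sample with 4 features
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # hypothetical hidden layer of 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # single output unit
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)  # (1, 1)
```

In practice the weights would be learned by backpropagation on a training set rather than drawn at random.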
MLPs are made up of an input layer, one or more hidden layers, and an output layer, with the layers fully connected to one another.<\/p>\n<h2 id=\"603f\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">How does MLP work?<\/h2>\n<ol class=\"\">\n<li id=\"e40e\" class=\"mj mk iy bm b jx lq kb lr kf ml kj mm kn mn kr mo mp mq mr ga\" data-selectable-paragraph=\"\">The MLP Network feeds the data into the input layer.<\/li>\n<li id=\"a6e6\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">The neuron layers are connected so that the signal passes through in one direction.<\/li>\n<li id=\"7d16\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">The MLP computes the input with existing weights between the input layer and the hidden layer.<\/li>\n<li id=\"40d9\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Activation functions, such as sigmoid and tanh, are used to determine which nodes fire.<\/li>\n<li id=\"66de\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Using the training dataset, the MLP learns the correlations and dependencies between the independent variables and the target variable.<\/li>\n<\/ol>\n<figure class=\"my mz na nb gx nc gl gm paragraph-image\">\n<div class=\"nd ne do nf ce ng\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce nh ni c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/700\/0*ae0C9EJAszvFQu__\" alt=\"\" width=\"700\" height=\"493\"><\/figure><div class=\"gl gm mx\" style=\"text-align: center;\"><picture><source srcset=\"https:\/\/miro.medium.com\/max\/640\/0*ae0C9EJAszvFQu__ 640w, https:\/\/miro.medium.com\/max\/720\/0*ae0C9EJAszvFQu__ 720w, 
https:\/\/miro.medium.com\/max\/750\/0*ae0C9EJAszvFQu__ 750w, https:\/\/miro.medium.com\/max\/786\/0*ae0C9EJAszvFQu__ 786w, https:\/\/miro.medium.com\/max\/828\/0*ae0C9EJAszvFQu__ 828w, https:\/\/miro.medium.com\/max\/1100\/0*ae0C9EJAszvFQu__ 1100w, https:\/\/miro.medium.com\/max\/1400\/0*ae0C9EJAszvFQu__ 1400w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\">Source: <\/picture><a class=\"au nm\" href=\"https:\/\/www.researchgate.net\/figure\/A-hypothetical-example-of-Multilayer-Perceptron-Network_fig4_303875065\" target=\"_blank\" rel=\"noopener ugc nofollow\">researchgate<\/a><\/div>\n<\/div>\n<\/figure>\n<p data-selectable-paragraph=\"\">\n<\/p><p id=\"40e6\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Why do we need to know Multilayer Perceptrons?<\/strong><\/p>\n<ol class=\"\">\n<li id=\"2ba8\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Adaptive learning<\/strong>: Multilayer Perceptrons have the ability to learn data effectively and perform well.<\/li>\n<li id=\"a378\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Very popular<\/strong>: Multilayer Perceptrons is a preferred technique for image identification, spam detection, and stock analysis.<\/li>\n<li id=\"763e\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn 
mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Accuracy<\/strong>: Multilayer Perceptrons do not make assumptions about the underlying probability distribution, unlike probability-based models.<\/li>\n<li id=\"c821\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Decision Making<\/strong>: Multilayer Perceptrons learn the required decision function through training.<\/li>\n<li id=\"8ad3\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Universal Approximation<\/strong>: With two or more layers trained by backpropagation, Multilayer Perceptrons have been proven able to approximate any mathematical function, mapping attributes to outputs.<\/li>\n<\/ol>\n<h2 id=\"0763\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Convolutional Neural Network<\/h2>\n<p id=\"046f\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Convolutional Neural Network (CNN)<\/strong>, also known as a ConvNet, is a feed-forward Neural Network. It is typically used in image recognition, processing pixel data to detect and classify objects in an image. The model is built to solve complex tasks with minimal preprocessing of the data.<\/p>\n<h2 id=\"3aa6\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">How do CNNs work?<\/h2>\n<p id=\"844c\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">CNNs consist of multiple layers:<\/p>\n<p>1. 
Convolution Layer<\/p>\n<p id=\"bcae\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">This layer extracts features from the input image by transforming it. During the transformation process, the image is convolved with a kernel, a small matrix that moves over the input data. The kernel is also known as a convolution matrix or convolution mask.<\/p>\n<p id=\"7502\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">2. Rectified Linear Unit (ReLU)<\/p>\n<p id=\"96cc\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">This is a non-linear activation function used in multi-layer neural networks. It applies a non-linear transformation to the data, with the aim that the transformed data becomes linearly separable.<\/p>\n<p id=\"8e03\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">3. Pooling Layer<\/p>\n<p id=\"1ba9\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">This layer is used to reduce the dimensions of the feature maps. 
It does this by reducing the number of parameters the model learns and the computational power used in the network.<\/p>\n<p id=\"f1e1\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">It takes a two-dimensional array from the pooled feature map and converts it into a single, long, continuous, linear vector via flattening.<\/p>\n<p id=\"9f4f\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">4. Fully Connected Layer<\/p>\n<p id=\"d6a8\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">A Fully Connected Layer is made up of a series of connected layers that link every neuron in one layer to every neuron in another layer.<\/p>\n<p id=\"90fe\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">It is formed when the flattened matrix from the pooling layer is fed as an input, which is used to classify and identify the images.<\/p>\n<figure class=\"my mz na nb gx nc gl gm paragraph-image\">\n<div class=\"nd ne do nf ce ng\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce nh ni c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/700\/0*6X87TlWGV-uste5y\" alt=\"\" width=\"700\" height=\"252\"><\/figure><div class=\"gl gm nr\" style=\"text-align: center;\"><picture><source srcset=\"https:\/\/miro.medium.com\/max\/640\/0*6X87TlWGV-uste5y 640w, https:\/\/miro.medium.com\/max\/720\/0*6X87TlWGV-uste5y 720w, https:\/\/miro.medium.com\/max\/750\/0*6X87TlWGV-uste5y 750w, https:\/\/miro.medium.com\/max\/786\/0*6X87TlWGV-uste5y 786w, https:\/\/miro.medium.com\/max\/828\/0*6X87TlWGV-uste5y 828w, 
https:\/\/miro.medium.com\/max\/1100\/0*6X87TlWGV-uste5y 1100w, https:\/\/miro.medium.com\/max\/1400\/0*6X87TlWGV-uste5y 1400w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\">Source: <\/picture><a class=\"au nm\" href=\"https:\/\/www.mdpi.com\/1424-8220\/19\/22\/4933\" target=\"_blank\" rel=\"noopener ugc nofollow\">MDPI<\/a><\/div>\n<\/div>\n<\/figure>\n<p data-selectable-paragraph=\"\">\n<\/p><p id=\"ba82\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Why do we need to know Convolutional Neural Networks?<\/strong><\/p>\n<ol class=\"\">\n<li id=\"fd22\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Feature Learning<\/strong>: Convolutional Neural Networks can automatically detect the importance of features without any human supervision, by learning the different features.<\/li>\n<li id=\"d2d5\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Computationally effective<\/strong>: Convolutional Neural Networks use unique convolution, pooling, parameter sharing, and dimensionality reduction, making the models easy and quick to deploy.<\/li>\n<li id=\"e435\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Accuracy<\/strong>: Convolutional Neural 
Networks are powerful and efficient models that have outperformed humans on particular tasks.<\/li>\n<\/ol>\n<h2 id=\"fc86\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Recurrent Neural Networks (RNNs)<\/h2>\n<p id=\"d955\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">A Recurrent Neural Network is designed for time-series data, or any data that involves sequences. An RNN takes knowledge from the previous state and uses it as an input for the current prediction.<\/p>\n<p id=\"c360\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">Therefore, an RNN can memorize previous inputs using its internal memory. RNNs are used for time-series analysis, handwriting recognition, Natural Language Processing, and more.<\/p>\n<p id=\"86c8\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">An example of an RNN is the Long Short Term Memory network.<\/p>\n<h2 id=\"1149\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Long Short Term Memory Networks (LSTMs)<\/h2>\n<p id=\"59bc\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Long Short Term Memory Networks&nbsp;<\/strong>are a type of Recurrent Neural Network which can learn and memorize long-term dependencies. 
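A single step of an LSTM cell can be sketched in NumPy as follows; the weight layout, layer sizes, and random values are assumptions made for the example, not a full implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # One time step of an LSTM cell. W maps [h_prev, x] to the stacked
    # pre-activations of the three gates plus the candidate cell values.
    n = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    f = sigmoid(z[0:n])          # forget gate: down-weights irrelevant memory
    i = sigmoid(z[n:2 * n])      # input gate: decides what to add to the cell state
    o = sigmoid(z[2 * n:3 * n])  # output gate: decides the new hidden state
    g = np.tanh(z[3 * n:4 * n])  # candidate values for the cell state
    c = f * c_prev + i * g       # updated long-term memory
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_hidden, n_in = 3, 2
W = rng.normal(size=(4 * n_hidden, n_hidden + n_in))
b = np.zeros(4 * n_hidden)
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):  # run a short input sequence
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (3,)
```

Because the forget and input gates are multiplicative, the cell state `c` can carry information across many steps, which is what lets LSTMs hold on to long-term dependencies.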
Remembering past information for long periods is the default behavior and the aim of an LSTM.<\/p>\n<h2 id=\"ab69\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">How Do LSTMs Work?<\/h2>\n<p id=\"f6f7\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">An LSTM uses a series of \u2018gates\u2019 that control how information is processed: how data comes in, how it is stored, and how it leaves the network.<\/p>\n<p id=\"635c\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">An LSTM has three gates:<\/p>\n<ol class=\"\">\n<li id=\"ddb9\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Forget Gate \u2014 This is where the LSTM down-weights and forgets irrelevant parts of the previous state.<\/li>\n<li id=\"a413\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Input Gate \u2014 This is where the network determines whether new information should be added to the cell state, the network\u2019s long-term memory. This is done using the previous hidden state and the new input data.<\/li>\n<li id=\"4e16\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Output Gate \u2014 This gate decides the new hidden state. 
This is done by using the newly updated cell state, the previous hidden state, and the new input data.<\/li>\n<\/ol>\n<figure class=\"my mz na nb gx nc gl gm paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce nh ni c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/691\/0*zwKkgRA161Z3aPer\" alt=\"\" width=\"691\" height=\"348\"><\/figure><div class=\"gl gm ns\" style=\"text-align: center;\"><picture><source srcset=\"https:\/\/miro.medium.com\/max\/640\/0*zwKkgRA161Z3aPer 640w, https:\/\/miro.medium.com\/max\/720\/0*zwKkgRA161Z3aPer 720w, https:\/\/miro.medium.com\/max\/750\/0*zwKkgRA161Z3aPer 750w, https:\/\/miro.medium.com\/max\/786\/0*zwKkgRA161Z3aPer 786w, https:\/\/miro.medium.com\/max\/828\/0*zwKkgRA161Z3aPer 828w, https:\/\/miro.medium.com\/max\/1100\/0*zwKkgRA161Z3aPer 1100w, https:\/\/miro.medium.com\/max\/1382\/0*zwKkgRA161Z3aPer 1382w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 691px\" data-testid=\"og\">Source: <\/picture><a class=\"au nm\" href=\"https:\/\/d2l.ai\/chapter_recurrent-modern\/lstm.html\" target=\"_blank\" rel=\"noopener ugc nofollow\">d2l<\/a><\/div>\n<\/figure>\n<p data-selectable-paragraph=\"\">\n<\/p><p id=\"d5d2\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Why do we need to know Long Short Term Memory Networks?<\/strong><\/p>\n<ol class=\"\">\n<li id=\"2883\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq 
mr ga\" data-selectable-paragraph=\"\">Memory: Long Short Term Memory Networks&#8217; ability to learn and memorize long-term dependencies is highly beneficial. This improves the overall performance of the model.<\/li>\n<li id=\"6810\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Sequencing: Long Short Term Memory Networks are very popular with Natural Language Processing due to sequencing. If you train the model on a piece of text the model has the ability to generate new sentences, mimicking the style of the text.<\/li>\n<\/ol>\n<\/div>\n\n\n\n<div class=\"o dx nt nu id nv\" role=\"separator\"><\/div>\n\n\n\n<div class=\"ir is it iu iv\">\n<blockquote class=\"oa\"><p id=\"60c7\" class=\"ob oc iy bm od oe of og oh oi oj kr cn\" data-selectable-paragraph=\"\">Join 16,000 of your colleagues at&nbsp;<a class=\"au nm\" href=\"https:\/\/www.deeplearningweekly.com\/about\" target=\"_blank\" rel=\"noopener ugc nofollow\">Deep Learning Weekly<\/a>&nbsp;for the latest products, acquisitions, technologies, deep-dives and more.<\/p><\/blockquote>\n<\/div>\n\n\n\n<div class=\"o dx nt nu id nv\" role=\"separator\"><\/div>\n\n\n\n<div class=\"ir is it iu iv\">\n<h2 id=\"261a\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Generative Adversarial Networks (GANs)<\/h2>\n<p id=\"2e01\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Generative Adversarial Networks<\/strong>&nbsp;use two neural networks which compete with one another, hence the \u201cadversarial\u201d in the name.<\/p>\n<p id=\"f659\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">The two neural networks used to build a GAN are called \u2018the generator\u2019 and 
\u2018the discriminator\u2019. The Generator learns to produce fake data, whilst the Discriminator learns to tell that fake data apart from real samples. The competition between them drives both networks to improve.<\/p>\n<h2 id=\"5f31\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">How Do GANs work?<\/h2>\n<ol class=\"\">\n<li id=\"f7b9\" class=\"mj mk iy bm b jx lq kb lr kf ml kj mm kn mn kr mo mp mq mr ga\" data-selectable-paragraph=\"\">During the initial training phase, the Generator learns to generate fake data.<\/li>\n<li id=\"07ba\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">The Discriminator learns to distinguish the real sample data from the fake data generated by the Generator.<\/li>\n<li id=\"25f5\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">The GAN then feeds these results back to the Generator and the Discriminator, continuously updating the model.<\/li>\n<\/ol>\n<figure class=\"my mz na nb gx nc gl gm paragraph-image\">\n<div class=\"nd ne do nf ce ng\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce nh ni c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/700\/0*5Lbqgsz18qWLJMYM\" alt=\"\" width=\"700\" height=\"306\"><\/figure><div class=\"gl gm ok\" style=\"text-align: center;\"><picture><source srcset=\"https:\/\/miro.medium.com\/max\/640\/0*5Lbqgsz18qWLJMYM 640w, https:\/\/miro.medium.com\/max\/720\/0*5Lbqgsz18qWLJMYM 720w, https:\/\/miro.medium.com\/max\/750\/0*5Lbqgsz18qWLJMYM 750w, https:\/\/miro.medium.com\/max\/786\/0*5Lbqgsz18qWLJMYM 786w, https:\/\/miro.medium.com\/max\/828\/0*5Lbqgsz18qWLJMYM 828w, https:\/\/miro.medium.com\/max\/1100\/0*5Lbqgsz18qWLJMYM 1100w, https:\/\/miro.medium.com\/max\/1400\/0*5Lbqgsz18qWLJMYM 1400w\" 
sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\">Source: <\/picture><a class=\"au nm\" href=\"https:\/\/wiki.pathmind.com\/generative-adversarial-network-gan\" target=\"_blank\" rel=\"noopener ugc nofollow\">wiki.pathmind<\/a><\/div>\n<\/div>\n<\/figure>\n<p data-selectable-paragraph=\"\">\n<\/p><p id=\"4a19\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Why do we need to know Generative Adversarial Networks?<\/strong><\/p>\n<ol class=\"\">\n<li id=\"4e25\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\">No Data Labelling: Generative Adversarial Networks are unsupervised therefore no labeled data is needed in order to train them. 
This heavily reduces costs.<\/li>\n<li id=\"5d33\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Sharp Images: Generative Adversarial Networks currently produce some of the sharpest images of any generative technique.<\/li>\n<li id=\"818e\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Backpropagation: Generative Adversarial Networks can be trained using backpropagation alone.<\/li>\n<\/ol>\n<h2 id=\"f94d\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Restricted Boltzmann Machines (RBMs)<\/h2>\n<p id=\"a4a8\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">A Restricted Boltzmann Machine is a generative stochastic neural network in which the nodes make binary decisions with some bias. It was invented by Geoffrey Hinton and is generally used for dimensionality reduction, classification, regression, feature learning, and topic modeling.<\/p>\n<p id=\"01bd\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">RBMs use two layers:<\/p>\n<ul class=\"\">\n<li id=\"6a52\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr ol mp mq mr ga\" data-selectable-paragraph=\"\">Visible units<\/li>\n<li id=\"bfca\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">Hidden units<\/li>\n<\/ul>\n<p id=\"252c\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">Each visible and hidden unit has an associated bias. 
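A single forward (encoding) and backward (reconstruction) pass over these two layers can be sketched in NumPy; the layer sizes and random weights below are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 3))  # weights shared by both passes
b_v = np.zeros(6)                       # visible-unit biases
b_h = np.zeros(3)                       # hidden-unit biases

v = rng.integers(0, 2, size=6).astype(float)  # a binary visible vector

# Forward pass: encode the visible units into hidden-unit probabilities.
p_h = sigmoid(v @ W + b_h)
h = (rng.random(3) < p_h).astype(float)       # stochastic binary decision

# Backward pass: reconstruct the visible units from the hidden sample.
p_v = sigmoid(h @ W.T + b_v)
reconstruction_error = np.mean((v - p_v) ** 2)
print(p_h.shape, p_v.shape)  # (3,) (6,)
```

During training the weights would be adjusted (for example by contrastive divergence) so that the reconstruction error shrinks; here they are fixed random values purely for illustration.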
The visible units are connected to the hidden units, and RBMs do not have any output nodes.<\/p>\n<h2 id=\"abc3\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">How Do RBMs Work?<\/h2>\n<p id=\"53f6\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">RBM networks have two phases: a forward pass and a backward pass.<\/p>\n<ol class=\"\">\n<li id=\"db48\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\">The inputs are fed into the RBM, which translates them into a set of numbers. This is the forward pass phase, which encodes the inputs.<\/li>\n<li id=\"b423\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Every input is combined with its individual weight, and a single overall bias is applied.<\/li>\n<li id=\"f1fd\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">The network then passes the output to the hidden layer.<\/li>\n<li id=\"f7d6\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">During the backward pass phase, the set of numbers produced in the forward pass phase is translated back to form the reconstructed inputs.<\/li>\n<li id=\"e193\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Using activation functions, individual weights, and the overall bias, the RBM passes the output back to the visible layer for reconstruction.<\/li>\n<li id=\"5abb\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">At the visible layer stage, the RBM compares the reconstructed input with the original input.<\/li>\n<\/ol>\n<figure class=\"my mz na nb gx nc gl gm paragraph-image\">\n<div class=\"nd ne 
do nf ce ng\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce nh ni c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/700\/0*bYdbrIjY_mUOu1WO\" alt=\"\" width=\"700\" height=\"394\"><\/figure><div class=\"gl gm nr\" style=\"text-align: center;\"><picture><source srcset=\"https:\/\/miro.medium.com\/max\/640\/0*bYdbrIjY_mUOu1WO 640w, https:\/\/miro.medium.com\/max\/720\/0*bYdbrIjY_mUOu1WO 720w, https:\/\/miro.medium.com\/max\/750\/0*bYdbrIjY_mUOu1WO 750w, https:\/\/miro.medium.com\/max\/786\/0*bYdbrIjY_mUOu1WO 786w, https:\/\/miro.medium.com\/max\/828\/0*bYdbrIjY_mUOu1WO 828w, https:\/\/miro.medium.com\/max\/1100\/0*bYdbrIjY_mUOu1WO 1100w, https:\/\/miro.medium.com\/max\/1400\/0*bYdbrIjY_mUOu1WO 1400w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\"><\/picture><a class=\"au nm\" href=\"https:\/\/www.doc.ic.ac.uk\/~la1515\/web\/how.html\" target=\"_blank\" rel=\"noopener ugc nofollow\">Source: doc.ic<\/a><\/div>\n<\/div>\n<\/figure>\n<p data-selectable-paragraph=\"\">\n<\/p><p id=\"14d6\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm nn\">Why do we need to know Restricted Boltzmann Machines?<\/strong><\/p>\n<ol class=\"\">\n<li id=\"503b\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Different uses: Restricted Boltzmann Machines can be used for classification, 
regression, topic modeling, and feature learning.<\/li>\n<li id=\"5223\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">SMOTE: Restricted Boltzmann Machines can be paired with SMOTE, which selects examples that are close in feature space and draws a line between them, producing a new sample along that line.<\/li>\n<li id=\"e4ef\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Gibbs sampling: Using Gibbs sampling, Restricted Boltzmann Machines can fill in missing values.<\/li>\n<li id=\"f389\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Feature Extractor: Restricted Boltzmann Machines can transform raw data into hidden units, helping to handle unstructured data.<\/li>\n<\/ol>\n<h1 id=\"05f6\" class=\"ks kt iy bm ku kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp ga\" data-selectable-paragraph=\"\">When to use these techniques?<\/h1>\n<h2 id=\"ce52\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Multilayer Perceptrons (MLPs)<\/h2>\n<ul class=\"\">\n<li id=\"1ad1\" class=\"mj mk iy bm b jx lq kb lr kf ml kj mm kn mn kr ol mp mq mr ga\" data-selectable-paragraph=\"\">When your dataset is in a tabular format consisting of rows and columns. 
Typically, these are CSV files.<\/li>\n<li id=\"060b\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">MLPs can be used for both Classification and Regression tasks where a set of ground-truth values is given as the input.<\/li>\n<\/ul>\n<h2 id=\"0e9e\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Convolutional Neural Network<\/h2>\n<ul class=\"\">\n<li id=\"6aa2\" class=\"mj mk iy bm b jx lq kb lr kf ml kj mm kn mn kr ol mp mq mr ga\" data-selectable-paragraph=\"\">This technique works very well with image datasets. An example of this is OCR document analysis, which recognizes text within a digital image.<\/li>\n<li id=\"c311\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">Ideally, the input data is 2-dimensional. However, it can also be converted into a 1-dimensional form to make processing faster.<\/li>\n<li id=\"4aac\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">This technique is also a good choice when calculating the output requires a highly complex model.<\/li>\n<\/ul>\n<h2 id=\"79dc\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Recurrent Neural Networks<\/h2>\n<p id=\"6d5c\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">There are 4 different ways that you can use Recurrent Neural Networks. These are:<\/p>\n<ol class=\"\">\n<li id=\"2950\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\">One to one: a single input that produces a single output. 
An example of this is Image Classification.<\/li>\n<li id=\"f5a0\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">One to many: a single input that produces a sequence of outputs. An example of this is Image captioning, where a sequence of words is generated from a single image.<\/li>\n<li id=\"41c4\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Many to one: a sequence of inputs that produces a single output. An example of this is Sentiment Analysis.<\/li>\n<li id=\"2899\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\">Many to many: a sequence of inputs that produces a sequence of outputs. An example of this is Video classification, where you split the video into frames and label each frame separately.<\/li>\n<\/ol>\n<h2 id=\"6f2f\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Generative Adversarial Networks<\/h2>\n<p id=\"e1a3\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">Generative Adversarial Networks are widely used with images and other forms of media, for example to identify deepfakes. 
You can use it for:<\/p>\n<ul class=\"\">\n<li id=\"8998\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr ol mp mq mr ga\" data-selectable-paragraph=\"\">Image inpainting \u2014 restoring missing parts of an image.<\/li>\n<li id=\"94d3\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">Image super-resolution \u2014 upscaling low-resolution images to a higher resolution.<\/li>\n<li id=\"d64c\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">Generating new data, for example translating between images and text.<\/li>\n<\/ul>\n<h2 id=\"829e\" class=\"lv kt iy bm ku lw lx ly ky lz ma mb lc kf mc md lg kj me mf lk kn mg mh lo mi ga\" data-selectable-paragraph=\"\">Restricted Boltzmann Machines<\/h2>\n<ul class=\"\">\n<li id=\"4ba9\" class=\"mj mk iy bm b jx lq kb lr kf ml kj mm kn mn kr ol mp mq mr ga\" data-selectable-paragraph=\"\">Because the Boltzmann Machine learns to regulate itself, this technique is well suited to monitoring a system.<\/li>\n<li id=\"b2e6\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">It is efficient when you are building a binary recommendation system.<\/li>\n<li id=\"8e5c\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr ol mp mq mr ga\" data-selectable-paragraph=\"\">It is also a good choice when you are working with a very specific dataset.<\/li>\n<\/ul>\n<h1 id=\"2035\" class=\"ks kt iy bm ku kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp ga\" data-selectable-paragraph=\"\">Conclusion<\/h1>\n<p id=\"51a3\" class=\"pw-post-body-paragraph jv jw iy bm b jx lq jz ka kb lr kd ke kf ls kh ki kj lt kl km kn lu kp kq kr ir ga\" data-selectable-paragraph=\"\">Deep Learning is still evolving and has become very popular over the years. 
We can expect more and more people and businesses to incorporate Deep Learning into their workflows.<\/p>\n<p id=\"2c1c\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">There are many different techniques you can use for Deep Learning. Each is suited to specific tasks, with its own processes and limitations.<\/p>\n<p id=\"7834\" class=\"pw-post-body-paragraph jv jw iy bm b jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ir ga\" data-selectable-paragraph=\"\">If you are interested in becoming a Data Scientist or a Machine Learning Engineer, learning more about Deep Learning should be a part of your journey. Here are a few book recommendations:<\/p>\n<ol class=\"\">\n<li id=\"79d5\" class=\"mj mk iy bm b jx jy kb kc kf no kj np kn nq kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><a class=\"au nm\" href=\"https:\/\/geni.us\/JXDP\" target=\"_blank\" rel=\"noopener ugc nofollow\">Deep Learning with Python by Francois Chollet<\/a>&nbsp;(for beginners and intermediate python programmers)<\/li>\n<li id=\"339f\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><a class=\"au nm\" href=\"https:\/\/geni.us\/k9LsHpp\" target=\"_blank\" rel=\"noopener ugc nofollow\">Neural Networks and Deep Learning: A Textbook by Charu C. 
Aggarwal<\/a>&nbsp;(explores classical and modern models used in deep learning)<\/li>\n<li id=\"dcd9\" class=\"mj mk iy bm b jx ms kb mt kf mu kj mv kn mw kr mo mp mq mr ga\" data-selectable-paragraph=\"\"><a class=\"au nm\" href=\"https:\/\/www.amazon.co.uk\/Deep-Learning-Scratch-Building-Principles\/dp\/1492041416\/ref=asc_df_1492041416\/?tag=googshopuk-21&amp;linkCode=df0&amp;hvadid=375498709181&amp;hvpos=&amp;hvnetw=g&amp;hvrand=8995822476729481845&amp;hvpone=&amp;hvptwo=&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9044970&amp;hvtargid=pla-716776504046&amp;psc=1&amp;th=1&amp;psc=1&amp;tag=&amp;ref=&amp;adgrpid=76471991426&amp;hvpone=&amp;hvptwo=&amp;hvadid=375498709181&amp;hvpos=&amp;hvnetw=g&amp;hvrand=8995822476729481845&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9044970&amp;hvtargid=pla-716776504046\" target=\"_blank\" rel=\"noopener ugc nofollow\">Deep Learning From Scratch: Building with Python from First Principles by Seth Weidman<\/a>&nbsp;(for beginners and intermediate python programmers)<\/li>\n<\/ol>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Over the years, Deep Learning has really taken off. This is because we have access to a lot more data and more computational power, allowing us to produce more accurate outputs and build models more efficiently. 
The rise in Deep Learning has been applied in different sectors such as speech recognition, image recognition, online advertising, [&hellip;]<\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[6],"tags":[],"coauthors":[139],"class_list":["post-4581","post","type-post","status-publish","format-standard","hentry","category-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Deep Learning Techniques You Should Know in 2022 - Comet<\/title>\n<meta name=\"description\" content=\"Over the years, Deep Learning has really taken off. This is because we have access to a lot more data and more computational power, allowing us to produce more accurate outputs and build models more efficiently.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deep Learning Techniques You Should Know in 2022\" \/>\n<meta property=\"og:description\" content=\"Over the years, Deep Learning has really taken off. 
This is because we have access to a lot more data and more computational power, allowing us to produce more accurate outputs and build models more efficiently.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2022-11-11T01:50:05+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:16:33+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/max\/700\/0*ae0C9EJAszvFQu__\" \/>\n<meta name=\"author\" content=\"Nisha Arya Ahmed\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Nisha Arya Ahmed\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Deep Learning Techniques You Should Know in 2022 - Comet","description":"Over the years, Deep Learning has really taken off. This is because we have access to a lot more data and more computational power, allowing us to produce more accurate outputs and build models more efficiently.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/","og_locale":"en_US","og_type":"article","og_title":"Deep Learning Techniques You Should Know in 2022","og_description":"Over the years, Deep Learning has really taken off. 
This is because we have access to a lot more data and more computational power, allowing us to produce more accurate outputs and build models more efficiently.","og_url":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2022-11-11T01:50:05+00:00","article_modified_time":"2025-04-24T17:16:33+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/max\/700\/0*ae0C9EJAszvFQu__","type":"","width":"","height":""}],"author":"Nisha Arya Ahmed","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Nisha Arya Ahmed","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/"},"author":{"name":"Team Comet Digital","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/6266601170c60a7a82b3e0043fbe8ddf"},"headline":"Deep Learning Techniques You Should Know in 2022","datePublished":"2022-11-11T01:50:05+00:00","dateModified":"2025-04-24T17:16:33+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/"},"wordCount":1964,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/max\/700\/0*ae0C9EJAszvFQu__","articleSection":["Machine 
Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/","url":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/","name":"Deep Learning Techniques You Should Know in 2022 - Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/max\/700\/0*ae0C9EJAszvFQu__","datePublished":"2022-11-11T01:50:05+00:00","dateModified":"2025-04-24T17:16:33+00:00","description":"Over the years, Deep Learning has really taken off. This is because we have access to a lot more data and more computational power, allowing us to produce more accurate outputs and build models more efficiently.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/#primaryimage","url":"https:\/\/miro.medium.com\/max\/700\/0*ae0C9EJAszvFQu__","contentUrl":"https:\/\/miro.medium.com\/max\/700\/0*ae0C9EJAszvFQu__"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/deep-learning-techniques-you-should-know-in-2022\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Deep Learning Techniques You Should Know in 
2022"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/6266601170c60a7a82b3e0043fbe8ddf","name":"Team Comet Digital","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/4f0c0a8cc7c0e87c636ff6a420a6647c","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/Screen-Shot-2023-08-12-at-8.58.50-AM-96x96.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/Screen-Shot-2023-08-12-at-8.58.50-AM-96x96.png","caption":"Team Comet 
Digital"},"sameAs":["https:\/\/www.comet.ml\/"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/teamcometdigital\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4581","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=4581"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4581\/revisions"}],"predecessor-version":[{"id":15650,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4581\/revisions\/15650"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=4581"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=4581"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=4581"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=4581"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}