{"id":8076,"date":"2023-11-02T09:42:52","date_gmt":"2023-11-02T17:42:52","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8076"},"modified":"2025-04-24T17:04:53","modified_gmt":"2025-04-24T17:04:53","slug":"monitoring-a-convolutional-neural-network-cnn-in-comet","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet\/","title":{"rendered":"Monitoring A Convolutional Neural Network (CNN) in Comet"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet\">\n\n\n\n<div class=\"fk fl fm fn fo\">\n<div class=\"ab ca\">\n<div class=\"ch bg ew ex ey ez\">\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<div class=\"mq mr ee ms bg mt\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4ik_OeQvo2rjBCltjsjojQ.jpeg\" alt=\"\" width=\"700\" height=\"467\"><\/figure><div class=\"mh mi mj\"><picture><\/picture><\/div>\n<\/div><figcaption class=\"mw mx my mh mi mz na be b bf z dw\" data-selectable-paragraph=\"\">Photo from nmedia on Shutterstock.com<\/figcaption><\/figure>\n<h2 id=\"3507\" class=\"nb nc fr be nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu nv nw nx ny bj\" data-selectable-paragraph=\"\">Introduction<\/h2>\n<p id=\"450d\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">Image classification is a task that involves training a neural network to recognize and classify items in images. A dataset of labeled images is used to train the network, with each image given a particular class or label. Thousands or even millions of photos make up the normal size of the dataset needed to train the model. 
Before being fed into the network, the images are pre-processed and resized to a common size.<\/p>\n<p id=\"2ef8\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">A convolutional neural network (CNN) is the architecture most commonly used for image classification. A CNN is made up of convolutional, pooling, and fully connected layers. The convolutional layers are in charge of identifying patterns and features in the image, and the pooling layers shrink the spatial dimensions of the image while preserving its key features. The fully connected layers then classify the image using the features extracted by the convolutional and pooling layers.<\/p>\n<p id=\"81ce\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Once trained, the model can be used to classify new, unseen images. The network processes the image and outputs a probability distribution across all possible classes. The predicted label for the image is the class with the highest probability.<\/p>\n<p id=\"8039\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Numerous applications, including object detection, image search, and image annotation, benefit from image classification. For instance, self-driving cars use image classification to recognize and classify objects like vehicles, pedestrians, and traffic signs so that the vehicle can travel safely. Search engines use it to index and retrieve images based on their content. 
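As a quick sketch of the prediction step described above (the probabilities here are made up for illustration; the class names are the CIFAR-10 labels used later in this tutorial), picking the predicted label from a softmax output is just an argmax:

```python
import numpy as np

# Hypothetical softmax output for one image over 10 classes;
# the predicted label is simply the class with the highest probability.
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
probs = np.array([0.02, 0.05, 0.01, 0.60, 0.03, 0.10, 0.04, 0.05, 0.06, 0.04])
predicted = classes[int(np.argmax(probs))]
print(predicted)  # cat
```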
Additionally, photo management software can use image classification to automatically tag photographs with pertinent labels, making them simpler to find and organize.<\/p>\n<h1 id=\"fb6e\" class=\"ow nc fr be nd ox oy gr nh oz pa gu nl pb pc pd pe pf pg ph pi pj pk pl pm pn bj\" data-selectable-paragraph=\"\">Comet<\/h1>\n<p id=\"3c78\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">Comet is a machine-learning experimentation platform that helps you keep track of your machine-learning experiments. It also lets you track and monitor the performance of a model using the various tools available on the platform. You can learn more about Comet <a class=\"af po\" href=\"\/signup?utm_source=Heartbeat&amp;utm_medium=referral&amp;utm_campaign=AMS_US_EN_SNUP_Heartbeat_Comet_Content\" target=\"_blank\" rel=\"noopener ugc nofollow\">here<\/a>.<\/p>\n<p id=\"205c\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">In this tutorial, we will learn how to keep track of an image classification model in Comet. After building our model, we will log it to the Comet platform to follow and track the progress of our experiment. 
So let\u2019s get started with it.<\/p>\n<h1 id=\"5cf3\" class=\"ow nc fr be nd ox oy gr nh oz pa gu nl pb pc pd pe pf pg ph pi pj pk pl pm pn bj\" data-selectable-paragraph=\"\">Prerequisites<\/h1>\n<p id=\"5330\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">In order to focus on the model monitoring function in Comet, we will be using a code tutorial from <a class=\"af po\" href=\"https:\/\/codebasics.io\/\" target=\"_blank\" rel=\"noopener ugc nofollow\">CodeBasics<\/a> to build, train, and test our models. You can find the original code in <a class=\"af po\" href=\"https:\/\/github.com\/codebasics\/deep-learning-keras-tf-tutorial\/blob\/master\/16_cnn_cifar10_small_image_classification\/cnn_cifar10_dataset.ipynb\" target=\"_blank\" rel=\"noopener ugc nofollow\">this repo here<\/a>, or follow along with <a class=\"af po\" href=\"https:\/\/colab.research.google.com\/github\/olujerry\/olujerry\/blob\/main\/Image_Classification_Model.ipynb#scrollTo=R9gvj3dP74U6\" target=\"_blank\" rel=\"noopener ugc nofollow\">my modified version here<\/a>.<\/p>\n<p id=\"ad98\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Using one of the following lines at the command prompt, you can install the Comet library on your machine if it isn\u2019t already present. 
Be aware that pip is probably what you should use if you\u2019re installing packages directly into a Colab notebook or another environment that makes use of virtual machines.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"e151\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">pip install comet_ml<\/span><\/pre>\n<p id=\"51e1\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">\u2014 or \u2014<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"25c7\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">conda install -c comet_ml comet_ml<\/span><\/pre>\n<h2 id=\"efd9\" class=\"nb nc fr be nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu nv nw nx ny bj\" data-selectable-paragraph=\"\">About the Dataset<\/h2>\n<p id=\"eae9\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">The CIFAR-10 dataset consists of 60,000 32&#215;32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.<\/p>\n<p id=\"0333\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">The dataset is divided into five training batches and one test batch, each with 10,000 images. The test batch contains exactly 1,000 randomly chosen images from each class. The remaining images are distributed across the training batches in random order, so a given training batch may contain a disproportionate number of images from a particular class; together, however, the training batches contain exactly 5,000 images from each class. 
You can get the <a class=\"af po\" href=\"https:\/\/www.cs.toronto.edu\/~kriz\/cifar.html\" target=\"_blank\" rel=\"noopener ugc nofollow\">dataset here<\/a>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fk fl fm fn fo\">\n<div class=\"ab ca\">\n<div class=\"ch bg ew ex ey ez\">\n<blockquote class=\"qg\"><p id=\"d703\" class=\"qh qi fr be qj qk ql qm qn qo qp oq dw\" data-selectable-paragraph=\"\">Want to see the evolution of AI-generated art projects? <a class=\"af po\" href=\"https:\/\/www.comet.com\/team-comet-ml\/clipdraw\/view\/Y4aT3gy6IrPQKBi5wncFXCYLR?utm_campaign=clipdraw-gradio&amp;utm_source=blog&amp;utm_medium=summary\" target=\"_blank\" rel=\"noopener ugc nofollow\">Visit our public project to see time-lapses, experiment evolutions, and more<\/a>!<\/p><\/blockquote>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fk fl fm fn fo\">\n<div class=\"ab ca\">\n<div class=\"ch bg ew ex ey ez\">\n<h2 id=\"0318\" class=\"nb nc fr be nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu nv nw nx ny bj\" data-selectable-paragraph=\"\"><strong class=\"al\">Understanding the goal<\/strong><\/h2>\n<ol class=\"\">\n<li id=\"6b72\" class=\"nz oa fr be b gp ob oc od gs oe of og nm qq oi oj nq qr ol om nu qs oo op oq qt qu qv bj\" data-selectable-paragraph=\"\">Build a neural network that recognizes the different types of images in the dataset.<\/li>\n<li id=\"6093\" class=\"nz oa fr be b gp qw oc od gs qx of og nm qy oi oj nq qz ol om nu ra oo op oq qt qu qv bj\" data-selectable-paragraph=\"\">Build both an ANN and a CNN and see which network achieves the higher accuracy.<\/li>\n<li id=\"813e\" class=\"nz oa fr be b gp qw oc od gs qx of og nm qy oi oj nq qz ol om nu ra oo op oq qt qu qv bj\" data-selectable-paragraph=\"\">Detect features and patterns in the dataset.<\/li>\n<li id=\"8e20\" class=\"nz oa fr be b gp qw oc od gs qx of og nm qy oi oj nq qz ol om nu ra oo op oq qt qu qv bj\" data-selectable-paragraph=\"\">Monitor ML model performance after training in the Comet dashboard, 
including its accuracy and loss.<\/li>\n<li id=\"99e1\" class=\"nz oa fr be b gp qw oc od gs qx of og nm qy oi oj nq qz ol om nu ra oo op oq qt qu qv bj\" data-selectable-paragraph=\"\">Monitor hardware performance.<\/li>\n<\/ol>\n<h2 id=\"18b0\" class=\"nb nc fr be nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu nv nw nx ny bj\" data-selectable-paragraph=\"\">Exploring the model:<\/h2>\n<p id=\"7965\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">Let\u2019s import and install the needed libraries:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"0339\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">import<\/span> tensorflow <span class=\"hljs-keyword\">as<\/span> tf\n<span class=\"hljs-keyword\">from<\/span> tensorflow.keras <span class=\"hljs-keyword\">import<\/span> datasets, layers, models\n<span class=\"hljs-keyword\">import<\/span> matplotlib.pyplot <span class=\"hljs-keyword\">as<\/span> plt\n<span class=\"hljs-keyword\">import<\/span> pandas <span class=\"hljs-keyword\">as<\/span> pd\n<span class=\"hljs-keyword\">import<\/span> numpy <span class=\"hljs-keyword\">as<\/span> np<\/span><\/pre>\n<p id=\"e2fd\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">The dataset is then loaded; the training and test sets are returned together.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"48c0\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()<\/span><\/pre>\n<p id=\"e42b\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Next, we examine the training dataset\u2019s shape. 
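As a quick aside, here is how that shape tuple reads, using a small all-zeros stand-in array (100 samples instead of the full 50,000, so it runs anywhere without downloading the dataset):

```python
import numpy as np

# Stand-in for X_train: 100 CIFAR-10-sized images with the same layout.
X = np.zeros((100, 32, 32, 3), dtype=np.uint8)
n_samples, height, width, channels = X.shape
print(n_samples, height, width, channels)  # 100 32 32 3
```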
The shape shows that we have 50,000 training samples, each a 32&#215;32 image, and that the third dimension stands for the three RGB channels.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"8d19\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">X_train.shape<\/span><\/pre>\n<p id=\"3e62\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">After examining the training samples\u2019 shape, we check the test samples.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"ff3a\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">X_test.shape<\/span><\/pre>\n<p id=\"bc3a\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We also define our classes:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"c56d\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">classes = [<span class=\"hljs-string\">\"airplane\"<\/span>,<span class=\"hljs-string\">\"automobile\"<\/span>,<span class=\"hljs-string\">\"bird\"<\/span>,<span class=\"hljs-string\">\"cat\"<\/span>,<span class=\"hljs-string\">\"deer\"<\/span>,<span class=\"hljs-string\">\"dog\"<\/span>,<span class=\"hljs-string\">\"frog\"<\/span>,<span class=\"hljs-string\">\"horse\"<\/span>,<span class=\"hljs-string\">\"ship\"<\/span>,<span class=\"hljs-string\">\"truck\"<\/span>]<\/span><\/pre>\n<p id=\"081d\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Then, we create a helper function that displays a sample image with its label:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"cd22\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">def<\/span> <span 
class=\"hljs-title.function\">plot_sample<\/span>(<span class=\"hljs-params\">X, y, index<\/span>):\n    plt.figure(figsize = (<span class=\"hljs-number\">15<\/span>,<span class=\"hljs-number\">2<\/span>))\n    plt.imshow(X[index])\n    plt.xlabel(classes[y[index]])<\/span><\/pre>\n<p id=\"0adb\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We can test the function as follows:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"d1ec\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">plot_sample(X_train, y_train, <span class=\"hljs-number\">0<\/span>)<\/span><\/pre>\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:255\/1*-OyurbmgGdwft9SlgFm07Q.jpeg\" alt=\"\" width=\"255\" height=\"193\"><\/figure><div class=\"mh mi rb\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:510\/format:webp\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 510w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, 
(min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 255px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:510\/1*-OyurbmgGdwft9SlgFm07Q.jpeg 510w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 255px\" data-testid=\"og\"><\/picture><\/div>\n<\/figure>\n<p id=\"8a49\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We can try another sample image:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"375a\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">plot_sample(X_train, y_train, <span class=\"hljs-number\">1<\/span>)<\/span><\/pre>\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" 
src=\"https:\/\/miro.medium.com\/v2\/resize:fit:269\/1*gO-GarRFIp-oSNAThWUkvg.jpeg\" alt=\"\" width=\"269\" height=\"187\"><\/figure><div class=\"mh mi rc\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:538\/format:webp\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 538w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 269px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:538\/1*gO-GarRFIp-oSNAThWUkvg.jpeg 538w\" sizes=\"(min-resolution: 
4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 269px\" data-testid=\"og\"><\/picture><\/div>\n<\/figure>\n<p id=\"d1ad\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Next, we normalize the images to values between 0 and 1. Each image has three channels (R, G, and B), and each channel holds a value between 0 and 255, so we divide by 255 to scale the values into the 0\u20131 range.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"6146\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">X_train = X_train \/ <span class=\"hljs-number\">255.0<\/span>\nX_test = X_test \/ <span class=\"hljs-number\">255.0<\/span><\/span><\/pre>\n<h2 id=\"1f8b\" class=\"nb nc fr be nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu nv nw nx ny bj\" data-selectable-paragraph=\"\">Model Building<\/h2>\n<p id=\"0af3\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">We first construct an artificial neural network (ANN), then a convolutional neural network, so we can test the effectiveness of both models and compare their strengths and weaknesses.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"8df5\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">ann = models.Sequential([\n        layers.Flatten(input_shape=(<span class=\"hljs-number\">32<\/span>,<span class=\"hljs-number\">32<\/span>,<span 
class=\"hljs-number\">3<\/span>)),\n        layers.Dense(<span class=\"hljs-number\">3000<\/span>, activation=<span class=\"hljs-string\">'relu'<\/span>),\n        layers.Dense(<span class=\"hljs-number\">1000<\/span>, activation=<span class=\"hljs-string\">'relu'<\/span>),\n        layers.Dense(<span class=\"hljs-number\">10<\/span>, activation=<span class=\"hljs-string\">'softmax'<\/span>)\n    ])\n\nann.<span class=\"hljs-built_in\">compile<\/span>(optimizer=<span class=\"hljs-string\">'SGD'<\/span>,\n              loss=<span class=\"hljs-string\">'sparse_categorical_crossentropy'<\/span>,\n              metrics=[<span class=\"hljs-string\">'accuracy'<\/span>])\n\nann.fit(X_train, y_train, epochs=<span class=\"hljs-number\">5<\/span>)<\/span><\/pre>\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<div class=\"mq mr ee ms bg mt\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*50HqD322TejiaRp27cc37Q.jpeg\" alt=\"\" width=\"700\" height=\"175\"><\/figure><div class=\"mh mi rd\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*50HqD322TejiaRp27cc37Q.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*50HqD322TejiaRp27cc37Q.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*50HqD322TejiaRp27cc37Q.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*50HqD322TejiaRp27cc37Q.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*50HqD322TejiaRp27cc37Q.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*50HqD322TejiaRp27cc37Q.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/format:webp\/1*50HqD322TejiaRp27cc37Q.jpeg 1400w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, 
(min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*50HqD322TejiaRp27cc37Q.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*50HqD322TejiaRp27cc37Q.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*50HqD322TejiaRp27cc37Q.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*50HqD322TejiaRp27cc37Q.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*50HqD322TejiaRp27cc37Q.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*50HqD322TejiaRp27cc37Q.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*50HqD322TejiaRp27cc37Q.jpeg 1400w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\"><\/picture><\/div>\n<\/div>\n<\/figure>\n<p id=\"106c\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We can therefore see from the code above that there are four layers. The first layer accepts images with a 32 by 32 pixel dimension, while the second and third levels are deep layers with 3000 and 1000 neurons, respectively. 
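To make those layer sizes concrete, we can count this ANN's trainable parameters by hand: each Dense layer has in &#215; out weights plus out biases, and the flattened input is 32 &#183; 32 &#183; 3 = 3,072 values (a small back-of-the-envelope sketch; it matches what Keras's model.summary() would report for these layers):

```python
# Parameter count for the ANN above: 3072 -> 3000 -> 1000 -> 10.
layer_sizes = [32 * 32 * 3, 3000, 1000, 10]
params = sum(n_in * n_out + n_out  # weights + biases per Dense layer
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # 12230010
```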
The fourth layer has 10 output neurons to reflect the 10 classes in our dataset. After executing the code, we can see that the ANN\u2019s accuracy on the training samples is quite low, at about 10%.<\/p>\n<p id=\"0a2e\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We then evaluate it on the test samples to see whether the results improve.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"2086\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">ann.evaluate(X_test, y_test)<\/span><\/pre>\n<p id=\"aba1\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We can see from the evaluation above that the ANN continues to perform poorly. The last step is to print a classification report that displays the precision and recall for each class.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"4948\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">from<\/span> sklearn.metrics <span class=\"hljs-keyword\">import<\/span> confusion_matrix, classification_report\n<span class=\"hljs-keyword\">import<\/span> numpy <span class=\"hljs-keyword\">as<\/span> np\ny_pred = ann.predict(X_test)\ny_pred_classes = [np.argmax(element) <span class=\"hljs-keyword\">for<\/span> element <span class=\"hljs-keyword\">in<\/span> y_pred]\n\n<span class=\"hljs-built_in\">print<\/span>(<span class=\"hljs-string\">\"Classification Report: \\n\"<\/span>, classification_report(y_test, y_pred_classes))<\/span><\/pre>\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:555\/1*QvHgweNvSRx49uG4SeToyA.jpeg\" alt=\"\" width=\"555\" 
height=\"310\"><\/figure><div class=\"mh mi re\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*QvHgweNvSRx49uG4SeToyA.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*QvHgweNvSRx49uG4SeToyA.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*QvHgweNvSRx49uG4SeToyA.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*QvHgweNvSRx49uG4SeToyA.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*QvHgweNvSRx49uG4SeToyA.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*QvHgweNvSRx49uG4SeToyA.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1110\/format:webp\/1*QvHgweNvSRx49uG4SeToyA.jpeg 1110w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 555px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*QvHgweNvSRx49uG4SeToyA.jpeg 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*QvHgweNvSRx49uG4SeToyA.jpeg 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*QvHgweNvSRx49uG4SeToyA.jpeg 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*QvHgweNvSRx49uG4SeToyA.jpeg 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*QvHgweNvSRx49uG4SeToyA.jpeg 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*QvHgweNvSRx49uG4SeToyA.jpeg 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1110\/1*QvHgweNvSRx49uG4SeToyA.jpeg 1110w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, 
(min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 555px\" data-testid=\"og\"><\/picture><\/div>\n<\/figure>\n<p id=\"96af\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We can then examine the precision and recall scores for each of the ten classes (0\u20139). The class at index 1, for instance, has the highest precision score of them all.<\/p>\n<p id=\"0ded\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">The next step is to build and train a CNN:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"d662\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">model = models.Sequential([\n    layers.Conv2D(filters=<span class=\"hljs-number\">32<\/span>, kernel_size=(<span class=\"hljs-number\">3<\/span>, <span class=\"hljs-number\">3<\/span>), activation=<span class=\"hljs-string\">'relu'<\/span>, input_shape=(<span class=\"hljs-number\">32<\/span>, <span class=\"hljs-number\">32<\/span>, <span class=\"hljs-number\">3<\/span>)),\n    layers.MaxPooling2D((<span class=\"hljs-number\">2<\/span>, <span class=\"hljs-number\">2<\/span>)),\n\n    layers.Conv2D(filters=<span class=\"hljs-number\">64<\/span>, kernel_size=(<span class=\"hljs-number\">3<\/span>, <span class=\"hljs-number\">3<\/span>), activation=<span class=\"hljs-string\">'relu'<\/span>),\n    layers.MaxPooling2D((<span class=\"hljs-number\">2<\/span>, <span class=\"hljs-number\">2<\/span>)),\n\n    
layers.Flatten(),\n    layers.Dense(<span class=\"hljs-number\">64<\/span>, activation=<span class=\"hljs-string\">'relu'<\/span>),\n    layers.Dense(<span class=\"hljs-number\">10<\/span>, activation=<span class=\"hljs-string\">'softmax'<\/span>)\n])\n\nmodel.<span class=\"hljs-built_in\">compile<\/span>(optimizer=<span class=\"hljs-string\">'adam'<\/span>,\n              loss=<span class=\"hljs-string\">'sparse_categorical_crossentropy'<\/span>,\n              metrics=[<span class=\"hljs-string\">'accuracy'<\/span>])<\/span><\/pre>\n<p id=\"a9d9\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We compile the model with the Adam optimizer, which is known for its versatility, using sparse categorical cross-entropy as the loss and accuracy as the metric.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"f8dc\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">model.fit(X_train, y_train, epochs=<span class=\"hljs-number\">10<\/span>, verbose=<span class=\"hljs-number\">1<\/span>, batch_size=<span class=\"hljs-number\">128<\/span>)<\/span><\/pre>\n<p id=\"4fd0\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We train the model for 10 epochs and reach about 70% accuracy, significantly higher than the ANN achieved.<\/p>\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<div class=\"mq mr ee ms bg mt\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*pz69yhZCwNdeXsqqzig8Rw.jpeg\" alt=\"\" width=\"700\" height=\"321\"><\/figure><div class=\"mh mi rf\"><picture><\/picture><\/div>\n<\/div>\n<\/figure>\n<p id=\"7ea9\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We then evaluate the model on the test dataset to see whether the accuracy holds up on unseen data.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"2131\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">model.evaluate(X_test, y_test)<\/span><\/pre>\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<div class=\"mq mr ee ms bg mt\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*F7MBss541w32pIfy3_oPyw.jpeg\" alt=\"\" width=\"700\" height=\"53\"><\/figure><div class=\"mh mi rg\"><picture><\/picture><\/div>\n<\/div>\n<\/figure>\n<p id=\"bd45\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">On the test set the accuracy drops to 65%, which is still reasonable.<br>\nNow that our CNN model is trained, all that is left is to plot a sample image and see whether the model can identify it.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"085e\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">plot_sample(X_test, y_test, <span class=\"hljs-number\">6<\/span>)<\/span><\/pre>\n<figure class=\"mk ml mm 
mn mo mp mh mi paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:260\/1*BhvkAZ2k4x8fFyt61aPGCA.jpeg\" alt=\"\" width=\"260\" height=\"149\"><\/figure><div class=\"mh mi rh\"><picture><\/picture><\/div>\n<\/figure>\n<p id=\"b429\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We can see that this is obviously an automobile; let\u2019s see if our model can identify it as such.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"d9fe\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">classes[y_classes[<span class=\"hljs-number\">6<\/span>]]<\/span><\/pre>\n<figure class=\"mk ml mm mn mo mp mh mi paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mu mv c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:275\/1*yYG_fNR2NQt5PSm6UZJr2w.jpeg\" alt=\"\" width=\"275\" height=\"88\"><\/figure><div class=\"mh mi ri\"><picture><\/picture><\/div>\n<\/figure>\n<h2 id=\"af04\" class=\"nb nc fr be nd ne nf ng 
nh ni nj nk nl nm nn no np nq nr ns nt nu nv nw nx ny bj\" data-selectable-paragraph=\"\">Monitoring In Comet<\/h2>\n<p id=\"5a9f\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">In this section, I\u2019ll walk through the step-by-step process of logging our experiments in Comet.<\/p>\n<p id=\"4935\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">The first step is to install Comet, which we did at the beginning:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"1982\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">!pip3 install comet_ml<\/span><\/pre>\n<p id=\"66f7\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">The next step is to import comet_ml so that we can create an Experiment and log to the Comet platform.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"98a1\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">import<\/span> comet_ml<\/span><\/pre>\n<p id=\"424e\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We then create an experiment, turning on auto-logging and the other features we want to track.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"4f6d\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\"><span class=\"hljs-comment\"># Create an experiment<\/span>\nexperiment = comet_ml.Experiment(\n    project_name=<span class=\"hljs-string\">\"image classification\"<\/span>,\n    workspace=<span 
class=\"hljs-string\">\"&lt;olujerry&gt;\"<\/span>,\n    api_key=<span class=\"hljs-string\">\"API_KEY\"<\/span>,\n    auto_metric_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_param_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_histogram_weight_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_histogram_gradient_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_histogram_activation_logging=<span class=\"hljs-literal\">True<\/span>,\n    log_code=<span class=\"hljs-literal\">True<\/span>\n)<\/span><\/pre>\n<p id=\"1820\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">The next step is to log the values we want to monitor on the Comet platform.<\/p>\n<p id=\"15cb\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Using the experiment, we record some parameters using <code class=\"cw rj rk rl pq b\">log_parameters(params)<\/code>:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"dd7a\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">batch_size = <span class=\"hljs-number\">128<\/span>\nepochs = <span class=\"hljs-number\">10<\/span>\nlearning_rate = <span class=\"hljs-number\">0.001<\/span>\n\nparams={\n    <span class=\"hljs-string\">\"batch_size\"<\/span>:batch_size,\n    <span class=\"hljs-string\">\"epochs\"<\/span>:epochs,\n    <span class=\"hljs-string\">\"optimizer\"<\/span>:<span class=\"hljs-string\">\"Adam\"<\/span>,\n    <span class=\"hljs-string\">\"learning_rate\"<\/span>:learning_rate,\n    <span class=\"hljs-string\">\"loss\"<\/span>:<span class=\"hljs-string\">\"sparse_categorical_crossentropy\"<\/span>,\n    
}\n\nexperiment.log_parameters(params)<\/span><\/pre>\n<p id=\"48f2\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We then log the accuracy and loss of our model using the <code class=\"cw rj rk rl pq b\">experiment.log_metric<\/code> method:<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"8f9d\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">loss, accuracy = model.evaluate(X_test, y_test)\n<span class=\"hljs-built_in\">print<\/span>(<span class=\"hljs-string\">\"Loss: \"<\/span>, loss)\n<span class=\"hljs-built_in\">print<\/span>(<span class=\"hljs-string\">\"Accuracy: \"<\/span>, accuracy)\n\nexperiment.log_metric(<span class=\"hljs-string\">\"Loss\"<\/span>, loss, step=<span class=\"hljs-literal\">None<\/span>, include_context=<span class=\"hljs-literal\">True<\/span>)\nexperiment.log_metric(<span class=\"hljs-string\">\"Accuracy\"<\/span>, accuracy, step=<span class=\"hljs-literal\">None<\/span>, include_context=<span class=\"hljs-literal\">True<\/span>)<\/span><\/pre>\n<p id=\"6171\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">Finally, we log the saved model and then end the experiment, since we are working in an interactive notebook.<\/p>\n<pre class=\"mk ml mm mn mo pp pq pr bo ps ba bj\"><span id=\"7804\" class=\"pt nc fr pq b bf pu pv l pw px\" data-selectable-paragraph=\"\">experiment.log_model(model, <span class=\"hljs-string\">'model'<\/span>)\nexperiment.end()<\/span><\/pre>\n<p id=\"197c\" class=\"pw-post-body-paragraph nz oa fr be b gp or oc od gs os of og nm ot oi oj nq ou ol om nu ov oo op oq fk bj\" data-selectable-paragraph=\"\">We can now review our model\u2019s performance on the Comet platform.<\/p>\n<figure class=\"mk ml mm 
mn mo mp\">\n<div class=\"rm iu l ee\">\n<div class=\"rn ro l\"><iframe loading=\"lazy\" class=\"eo n ff dy bg\" title=\"Image Classification With Comet\" src=\"https:\/\/cdn.embedly.com\/widgets\/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FseVLbLznvOo%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DseVLbLznvOo&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FseVLbLznvOo%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube\" width=\"854\" height=\"480\" frameborder=\"0\" scrolling=\"no\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/div>\n<\/div>\n<\/figure>\n<h2 id=\"5477\" class=\"nb nc fr be nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu nv nw nx ny bj\" data-selectable-paragraph=\"\">Conclusion<\/h2>\n<p id=\"da5f\" class=\"pw-post-body-paragraph nz oa fr be b gp ob oc od gs oe of og nm oh oi oj nq ok ol om nu on oo op oq fk bj\" data-selectable-paragraph=\"\">We have successfully built a CNN model that classifies images, compared it with the ANN, and evaluated the efficiency of both models. Here is a <a class=\"af po\" href=\"https:\/\/colab.research.google.com\/github\/olujerry\/olujerry\/blob\/main\/Image_Classification_Model.ipynb#scrollTo=R9gvj3dP74U6\" target=\"_blank\" rel=\"noopener ugc nofollow\">link to my notebook<\/a>, as well as the <a class=\"af po\" href=\"https:\/\/github.com\/codebasics\/deep-learning-keras-tf-tutorial\/blob\/master\/16_cnn_cifar10_small_image_classification\/cnn_cifar10_dataset.ipynb\" target=\"_blank\" rel=\"noopener ugc nofollow\">original notebook by Codebasics<\/a>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Photo from nmedia on Shutterstock.com Introduction Image classification is a task that involves training a neural network to recognize and classify items in images. 
A dataset of labeled images is used to train the network, with each image given a particular class or label. Thousands or even millions of photos make up the normal size [&hellip;]<\/p>\n","protected":false},"author":99,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[9,7],"tags":[],"coauthors":[197],"class_list":["post-8076","post","type-post","status-publish","format-standard","hentry","category-product","category-tutorials"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Monitoring A Convolutional Neural Network (CNN) in Comet - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Monitoring A Convolutional Neural Network (CNN) in Comet\" \/>\n<meta property=\"og:description\" content=\"Photo from nmedia on Shutterstock.com Introduction Image classification is a task that involves training a neural network to recognize and classify items in images. A dataset of labeled images is used to train the network, with each image given a particular class or label. 
Thousands or even millions of photos make up the normal size [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-02T17:42:52+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:04:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4ik_OeQvo2rjBCltjsjojQ.jpeg\" \/>\n<meta name=\"author\" content=\"Jeremiah Oluseye\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Jeremiah Oluseye\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Monitoring A Convolutional Neural Network (CNN) in Comet - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet","og_locale":"en_US","og_type":"article","og_title":"Monitoring A Convolutional Neural Network (CNN) in Comet","og_description":"Photo from nmedia on Shutterstock.com Introduction Image classification is a task that involves training a neural network to recognize and classify items in images. A dataset of labeled images is used to train the network, with each image given a particular class or label. 
Thousands or even millions of photos make up the normal size [&hellip;]","og_url":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-11-02T17:42:52+00:00","article_modified_time":"2025-04-24T17:04:53+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4ik_OeQvo2rjBCltjsjojQ.jpeg","type":"","width":"","height":""}],"author":"Jeremiah Oluseye","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Jeremiah Oluseye","Est. reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet\/"},"author":{"name":"Jeremiah Oluseye","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/73f233b4e35ab400cb753ebd29e58621"},"headline":"Monitoring A Convolutional Neural Network (CNN) in 
Comet","datePublished":"2023-11-02T17:42:52+00:00","dateModified":"2025-04-24T17:04:53+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet\/"},"wordCount":1309,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4ik_OeQvo2rjBCltjsjojQ.jpeg","articleSection":["Product","Tutorials"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet\/","url":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet","name":"Monitoring A Convolutional Neural Network (CNN) in Comet - Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4ik_OeQvo2rjBCltjsjojQ.jpeg","datePublished":"2023-11-02T17:42:52+00:00","dateModified":"2025-04-24T17:04:53+00:00","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet#primaryimage","url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4ik_OeQvo2rjBCltjsjojQ.jpeg","contentUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4ik_OeQvo2rjBCltjsjojQ
.jpeg"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/monitoring-a-convolutional-neural-network-cnn-in-comet#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Monitoring A Convolutional Neural Network (CNN) in Comet"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/73f233b4e35ab400cb753ebd29e58621","name":"Jeremiah 
Oluseye","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/5534a0faf067d6eb66a7f0c328629fbf","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/10\/cropped-mBj9qH7g_400x400-96x96.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/10\/cropped-mBj9qH7g_400x400-96x96.jpg","caption":"Jeremiah Oluseye"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/oluseyejeremiahgmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8076","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/99"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=8076"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8076\/revisions"}],"predecessor-version":[{"id":15471,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8076\/revisions\/15471"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=8076"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=8076"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=8076"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=8076"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}