{"id":4432,"date":"2022-10-28T08:56:10","date_gmt":"2022-10-28T16:56:10","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=4432"},"modified":"2025-04-24T17:16:52","modified_gmt":"2025-04-24T17:16:52","slug":"the-gradient-descent-algorithm","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/","title":{"rendered":"The Gradient Descent Algorithm"},"content":{"rendered":"\n<section class=\"section section--body\">\n<div class=\"section-divider\" style=\"text-align: center;\">\n<div class=\"lc ld do le ce lf\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce lg lh c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/700\/0*yLPPB92dMxIxZ6U5\" alt=\"\" width=\"700\" height=\"469\"><\/figure><div class=\"gl gm kw\"><picture><\/picture><\/div>\n<\/div>\n<p>Photo by&nbsp;<a class=\"au ll\" href=\"https:\/\/unsplash.com\/@todd_diemer?utm_source=medium&amp;utm_medium=referral\" target=\"_blank\" rel=\"noopener ugc nofollow\">Todd Diemer<\/a>&nbsp;on&nbsp;<a class=\"au ll\" href=\"https:\/\/unsplash.com\/?utm_source=medium&amp;utm_medium=referral\" target=\"_blank\" rel=\"noopener ugc nofollow\">Unsplash<\/a><\/p>\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<figure><img decoding=\"async\" class=\"graf-image aligncenter\" src=\"https:\/\/cdn-images-1.medium.com\/max\/800\/0*2CYulkK8MFLIvL32.png\" data-image-id=\"0*2CYulkK8MFLIvL32.png\" data-width=\"793\" data-height=\"334\"><\/figure><figure><img decoding=\"async\" class=\"graf-image aligncenter\" src=\"https:\/\/cdn-images-1.medium.com\/max\/800\/1*D4A6VSiV326A-E6IPeijqA.png\" data-image-id=\"1*D4A6VSiV326A-E6IPeijqA.png\" data-width=\"473\" data-height=\"102\"><\/figure><figure><img decoding=\"async\" class=\"graf-image aligncenter\" src=\"https:\/\/cdn-images-1.medium.com\/max\/800\/1*Y_e2kqLCo1-Pd1UbMBl9Tg.png\" 
data-image-id=\"1*Y_e2kqLCo1-Pd1UbMBl9Tg.png\" data-width=\"195\" data-height=\"80\"><\/figure><figure><img decoding=\"async\" class=\"graf-image aligncenter\" src=\"https:\/\/cdn-images-1.medium.com\/max\/800\/1*FGbdEp5WqSxaqFyf290Gvw.png\" data-image-id=\"1*FGbdEp5WqSxaqFyf290Gvw.png\" data-width=\"369\" data-height=\"118\"><\/figure><div class=\"section-inner sectionLayout--insetColumn\" style=\"text-align: center;\">\n<p class=\"graf graf--p\" style=\"text-align: left;\">Gradient descent is one of the most widely used optimization algorithms in machine learning. It\u2019s highly likely you\u2019ll come across it, so it\u2019s worth taking the time to study its inner workings. For starters: optimization is the task of minimizing or maximizing an objective function.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">In a machine learning context we typically want to minimize the objective function, and gradient descent is a computationally efficient way to do so. Ideally we\u2019d find the global minimum of the objective function, but that is only guaranteed when the objective function is convex. With a non-convex objective function, settling for a local minimum (the lowest value within a neighborhood) is a reasonable solution. This is usually the case in deep learning problems.<\/p>\n<h3 class=\"graf graf--h3\" style=\"text-align: left;\">How are we performing?<\/h3>\n<p class=\"graf graf--p\" style=\"text-align: left;\">Before we start wandering around, it\u2019s good to know how we\u2019re performing. We use a loss function to work this out. 
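<\/p>
<p class=\"graf graf--p\" style=\"text-align: left;\">To make this concrete, here is a small illustrative sketch (my own example, not from the original post): mean squared error is one common loss function for regression, and the mse_loss helper below is a hypothetical name, assuming NumPy is available.<\/p>

```python
import numpy as np

def mse_loss(y_true, y_pred):
    # Mean squared error: the average squared difference
    # between the actual labels and the predictions.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])
loss = mse_loss(y_true, y_pred)  # (0.25 + 0.0 + 1.0) / 3
```

<p class=\"graf graf--p\" style=\"text-align: left;\">A smaller value means the predictions sit closer to the labels; zero would mean a perfect fit on this data.<\/p>
<p class=\"graf graf--p\" style=\"text-align: left;\">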
You may also hear people refer to it as an objective function (the most general term for any function that\u2019s being optimized), or cost function.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">Most people wouldn\u2019t be bothered if you used the terms interchangeably but the math police would probably pull you up\u200a\u2014\u200alearn more about <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/stats.stackexchange.com\/questions\/179026\/objective-function-cost-function-loss-function-are-they-the-same-thing\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/stats.stackexchange.com\/questions\/179026\/objective-function-cost-function-loss-function-are-they-the-same-thing\">the differences between objective, cost, and loss functions<\/a>. We will refer to our objective function as the loss function.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">The responsibility of the loss function is to measure the performance of the model. It tells us where we currently stand. More formally, the goal of computing the loss function is to quantify the error between the predictions and the actual label.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">Here\u2019s the definition of a loss function from <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/en.wikipedia.org\/wiki\/Loss_function\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/en.wikipedia.org\/wiki\/Loss_function\">Wikipedia<\/a>:<\/p>\n<blockquote class=\"graf graf--blockquote graf--startsWithDoubleQuote\"><p><em class=\"markup--em markup--blockquote-em\">\u201c<\/em>A function that maps an event or values of one or more variables onto a real number intuitively representing some \u201ccost\u201d associated with the event. 
An optimization problem seeks to minimize a loss function.\u201d<\/p><\/blockquote>\n<p class=\"graf graf--p\" style=\"text-align: left;\">To recap: our machine learning model approximates a target function so we can make predictions. We then use those predictions to calculate the loss of the model\u200a\u2014\u200aor how wrong the model is. The mathematical expression for this can be seen below.<\/p>\n<figure class=\"graf graf--figure\"><figcaption class=\"imageCaption\">Computing the loss&nbsp;function<\/figcaption><\/figure>\n<h3 class=\"graf graf--h3\" style=\"text-align: left;\">What direction shall we&nbsp;move?<\/h3>\n<p class=\"graf graf--p\" style=\"text-align: left;\">In case you\u2019re wondering how gradient descent finds the initial parameters of the model: they\u2019re initialized randomly. This means we start at some random point on a hill and our job is to find our way to the lowest point.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">To put it technically, the goal of gradient descent is to minimize the loss J(\u03b8), where \u03b8 denotes the model parameters.<\/p>\n<figure class=\"graf graf--figure\"><p><\/p>\n<figcaption class=\"imageCaption\">The goal of gradient&nbsp;descent<\/figcaption>\n<\/figure>\n<p>&nbsp;<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">We achieve this by taking steps proportional to the negative of the gradient at the current point.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">We\u2019ve calculated the loss for the model. To determine the direction that lets gradient descent converge fastest, we calculate the partial derivatives of the loss. The derivative is a concept from calculus that gives the slope of a function at a given point. 
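<\/p>
<p class=\"graf graf--p\" style=\"text-align: left;\">As a hedged sketch (my own example, assuming NumPy; the loss and grad names are hypothetical), we can check the analytic partial derivative of an MSE loss against a numerical slope estimate for a one-parameter model:<\/p>

```python
import numpy as np

# One-parameter linear model y_pred = w * x with an MSE loss.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])  # underlying relationship: y = 2x

def loss(w):
    return np.mean((w * x - y) ** 2)

def grad(w):
    # Analytic partial derivative of the MSE loss with respect to w.
    return 2 * np.mean(x * (w * x - y))

w = 0.5
eps = 1e-6
# A central finite difference approximates the slope at w and
# should closely match the analytic gradient.
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
# grad(w) is negative here, so increasing w reduces the loss.
```

<p class=\"graf graf--p\" style=\"text-align: left;\">The sign of the slope tells us which way to step: moving against the gradient decreases the loss.<\/p>
<p class=\"graf graf--p\" style=\"text-align: left;\">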
If we know the slope, then we can figure out what direction to move in order to reduce the loss on the next step.<\/p>\n<figure class=\"graf graf--figure\"><p><\/p>\n<figcaption class=\"imageCaption\">Calculating the partial derivative of the loss function to update the parameters (taking a&nbsp;step)<\/figcaption>\n<\/figure>\n<\/div>\n<\/div>\n<\/section>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<blockquote class=\"graf graf--pullquote\"><p>Most projects fail before they get to production. <a class=\"markup--anchor markup--pullquote-anchor\" href=\"https:\/\/go.comet.ml\/ebook-Building-Effective-ML-Teams.html\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/go.comet.ml\/ebook-Building-Effective-ML-Teams.html\">Check out our free ebook<\/a> to learn how to implement an MLOps lifecycle to better monitor, train, and deploy your machine learning models to increase output and iteration.<\/p><\/blockquote>\n<\/div>\n<\/div>\n<\/section>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<figure><img decoding=\"async\" class=\"graf-image aligncenter\" src=\"https:\/\/cdn-images-1.medium.com\/max\/800\/0*h5TMcNxE3xmcEwNt.png\" data-image-id=\"0*h5TMcNxE3xmcEwNt.png\" data-width=\"875\" data-height=\"474\"><\/figure><div class=\"section-inner sectionLayout--insetColumn\" style=\"text-align: center;\">\n<h3 class=\"graf graf--h3\" style=\"text-align: left;\">How big shall the step&nbsp;be?<\/h3>\n<p class=\"graf graf--p\" style=\"text-align: left;\">Among all the funny symbols in the equation above is a cute \u03b1 symbol. That\u2019s alpha, but we can also call it the learning rate. Alpha\u2019s job is to determine how quickly we should move towards the minimum. 
Big steps move us faster and small steps move us slower.<\/p>\n<figure class=\"graf graf--figure\">Figure 2: Example of minimizing J(w); (Source: <a class=\"markup--anchor markup--figure-anchor\" href=\"http:\/\/rasbt.github.io\/mlxtend\/user_guide\/general_concepts\/gradient-optimization\/\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"http:\/\/rasbt.github.io\/mlxtend\/user_guide\/general_concepts\/gradient-optimization\/\">MLextend<\/a>)<\/figure>\n<p>&nbsp;<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">Being too conservative, or selecting an alpha value that\u2019s too small, means it will take longer to converge to the minimum. Another way to think of it: learning will happen more slowly.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">But being too optimistic, or selecting an alpha value that\u2019s too large, is risky too. We can completely overshoot the minimum, thus increasing the error of the model.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\">The value we select for alpha has to be just right so we can learn fast, but not so fast that we miss the minimum. To do this, we must find a value for alpha at which gradient descent converges quickly. 
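<\/p>
<p class=\"graf graf--p\" style=\"text-align: left;\">Here is a tiny sketch (my own illustration; descend is a hypothetical helper) of how alpha affects convergence when minimizing J(w) = w^2, whose gradient is 2w:<\/p>

```python
def descend(alpha, n_iter=25, w0=10.0):
    # Gradient descent on J(w) = w ** 2, whose gradient is 2 * w.
    w = w0
    for _ in range(n_iter):
        w -= alpha * 2 * w
    return w

small = descend(alpha=0.01)  # tiny steps: still far from 0 after 25 iterations
good = descend(alpha=0.3)    # converges quickly toward the minimum at 0
large = descend(alpha=1.1)   # overshoots: the iterates diverge
```

<p class=\"graf graf--p\" style=\"text-align: left;\">With alpha = 0.01 each step shrinks w by only 2%, with alpha = 0.3 it shrinks by 60% per step, and with alpha = 1.1 each update flips the sign and grows the magnitude, so the error increases.<\/p>
<p class=\"graf graf--p\" style=\"text-align: left;\">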
This will require <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/heartbeat.comet.ml\/hyperparameter-optimization-with-comet-80c6d4b83502?source=user_profile---------23-------------------------------\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/heartbeat.comet.ml\/hyperparameter-optimization-with-comet-80c6d4b83502?source=user_profile---------23-------------------------------\">hyperparameter optimization<\/a>.<\/p>\n<h3 class=\"graf graf--h3\" style=\"text-align: left;\">Gradient Descent in&nbsp;Python<\/h3>\n<p class=\"graf graf--p\" style=\"text-align: left;\">The steps to implement gradient descent are as follows:<\/p>\n<ol class=\"postList\" style=\"text-align: left;\">\n<li class=\"graf graf--li\">Randomly initialize the weights<\/li>\n<li class=\"graf graf--li\">Calculate the loss<\/li>\n<li class=\"graf graf--li\">Take a step by updating the weights with the gradients<\/li>\n<li class=\"graf graf--li\">Repeat until gradient descent converges<\/li>\n<\/ol>\n<pre class=\"graf graf--pre\">import numpy as np\n\ndef gradient_descent(X, y, params, alpha, n_iter):\n    \"\"\"\n    Gradient descent to minimize a mean squared error cost function\n    __________________\n    Input(s)\n    X: Training data\n    y: Labels\n    params: Dictionary containing randomly initialized coefficients\n    alpha: Model learning rate\n    n_iter: Number of iterations\n    __________________\n    Output(s)\n    params: Dictionary containing optimized coefficients\n    \"\"\"\n    W = params[\"W\"]\n    b = params[\"b\"]\n    m = X.shape[0]  # number of training instances\n\n    for _ in range(n_iter):\n        # predictions with the current coefficients\n        y_pred = np.dot(X, W) + b\n        # partial derivatives of the loss w.r.t. the coefficients\n        dW = (2 \/ m) * np.dot(X.T, (y_pred - y))\n        db = (2 \/ m) * np.sum(y_pred - y)\n        # step in the direction of the negative gradient\n        W -= alpha * dW\n        b -= alpha * db\n\n    params[\"W\"] = W\n    params[\"b\"] = b\n    return params<\/pre>\n<p class=\"graf graf--p\" style=\"text-align: left;\">Several different adaptations could be made to gradient descent to make it run more efficiently in 
different scenarios\u200a\u2014\u200alike when the dataset is huge. Have a read of my other article on <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/towardsdatascience.com\/gradient-descent-811efcc9f1d5\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/towardsdatascience.com\/gradient-descent-811efcc9f1d5\">Gradient Descent<\/a> in which I cover some of the various adaptations as well as their pros and cons.<\/p>\n<p class=\"graf graf--p\" style=\"text-align: left;\"><em class=\"markup--em markup--p-em\">Thanks for reading.<\/em><\/p>\n<\/div>\n<\/div>\n<\/section>\n","protected":false},"excerpt":{"rendered":"<p>Photo by&nbsp;Todd Diemer&nbsp;on&nbsp;Unsplash Gradient descent is one of the most used optimization algorithms in machine learning. It\u2019s highly likely you\u2019ll come across it so it\u2019s worth taking the time to study its inner workings. For starters: optimization is the task of minimizing or maximizing an objective function. Typically we\u2019d prefer to minimize the objective function [&hellip;]<\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[6],"tags":[],"coauthors":[138],"class_list":["post-4432","post","type-post","status-publish","format-standard","hentry","category-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Gradient Descent Algorithm - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" 
\/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Gradient Descent Algorithm\" \/>\n<meta property=\"og:description\" content=\"Photo by&nbsp;Todd Diemer&nbsp;on&nbsp;Unsplash Gradient descent is one of the most used optimization algorithms in machine learning. It\u2019s highly likely you\u2019ll come across it so it\u2019s worth taking the time to study its inner workings. For starters: optimization is the task of minimizing or maximizing an objective function. Typically we\u2019d prefer to minimize the objective function [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2022-10-28T16:56:10+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:16:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/max\/700\/0*yLPPB92dMxIxZ6U5\" \/>\n<meta name=\"author\" content=\"Kurtis Pykes\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kurtis Pykes\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"The Gradient Descent Algorithm - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/","og_locale":"en_US","og_type":"article","og_title":"The Gradient Descent Algorithm","og_description":"Photo by&nbsp;Todd Diemer&nbsp;on&nbsp;Unsplash Gradient descent is one of the most used optimization algorithms in machine learning. It\u2019s highly likely you\u2019ll come across it so it\u2019s worth taking the time to study its inner workings. For starters: optimization is the task of minimizing or maximizing an objective function. Typically we\u2019d prefer to minimize the objective function [&hellip;]","og_url":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2022-10-28T16:56:10+00:00","article_modified_time":"2025-04-24T17:16:52+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/max\/700\/0*yLPPB92dMxIxZ6U5","type":"","width":"","height":""}],"author":"Kurtis Pykes","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Kurtis Pykes","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/"},"author":{"name":"Team Comet Digital","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/6266601170c60a7a82b3e0043fbe8ddf"},"headline":"The Gradient Descent Algorithm","datePublished":"2022-10-28T16:56:10+00:00","dateModified":"2025-04-24T17:16:52+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/"},"wordCount":851,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/max\/700\/0*yLPPB92dMxIxZ6U5","articleSection":["Machine Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/","url":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/","name":"The Gradient Descent Algorithm - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/max\/700\/0*yLPPB92dMxIxZ6U5","datePublished":"2022-10-28T16:56:10+00:00","dateModified":"2025-04-24T17:16:52+00:00","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/#primaryimage","url":"https:\/\/miro.medium.com\/max\/700\/0*yLPPB92dMxIxZ6U5","contentUrl":"https:\/\/miro.medium.com\/max\/700\/0*yLPPB92dMxIxZ6U5"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/the-gradient-descent-algorithm\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"The Gradient Descent Algorithm"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, 
Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/6266601170c60a7a82b3e0043fbe8ddf","name":"Team Comet Digital","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/4f0c0a8cc7c0e87c636ff6a420a6647c","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/Screen-Shot-2023-08-12-at-8.58.50-AM-96x96.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/Screen-Shot-2023-08-12-at-8.58.50-AM-96x96.png","caption":"Team Comet 
Digital"},"sameAs":["https:\/\/www.comet.ml\/"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/teamcometdigital\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4432","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=4432"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4432\/revisions"}],"predecessor-version":[{"id":15664,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4432\/revisions\/15664"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=4432"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=4432"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=4432"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=4432"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}