{"id":1958,"date":"2019-09-12T16:08:25","date_gmt":"2019-09-13T00:08:25","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/"},"modified":"2019-09-12T16:08:25","modified_gmt":"2019-09-13T00:08:25","slug":"estimating-uncertainty-in-machine-learning-models-part-1","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/","title":{"rendered":"Estimating Uncertainty in Machine Learning Models \u2014 Part 1"},"content":{"rendered":"\n<p>&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-pullquote\">\n<blockquote>\n<p>\u201cWe demand rigidly defined areas of doubt and uncertainty!\u201d<\/p>\n<cite>&#8211; Douglas Adams, The Hitchhiker\u2019s Guide to the Galaxy<\/cite><\/blockquote>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why is uncertainty important?<\/strong><\/h2>\n\n\n\n<p>Let\u2019s imagine for a second that we\u2019re building a computer vision model for a construction company, ABC Construction. The company is interested in automating its aerial site surveillance process, and would like our algorithm to run on their drones.<\/p>\n\n\n\n<p>We happily get to work, and deploy our algorithm onto their fleets of drones, and go home thinking that the project is a great success. A week later, we get a call from ABC Construction saying that the drones keep crashing into the white trucks that they have parked on all their sites. You rush to one of the sites to examine the vision model, and realize that it is mistakenly predicting that the side of the white truck is just the bright sky. Given this single prediction, the drones are flying straight into the trucks, thinking that there is nothing there.<\/p>\n\n\n\n<p>When making predictions about data in the real world, it\u2019s a good idea to include an estimate of how sure your model is about its predictions. 
This is especially true if models are required to make decisions that have real consequences for people\u2019s lives. In applications such as self-driving cars, health care, and insurance, measures of uncertainty can help prevent serious accidents from happening.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Sources of uncertainty<\/strong><\/h2>\n\n\n\n<p>When modeling any process, we are primarily concerned with two types of uncertainty.<\/p>\n\n\n\n<p><strong>Aleatoric Uncertainty<\/strong>: This is the uncertainty that is inherent in the process we are trying to explain. For example, a ping pong ball dropped from the same location above a table will land in a slightly different spot every time, due to complex interactions with the surrounding air. Uncertainty in this category tends to be irreducible in practice.<\/p>\n\n\n\n<p><strong>Epistemic Uncertainty<\/strong>: This is the uncertainty attributed to an inadequate knowledge of the model most suited to explain the data. This uncertainty is reducible given more knowledge about the problem at hand, for example, by adding more parameters to the model, gathering more data, etc.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>So how do we estimate uncertainty?<\/strong><\/h2>\n\n\n\n<p>Let\u2019s consider the case of a bakery trying to estimate the number of cakes it will sell in a given month based on the number of customers that enter the bakery. We\u2019re going to model this problem using a simple\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Linear_regression\" target=\"_blank\" rel=\"noreferrer noopener\">linear regression<\/a>\u00a0model. 
We will then try to estimate the different types of epistemic uncertainty in this model from the available data.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-987\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-1-2.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>The coefficients of this model are subject to\u00a0<a href=\"https:\/\/atmos.washington.edu\/~breth\/classes\/AM582\/lect\/lect3-notes.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">sampling uncertainty<\/a>, and it is unlikely that we will ever determine the true parameters of the model from the sample data. Therefore, providing an estimate of the set of possible values for these coefficients will inform us of how appropriately our current model is able to explain the data.<\/p>\n\n\n\n<p>First, let\u2019s generate some data. We\u2019re going to sample our\u00a0<strong><em>x\u00a0<\/em><\/strong>values from a scaled and shifted unit normal distribution. Our\u00a0<strong><em>y<\/em>\u00a0<\/strong>values are just perturbations of these\u00a0<strong><em>x\u00a0<\/em><\/strong>values.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import numpy as np\n\nfrom numpy.random import randn, randint\nfrom numpy.random import seed\n\n# Fix the random number seed for reproducibility\nseed(1)\n\n# Number of samples\nsize = 100\n\nx = 20 * (2.5 + randn(size))\ny = x + (10 * randn(size) + 50)<\/code><\/pre>\n\n\n\n<p>Our resulting data ends up looking like this:<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-988\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distributed-sample-data-1-1024x775-1.jpg\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>We\u2019re going to start with estimating the uncertainty in our model parameters using bootstrap sampling. 
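<\/p>\n\n\n\n<p>As a quick sanity check before bootstrapping, we can fit a single model to the full dataset to get point estimates of the coefficient and intercept. A minimal sketch, assuming scikit-learn is installed (note that scikit-learn expects a 2-D feature array):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from sklearn.linear_model import LinearRegression\n\n# scikit-learn expects X with shape (n_samples, n_features)\nX = x.reshape(-1, 1)\n\nbase_model = LinearRegression().fit(X, y)\nprint(base_model.coef_.item(), base_model.intercept_)<\/code><\/pre>\n\n\n\n<p>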
Bootstrap sampling is a technique to build new datasets by sampling with replacement from the original dataset. It generates variants of our dataset, and can give us some intuition into the range of parameters that could describe the data.<\/p>\n\n\n\n<p>In the code below, we run 1000 iterations of bootstrap sampling, fit a linear regression model to each sample dataset, and log the coefficients and intercepts of the model at every iteration.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from sklearn.linear_model import LinearRegression\n\ncoefficients = []\nintercepts = []\n\nfor _ in range(1000):\n    # sample with replacement from the original dataset\n    idx = randint(0, size, size)\n    x_train = x[idx].reshape(-1, 1)  # scikit-learn expects a 2-D feature array\n    y_train = y[idx]\n\n    model = LinearRegression().fit(x_train, y_train)\n\n    coefficients.append(model.coef_.item())\n    intercepts.append(model.intercept_)<\/code><\/pre>\n\n\n\n<p>Finally, we extract the 97.5th and 2.5th percentiles from the logged coefficients and intercepts. This gives us the 95% confidence interval of the coefficients and intercepts. Using percentiles to determine the interval has the added advantage of not making assumptions about the sampling distribution of the coefficients.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>upper_coefficient = np.percentile(coefficients, 97.5)\nupper_intercept = np.percentile(intercepts, 97.5)\n\nlower_coefficient = np.percentile(coefficients, 2.5)\nlower_intercept = np.percentile(intercepts, 2.5)<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image\"><img decoding=\"async\" data-id=\"666\" class=\"wp-image-666\" src=\"https:\/\/i1.wp.com\/blog.comet.ml\/wp-content\/uploads\/2019\/10\/distribution-coefficients.png?fit=769%2C618&amp;ssl=1\" alt=\"\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" data-id=\"667\" class=\"wp-image-667\" 
src=\"https:\/\/i2.wp.com\/blog.comet.ml\/wp-content\/uploads\/2019\/10\/distribution-of-coefficients.png?fit=769%2C635&amp;ssl=1\" alt=\"\" \/><\/figure>\n<\/figure>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" class=\"wp-image-989\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distribution-coefficients-1024x823-1.jpg\" alt=\"\" \/>\n<figcaption>Distribution of Coefficients and Intercepts<\/figcaption>\n<\/figure>\n\n\n\n<p>We can now use these coefficients to plot the 95% confidence interval for a family of curves that can describe the data.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" class=\"wp-image-990\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/predictions-1024x851-1.jpg\" alt=\"\" \/><\/figure>\n\n\n\n<p>Now lets estimate the uncertainty in the models predictions. Our linear regression model is predicting the mean number of cakes sold given the fact that\u00a0<strong><em>x<\/em><\/strong>\u00a0number of customers have come in to the store. We expect different values of\u00a0<strong><em>x<\/em><\/strong>\u00a0to produce different mean responses in\u00a0<strong><em>y<\/em><\/strong>, and we\u2019re going to assume that for a fixed\u00a0<strong><em>x<\/em><\/strong>, the response\u00a0<strong><em>y<\/em><\/strong>, is normally distributed.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" class=\"wp-image-991\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/conditional-means.jpg\" alt=\"\" \/><\/figure>\n\n\n\n<p>Based on this assumption, we can approximate the variance in\u00a0<strong><em>y<\/em><\/strong>\u00a0conditioned on\u00a0<strong><em>x<\/em><\/strong>, using the residuals from our predictions. 
With this variance in hand, we can calculate the\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Standard_error\" target=\"_blank\" rel=\"noreferrer noopener\">standard error<\/a>\u00a0of the mean response, and use that to build the confidence interval of the mean response. This is a measure of how well we are approximating the true mean response of\u00a0<strong><em>y<\/em><\/strong>. The smaller we can make this value, the better.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-992\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-2-2.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-993\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-3-1.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-994\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-4-1.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-995\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-5-1.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-996\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-6.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-997\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/predictions-2-1024x826-1.jpg\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>The variance in our conditional mean is dependent on the variance in our coefficient and intercept. 
The standard error is just the square root of this variance. Since the standard error of the conditional mean is proportional to the deviation of the values of\u00a0<strong><em>x\u00a0<\/em><\/strong>from the mean, we can see the interval getting narrower as\u00a0<strong><em>x<\/em><\/strong>\u00a0approaches its mean value.<\/p>\n\n\n\n<p>With the confidence interval, the bakery is able to determine the interval for the average number of cakes it will sell for a given number of customers. However, it still does not know the interval for the possible number of cakes it might sell for a given number of customers.<\/p>\n\n\n\n<p>A confidence interval only accounts for drift in the mean response of\u00a0<strong><em>y<\/em><\/strong>. It does not provide the interval for all possible values of\u00a0<strong><em>y<\/em><\/strong>\u00a0for a given\u00a0<strong><em>x<\/em><\/strong>\u00a0value. In order to do that, we would need to use a prediction interval.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-998\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-7.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-999\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-8.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-1000\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-9.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>The prediction interval is derived in a similar manner as the confidence interval. 
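<\/p>\n\n\n\n<p>To make the comparison concrete, here is a sketch that computes both intervals for a single new point using the usual normal-approximation formulas (z = 1.96 for 95% coverage; the value of x_new and all variable names are illustrative, and scikit-learn is assumed):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from sklearn.linear_model import LinearRegression\n\nX = x.reshape(-1, 1)\nmodel = LinearRegression().fit(X, y)\n\nresiduals = y - model.predict(X)\ns_squared = np.sum(residuals ** 2) \/ (len(y) - 2)\n\nx_mean = x.mean()\nss_x = np.sum((x - x_mean) ** 2)\n\nx_new = 50.0  # an illustrative number of customers\ny_hat = model.predict([[x_new]]).item()\n\n# Standard error of the mean response vs. a single new response\nse_mean = np.sqrt(s_squared * (1 \/ len(x) + (x_new - x_mean) ** 2 \/ ss_x))\nse_pred = np.sqrt(s_squared * (1 + 1 \/ len(x) + (x_new - x_mean) ** 2 \/ ss_x))\n\n# 95% intervals (normal approximation)\nconfidence_interval = (y_hat - 1.96 * se_mean, y_hat + 1.96 * se_mean)\nprediction_interval = (y_hat - 1.96 * se_pred, y_hat + 1.96 * se_pred)<\/code><\/pre>\n\n\n\n<p>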
The only difference is that we include the variance of our dependent variable\u00a0<strong><em>y\u00a0<\/em><\/strong>when calculating the standard error, which leads to a wider interval.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-1001\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/predictions-3-1024x808-1.jpg\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>In the first part of our series on estimating uncertainty, we looked at ways to estimate sources of epistemic uncertainty in a simple regression model. Of course, these estimations become a lot harder when the size and complexity of your data and model increase.<\/p>\n\n\n\n<p>Bootstrapping techniques won\u2019t work when we\u2019re dealing with large neural networks, and estimating the confidence and prediction intervals through the standard error only works when normality assumptions are made about the sampling distributions of the model\u2019s residuals and parameters. How do we measure uncertainty when these assumptions are violated?<\/p>\n\n\n\n<p>In the next part of this series, we will look at ways to quantify uncertainty in more complex models.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; \u201cWe demand rigidly defined areas of doubt and uncertainty!\u201d &#8211; Douglas Adams, The Hitchhiker\u2019s Guide to the Galaxy Why is uncertainty important? Let\u2019s imagine for a second that we\u2019re building a computer vision model for a construction company, ABC Construction. 
The company is interested in automating its aerial site surveillance process, and would like [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1974,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[8,6],"tags":[],"coauthors":[128],"class_list":["post-1958","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-comet-community-hub","category-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Estimating Uncertainty in Machine Learning Models \u2014 Part 1 - Comet<\/title>\n<meta name=\"description\" content=\"In the first part of this series, we look at ways to estimate sources of epistemic uncertainty in a simple regression model.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Estimating Uncertainty in Machine Learning Models \u2014 Part 1\" \/>\n<meta property=\"og:description\" content=\"In the first part of this series, we look at ways to estimate sources of epistemic uncertainty in a simple regression model.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" 
content=\"2019-09-13T00:08:25+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distributed-sample-data.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1300\" \/>\n\t<meta property=\"og:image:height\" content=\"984\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Dhruv Nair\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Dhruv Nair\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Estimating Uncertainty in Machine Learning Models \u2014 Part 1 - Comet","description":"In the first part of this series, we look at ways to estimate sources of epistemic uncertainty in a simple regression model.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/","og_locale":"en_US","og_type":"article","og_title":"Estimating Uncertainty in Machine Learning Models \u2014 Part 1","og_description":"In the first part of this series, we look at ways to estimate sources of epistemic uncertainty in a simple regression 
model.","og_url":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2019-09-13T00:08:25+00:00","og_image":[{"width":1300,"height":984,"url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distributed-sample-data.jpg","type":"image\/jpeg"}],"author":"Dhruv Nair","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Dhruv Nair","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/"},"author":{"name":"engineering@atre.net","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b"},"headline":"Estimating Uncertainty in Machine Learning Models \u2014 Part 1","datePublished":"2019-09-13T00:08:25+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/"},"wordCount":1132,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distributed-sample-data.jpg","articleSection":["Comet Community Hub","Machine Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/","url":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/","name":"Estimating Uncertainty in Machine Learning Models \u2014 Part 1 - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distributed-sample-data.jpg","datePublished":"2019-09-13T00:08:25+00:00","description":"In the first part of this series, we look at ways to estimate sources of epistemic uncertainty in a simple regression model.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/#primaryimage","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distributed-sample-data.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/distributed-sample-data.jpg","width":1300,"height":984,"caption":"number of cakes sold over number of number of customers graph"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-1\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Estimating Uncertainty in Machine Learning Models \u2014 Part 1"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models 
Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b","name":"engineering@atre.net","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/027c18177377edf459980f0cfb83706c","url":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","caption":"engineering@atre.net"},"sameAs":["https:\/\/live-cometml.pantheonsite.io"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/engineeringatre-net\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/1958","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/p
osts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=1958"}],"version-history":[{"count":0,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/1958\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media\/1974"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=1958"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=1958"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=1958"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=1958"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}