{"id":2013,"date":"2019-10-18T21:24:21","date_gmt":"2019-10-19T05:24:21","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/"},"modified":"2019-10-18T21:24:21","modified_gmt":"2019-10-19T05:24:21","slug":"estimating-uncertainty-in-machine-learning-models-part-3","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/","title":{"rendered":"Estimating Uncertainty in Machine Learning Models \u2014 Part 3"},"content":{"rendered":"\n<p>&nbsp;<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>Check out part 1 (<\/em><a href=\"https:\/\/medium.com\/comet-ml\/estimating-uncertainty-in-machine-learning-models-part-1-2bd1209c347c\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a><em>) and part 2 (<\/em><a href=\"https:\/\/medium.com\/comet-ml\/estimating-uncertainty-in-machine-learning-models-part-2-8711c832cc15\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a><em>) of this series<\/em><\/p>\n<\/blockquote>\n\n\n\n<p>In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1506.02142.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">MC Dropout<\/a>.<\/p>\n\n\n\n<p>So far, the approaches we have looked at have involved creating variations in the dataset or the model parameters to estimate uncertainty. The main drawback is that these approaches require us to either train multiple models or make multiple predictions in order to figure out the variance in our model\u2019s predictions.<\/p>\n\n\n\n<p>In situations with latency constraints, techniques such as MC Dropout might not be appropriate for estimating a prediction interval. 
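To make the latency cost concrete: with MC Dropout, every interval estimate requires T stochastic forward passes through the network, and the interval is derived from the spread of those T outputs. Here is a minimal sketch of that recipe, using simulated predictions as a stand-in for a real dropout-enabled model (the arrays and the choice of T = 100 are illustrative assumptions, not values from this series):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for T stochastic forward passes of a model with dropout
# left active at prediction time. In practice, each row would be
# one call to model.predict(x) on the same batch of inputs.
T = 100
n_samples = 10
mc_predictions = rng.normal(loc=25.0, scale=2.0, size=(T, n_samples))

# The predictive mean and standard deviation come from the T passes.
mean_pred = mc_predictions.mean(axis=0)
std_pred = mc_predictions.std(axis=0)

# 95% interval, assuming a Gaussian predictive distribution.
lower = mean_pred - 1.96 * std_pred
upper = mean_pred + 1.96 * std_pred
```

The cost is T forward passes per prediction, which motivates looking for approaches that need only one or two passes.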
What can we do to reduce the number of predictions we need to estimate the interval?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Using the Maximum Likelihood Estimation (MLE) Method to Estimate Intervals<\/h2>\n\n\n\n<p>In part 1 of this series, we made the assumption that the mean response of our dependent variable,\u00a0<strong>\u03bc(y|x)<\/strong>, is normally distributed.<\/p>\n\n\n\n<p>The MLE method involves building two models: one to estimate the conditional mean response,\u00a0<strong>\u03bc(y|x)<\/strong>, and another to estimate the variance,\u00a0<strong>\u03c3\u00b2<\/strong>, in the predicted response.<\/p>\n\n\n\n<p>We do this by first splitting our training data into two halves. The first model,\u00a0<strong>m\u03bc<\/strong>,\u00a0is trained as a regular regression model using the first half of the data. This model is then used to make predictions on the second half of the data.<\/p>\n\n\n\n<p>The second model,\u00a0<strong>m\u03c3\u00b2<\/strong>,\u00a0is trained on the second half of the data, using the squared residuals of\u00a0<strong>m\u03bc<\/strong>\u00a0as the dependent variable.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-937\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-1.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-936\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-2.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>The final prediction interval can be expressed in the following way:<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-935\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-3.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>Here,\u00a0<strong>\u03b1\u00a0<\/strong>is the critical value for the desired level of confidence according 
to the Gaussian Distribution.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Let\u2019s try it out<\/h3>\n\n\n\n<p>We\u2019re going to be using the Auto MPG dataset again. Notice how the training data is split once more in the last step.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Mean Variance Estimation Method\n\nimport keras\nimport pandas as pd\n\ndataset_path = keras.utils.get_file(\"auto-mpg.data\", \"http:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/auto-mpg\/auto-mpg.data\")\ncolumn_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',\n                'Acceleration', 'Model Year', 'Origin']\nraw_dataset = pd.read_csv(dataset_path, names=column_names,\n                      na_values = \"?\", comment='\\t',\n                      sep=\" \", skipinitialspace=True)\n\ndataset = raw_dataset.copy()\ndataset = dataset.dropna()\n\n# One-hot encode the Origin column\norigin = dataset.pop('Origin')\n\ndataset['USA'] = (origin == 1)*1.0\ndataset['Europe'] = (origin == 2)*1.0\ndataset['Japan'] = (origin == 3)*1.0\n\ntrain_dataset = dataset.sample(frac=0.8,random_state=0)\ntest_dataset = dataset.drop(train_dataset.index)\n\n# Split the training data in half: one half for the mean model,\n# the other for the variance model\nmean_dataset = train_dataset.sample(frac=0.5 , random_state=0)\nvar_dataset = train_dataset.drop(mean_dataset.index)<\/code><\/pre>\n\n\n\n<p>Next, we\u2019re going to create two models, one to estimate the mean and another to estimate the variance in our data.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import keras\n\nfrom keras.models import Model\nfrom keras.layers import Input, Dense, Dropout\ndropout_rate = 0.5\n\ndef model_fn():\n    inputs = Input(shape=(9,))\n    x = Dense(64, activation='relu')(inputs)\n    x = Dropout(dropout_rate)(x)\n    x = Dense(64, activation='relu')(x)\n    x = Dropout(dropout_rate)(x)\n    outputs = Dense(1)(x)\n\n    model = Model(inputs, outputs)\n\n    return model\n\nmean_model = model_fn()\nmean_model.compile(loss=\"mean_squared_error\", optimizer='adam')\n\nvar_model = model_fn()\nvar_model.compile(loss=\"mean_squared_error\", optimizer='adam')<\/code><\/pre>\n\n\n\n<p>Finally, we\u2019re 
going to separate out the labels, normalize our data, and start training.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Separate the labels before computing the normalization statistics,\n# so the MPG column is not carried into the normalized model inputs\ntrain_labels = train_dataset.pop('MPG')\nmean_labels = mean_dataset.pop('MPG')\nvar_labels = var_dataset.pop('MPG')\ntest_labels = test_dataset.pop('MPG')\n\ntrain_stats = train_dataset.describe()\n\ndef norm(x):\n    return (x - train_stats.loc['mean']) \/ train_stats.loc['std']\n\nnormed_train_data = norm(train_dataset)\nnormed_mean_data = norm(mean_dataset)\nnormed_var_data = norm(var_dataset)\nnormed_test_data = norm(test_dataset)<\/code><\/pre>\n\n\n\n<p>Once the mean model has been trained, we can use it to make predictions on the second half of our dataset and compute the squared residuals.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>EPOCHS = 100\n\nmean_model.fit(normed_mean_data, mean_labels, epochs=EPOCHS, validation_split=0.2, verbose=0)\n\nmean_predictions = mean_model.predict(normed_var_data)\nsquared_residuals = (var_labels.values.reshape(-1,1) - mean_predictions) ** 2\n\nvar_model.fit(normed_var_data, squared_residuals, epochs=EPOCHS, validation_split=0.2, verbose=0)<\/code><\/pre>\n\n\n\n<p>Let\u2019s take a look at the intervals produced by this approach.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-934\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/intervals-1.jpg\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>You will notice that the highly inaccurate predictions have much larger intervals around the mean.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Using Quantile Regression to Estimate Intervals<\/h2>\n\n\n\n<p>What if we do not want to make assumptions about the distribution of our response variable, and want to directly estimate the upper and lower limits of our target variable?<\/p>\n\n\n\n<p>A quantile loss can help us estimate a target percentile response, instead of a mean response. For example: 
Predicting the 0.25 quantile of our target tells us that, given our current set of features, we expect 25% of the target values to be less than or equal to our prediction.<\/p>\n\n\n\n<p>If we train two separate regression models, one for the 0.025 quantile and another for the 0.975 quantile, we are effectively saying that we expect 95% of our target values to fall within this interval, i.e.\u00a0<strong>a 95% prediction interval<\/strong>.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-933\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/formula-4.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Let\u2019s try it out<\/h3>\n\n\n\n<p>Keras does not come with a default quantile loss, so we\u2019re going to use the\u00a0<a href=\"https:\/\/towardsdatascience.com\/deep-quantile-regression-c85481548b5a\" target=\"_blank\" rel=\"noreferrer noopener\">following implementation<\/a>\u00a0from\u00a0<a href=\"https:\/\/towardsdatascience.com\/@sachin.abeywardana\" target=\"_blank\" rel=\"noreferrer noopener\">Sachin Abeywardana<\/a>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import keras.backend as K\n\ndef tilted_loss(q,y,f):\n    # Pinball (quantile) loss: penalizes under- and over-prediction\n    # asymmetrically according to the target quantile q\n    e = (y-f)\n    return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)\n\n# Median model\nmodel = model_fn()\nmodel.compile(loss=lambda y,f: tilted_loss(0.5,y,f), optimizer='adam')\n\n# Lower bound of the 95% interval\nlowerq_model = model_fn()\nlowerq_model.compile(loss=lambda y,f: tilted_loss(0.025,y,f), optimizer='adam')\n\n# Upper bound of the 95% interval\nupperq_model = model_fn()\nupperq_model.compile(loss=lambda y,f: tilted_loss(0.975,y,f), optimizer='adam')<\/code><\/pre>\n\n\n\n<p>The resulting predictions look like this:<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-932\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/intervals-2.jpg\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>One of the disadvantages of 
this approach is that it tends to produce very wide intervals. You will also notice that the intervals are not symmetric about the median estimated values (blue dots).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Evaluating the Predicted Intervals<\/h2>\n\n\n\n<p>In the last\u00a0<a href=\"https:\/\/medium.com\/comet-ml\/estimating-uncertainty-in-machine-learning-models-part-2-8711c832cc15\" target=\"_blank\" rel=\"noreferrer noopener\">post<\/a>, we introduced two metrics to assess the quality of our interval predictions: PICP (Prediction Interval Coverage Probability) and MPIW (Mean Prediction Interval Width). The table below compares these metrics across the last three approaches we have used to estimate uncertainty in a neural network.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" class=\"wp-image-931\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/compare-techniques.png\" alt=\"\" \/><\/figure>\n<\/div>\n\n\n\n<p>We see that the Mean-Variance Estimation method produces the intervals with the smallest width, which results in a reduction in its PICP score. MC Dropout and Quantile Regression produce very wide intervals, leading to a perfect PICP score.<\/p>\n\n\n\n<p>Balancing MPIW against PICP is an open-ended question, and depends entirely on how the model is being applied. Ideally, we would like our intervals to be as tight as possible, with a low mean width, while still including our target values the majority of the time.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>These techniques can readily be implemented on top of your existing models with very few changes, and providing uncertainty estimates with your predictions makes them significantly more trustworthy.<\/p>\n\n\n\n<p>I hope you enjoyed our series on uncertainty. 
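As a reference for how the two metrics in the table above are computed, here is a minimal sketch of PICP and MPIW (the function names and the toy arrays are illustrative, not taken from the post's experiments):

```python
import numpy as np

def picp(y_true, lower, upper):
    # Prediction Interval Coverage Probability:
    # fraction of targets that fall inside their predicted interval
    covered = (y_true >= lower) & (y_true <= upper)
    return covered.mean()

def mpiw(lower, upper):
    # Mean Prediction Interval Width: average distance between bounds
    return (upper - lower).mean()

# Toy example: four targets with their predicted interval bounds
y_true = np.array([24.0, 30.0, 18.0, 22.0])
lower = np.array([20.0, 26.0, 19.0, 18.0])
upper = np.array([28.0, 33.0, 23.0, 26.0])

print(picp(y_true, lower, upper))  # 0.75 (one target falls outside its interval)
print(mpiw(lower, upper))          # 6.75
```

A perfect PICP of 1.0 is trivially achievable with arbitrarily wide intervals, which is why the two metrics are reported together.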
Keep watching this space for more great content!!<\/p>\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n<h2 class=\"wp-block-heading\"><em>Want to stay in the loop?\u00a0<a href=\"https:\/\/info.comet.ml\/newsletter-signup\/?utm_campaign=tensorboard-integration&amp;utm_source=blog&amp;utm_medium=CTA\">Subscribe to the Comet Newsletter<\/a>\u00a0for weekly insights and perspective on the latest ML news, projects, and more.<\/em><\/h2>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; Check out part 1 (here)and part 2 (here) of this series In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using\u00a0MC Dropout. So far, the approaches we have [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2021,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[8,6],"tags":[],"coauthors":[128],"class_list":["post-2013","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-comet-community-hub","category-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Estimating Uncertainty in Machine Learning Models \u2014 Part 3 - Comet<\/title>\n<meta name=\"description\" content=\"In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using MC Dropout.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" 
\/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Estimating Uncertainty in Machine Learning Models \u2014 Part 3\" \/>\n<meta property=\"og:description\" content=\"In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using MC Dropout.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2019-10-19T05:24:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/estimating-uncertainty-3.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"699\" \/>\n\t<meta property=\"og:image:height\" content=\"525\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Dhruv Nair\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Dhruv Nair\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Estimating Uncertainty in Machine Learning Models \u2014 Part 3 - Comet","description":"In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using MC Dropout.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/","og_locale":"en_US","og_type":"article","og_title":"Estimating Uncertainty in Machine Learning Models \u2014 Part 3","og_description":"In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using MC Dropout.","og_url":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2019-10-19T05:24:21+00:00","og_image":[{"width":699,"height":525,"url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/estimating-uncertainty-3.jpg","type":"image\/jpeg"}],"author":"Dhruv Nair","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Dhruv Nair","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/"},"author":{"name":"engineering@atre.net","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b"},"headline":"Estimating Uncertainty in Machine Learning Models \u2014 Part 3","datePublished":"2019-10-19T05:24:21+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/"},"wordCount":788,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/estimating-uncertainty-3.jpg","articleSection":["Comet Community Hub","Machine Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/","url":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/","name":"Estimating Uncertainty in Machine Learning Models \u2014 Part 3 - Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/estimating-uncertainty-3.jpg","datePublished":"2019-10-19T05:24:21+00:00","description":"In the last part of our series on uncertainty estimation, we addressed the limitations of 
approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using MC Dropout.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/#primaryimage","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/estimating-uncertainty-3.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/estimating-uncertainty-3.jpg","width":699,"height":525,"caption":"resulting predictions"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/estimating-uncertainty-in-machine-learning-models-part-3\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Estimating Uncertainty in Machine Learning Models \u2014 Part 3"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, 
Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b","name":"engineering@atre.net","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/027c18177377edf459980f0cfb83706c","url":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","caption":"engineering@atre.net"},"sameAs":["https:\/\/live-cometml.pantheonsite.io"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/engineeringatre-net\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/2013","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=2013"}],"version-history":[{"count":0,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/2013\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"h
ref":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media\/2021"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=2013"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=2013"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=2013"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=2013"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}