{"id":2321,"date":"2021-06-02T15:06:38","date_gmt":"2021-06-02T23:06:38","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/"},"modified":"2021-06-02T15:06:38","modified_gmt":"2021-06-02T23:06:38","slug":"comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/","title":{"rendered":"Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context Learning"},"content":{"rendered":"\n<h3 class=\"has-text-color wp-block-heading\" style=\"color: #747474;\"><em>Facebook AI&#8217;s new unsupervised approach to speech recognition, Sweetviz + Comet for supercharged EDA, and more<\/em><\/h3>\n\n\n\n<p>Welcome to issue #3 of The Comet Newsletter!<\/p>\n\n\n\n<p>This week, we take a closer look at Facebook\u2019s impressive new unsupervised approach to speech recognition and the concept of \u201cIn-context learning\u201d as it relates to GPT-3.<\/p>\n\n\n\n<p>Additionally, you might enjoy a new way to automatically visualize and log your EDA experiments with Sweetviz+Comet, as well as some compelling perspective on the issue of academic fraud in the ML community.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Like what you\u2019re reading? 
<a href=\"https:\/\/info.comet.ml\/newsletter-signup\/\">Subscribe here.<\/a><\/h4>\n\n\n\n<p>And be sure to follow us on <a href=\"https:\/\/twitter.com\/cometml\/\">Twitter<\/a> and <a href=\"https:\/\/www.linkedin.com\/company\/comet-ml\/\">LinkedIn<\/a> \u2014 drop us a note if you have something we should cover in an upcoming issue!<\/p>\n\n\n\n<p>Happy Reading,<\/p>\n\n\n\n<p>Austin<\/p>\n\n\n\n<p>Head of Community, Comet<\/p>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p><strong>INDUSTRY<\/strong> | WHAT WE&#8217;RE READING | PROJECTS | OPINION<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Facebook AI: High-performance speech recognition with no supervision at all<\/h2>\n\n\n\n<p>Traditionally, speech recognition systems require thousands of hours of annotated speech and text data in order to correctly identify phonemes. This limits the technology to only a small fraction of the languages spoken on earth, and it largely favors the <a href=\"https:\/\/www.scientificamerican.com\/article\/speech-recognition-tech-is-yet-another-example-of-bias\/\">American dialect of English<\/a>, which has the largest corpus of annotated data.<\/p>\n\n\n\n<p>Facebook AI Research (FAIR) recently open-sourced their new unsupervised learning approach to speech recognition. 
Their approach leverages self-supervised speech representations from their previously published <a href=\"https:\/\/github.com\/pytorch\/fairseq\/tree\/master\/examples\/wav2vec\">wav2vec<\/a> model to segment unlabeled audio and learn a mapping from these representations to phonemes via adversarial training.<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" class=\"wp-image-4995\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/wav-2-vec-how-trained.png\" alt=\"\" \/>\n<figcaption><a href=\"https:\/\/ai.facebook.com\/research\/publications\/unsupervised-speech-recognition\">How wav2vec-U is trained<\/a><\/figcaption>\n<\/figure>\n<\/div>\n\n\n\n<p>The performance of this method on benchmark datasets is truly astounding: Wav2Vec-U achieves a word error rate of 5.9% with zero hours of annotated data. This is comparable to the best speech recognition system from 2019, SpecAugment, which required 960 hours of labeled data.<\/p>\n\n\n\n<p>The dependency on labeled training data is a huge barrier to scaling and democratizing machine learning technology globally, since it just isn\u2019t possible to collect high-quality, annotated data for every possible language. This approach from Facebook seems like the breakthrough that could lead to speech recognition systems being available in all spoken languages. 
Learning across all language types suggests that these models are comprehensive enough to learn some \u201chigher level\u201d representation of the data in a way that humans cannot.<\/p>\n\n\n\n<p>\u201cWe are moving from \u2018all problems that can be solved with a declarative program\u2019 to \u2018all problems that can be express[ed] as inputs and outputs\u2019 &#8211; which is &#8211; um &#8211; a lot of problems?\u201d<\/p>\n\n\n\n<p><a href=\"https:\/\/twitter.com\/schrep\/status\/1396927610911813633\">Mike Schroepfer<\/a> (@schrep), CTO at Facebook<\/p>\n\n\n\n<p><a href=\"https:\/\/ai.facebook.com\/blog\/wav2vec-unsupervised-speech-recognition-without-supervision\/\">Read the full overview from Facebook AI.<\/a><\/p>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p>INDUSTRY | <strong>WHAT WE&#8217;RE READING<\/strong> | PROJECTS | OPINION<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">GPT-3 and In-context Learning<\/h2>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-5006\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Stanford-in-context-learning-1-1024x469.png\" alt=\"\" width=\"512\" height=\"235\" \/>\n<figcaption><a href=\"http:\/\/ai.stanford.edu\/blog\/in-context-learning\/\">Image Source<\/a><\/figcaption>\n<\/figure>\n<\/div>\n\n\n\n<p>In this post, researchers at Stanford explore an interesting emergent phenomenon within the GPT-3 model: <em>in-context learning<\/em>.<\/p>\n\n\n\n<p>In-context learning refers to a form of learning in which the model is fed an input that describes a new task along with a set of examples; the model then behaves as if it has \u201clearned\u201d the task described in the input.<\/p>\n\n\n\n<p>In the example below, the aim is to have the model produce an output that is simply a copy of the input. 
The model has not been explicitly trained for this purpose; the output sequence is generated purely from the context provided by the prompt.<\/p>\n\n\n\n<p>For example, the researchers fed in the following sequence completion task as a prompt:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Input: g, c, b, h, d<br \/>Output: g, c, b, h, d<br \/>Input: b, g, d, h, a<br \/>Output: b, g, d, h, a<br \/>Input: a, b, c, d, e<br \/>Output:<\/pre>\n\n\n\n<p>The expected completion:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">a, b, c, d, e<\/pre>\n\n\n\n<p>The team tried many different prompts to test GPT-3\u2019s in-context learning ability, measuring accuracy while varying both the number of in-context examples and the model size. They found that output accuracy consistently increased as more in-context examples were provided, and that results were generally better when the task was concordant with the expected semantics.<\/p>\n\n\n\n<p>\u201cThe idea that simply minimizing the negative log loss of a single next-word-prediction objective implies this apparent optimization of many more arbitrary tasks \u2013 amounting to a paradigm of \u201clearning\u201d during inference time \u2013 is a powerful one, and one that raises many questions.\u201d &#8211; Frieda Rong (@frieda_rong)<\/p>\n\n\n\n<p><a href=\"http:\/\/ai.stanford.edu\/blog\/in-context-learning\/\">Read Frieda\u2019s full blog post here.<\/a><\/p>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p>INDUSTRY | WHAT WE&#8217;RE READING | <strong>PROJECTS<\/strong> | OPINION<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Automatically Track your EDA with Sweetviz + Comet<\/h3>\n\n\n\n<p>&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-5009\" 
src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-sweetviz-asset-1024x517.jpg\" alt=\"\" width=\"580\" height=\"293\" \/>\n<figcaption><a href=\"https:\/\/towardsdatascience.com\/automatically-track-all-your-eda-using-sweetviz-and-comet-ml-9cb7545b0fab\">Sweetviz + Comet integration<\/a><\/figcaption>\n<\/figure>\n<\/div>\n\n\n\n<p><a href=\"https:\/\/bit.ly\/3fLQBJm\">Sweetviz<\/a> is an excellent Python library that, with just two lines of code, allows you to jumpstart your Exploratory Data Analysis (EDA) with a range of rich visualizations. In other words&#8230;a perfect fit to integrate with Comet!<\/p>\n\n\n\n<p>In a new article on Towards Data Science, Francois Bertrand, the engineer behind Sweetviz, shows how this integration can help accelerate your EDA work, while ensuring you&#8217;re able to track and manage your dataset experimentation history.<\/p>\n\n\n\n<p><a href=\"https:\/\/towardsdatascience.com\/automatically-track-all-your-eda-using-sweetviz-and-comet-ml-9cb7545b0fab\">Read Francois\u2019s full article here<\/a><\/p>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p>INDUSTRY | WHAT WE&#8217;RE READING | PROJECTS | <strong>OPINION<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Please Commit More Blatant Academic Fraud<\/h3>\n\n\n\n<p>&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/img_60b655dc47df8.jpg\" alt=\"\" \/>\n<figcaption><a href=\"https:\/\/cacm.acm.org\/magazines\/2021\/6\/252840-collusion-rings-threaten-the-integrity-of-computer-science-research\/fulltext#FNA\">Image Source<\/a><\/figcaption>\n<\/figure>\n<\/div>\n\n\n\n<p><a href=\"https:\/\/jacobbuckman.com\/\">Jacob Buckman<\/a> (@jacobmbuckman), current PhD student at MILA and former Google AI Resident, shares his perspective on the first well-documented case of <a 
href=\"https:\/\/cacm.acm.org\/magazines\/2021\/6\/252840-collusion-rings-threaten-the-integrity-of-computer-science-research\/fulltext#FNA\">academic fraud<\/a> in the artificial intelligence community. The incident centers on a group of academics forming collusion rings in order to get their papers accepted at top conferences.<\/p>\n\n\n\n<p>Buckman posits that this blatant fraud is the natural extension of the day-to-day fraud that most academics in the ML community are prone to committing: \u201cRunning a big hyperparameter sweep on your proposed approach but using the defaults for the baseline\u201d or \u201cCherry-picking examples where your model looks good, or cherry-picking whole datasets to test on, where you\u2019ve confirmed your model\u2019s advantage.\u201d<\/p>\n\n\n\n<p>He argues that this type of low-key fraud is indistinguishable from simple mistakes\u2014and comes with plausible deniability. Almost everyone in the ML community is complicit in this subtle fraud, and as a result nobody is willing to accept its existence.<\/p>\n\n\n\n<p>\u201cWe have developed a collective blind-spot around a depressing reality,\u201d Buckman says. \u201cEven at top conferences, the median published paper contains no truth or insight.\u201d<\/p>\n\n\n\n<p>By participating in this blatant case of fraud that is no longer within the bounds of plausible deniability, the researchers in the collusion ring may have finally succeeded in forcing the community to acknowledge one of its blind spots.<\/p>\n\n\n\n<p><a href=\"https:\/\/jacobbuckman.com\/2021-05-29-please-commit-more-blatant-academic-fraud\/\">Read Jacob\u2019s full essay here. 
<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Facebook AI&#8217;s new unsupervised approach to speech recognition, Sweetviz + Comet for supercharged EDA, and more<\/p>\n","protected":false},"author":1,"featured_media":2304,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[10],"tags":[],"coauthors":[130],"class_list":["post-2321","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context Learning - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context Learning\" \/>\n<meta property=\"og:description\" content=\"Facebook AI&#039;s new unsupervised approach to speech recognition, Sweetviz + Comet for supercharged EDA, and more\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" 
content=\"2021-06-02T23:06:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"627\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Austin Kodra\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Austin Kodra\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context Learning - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/","og_locale":"en_US","og_type":"article","og_title":"Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context Learning","og_description":"Facebook AI's new unsupervised approach to speech recognition, Sweetviz + Comet for supercharged EDA, and 
more","og_url":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2021-06-02T23:06:38+00:00","og_image":[{"width":1200,"height":627,"url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","type":"image\/jpeg"}],"author":"Austin Kodra","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Austin Kodra","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/"},"author":{"name":"engineering@atre.net","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b"},"headline":"Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context 
Learning","datePublished":"2021-06-02T23:06:38+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/"},"wordCount":1023,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","articleSection":["Industry"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/","url":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/","name":"Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context Learning - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","datePublished":"2021-06-02T23:06:38+00:00","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/#primaryimage","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","width":1200,"height":627,"caption":"Comet Newsletter"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/comet-newsletter-3-comet-sweetviz-facebook-unsupervised-speech-recognition-gpt3-in-context-learning\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Issue 3: Comet + Sweetviz, Unsupervised Speech Recognition, GPT-3 and In-context 
Learning"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, 
Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b","name":"engineering@atre.net","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/027c18177377edf459980f0cfb83706c","url":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","caption":"engineering@atre.net"},"sameAs":["https:\/\/live-cometml.pantheonsite.io"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/engineeringatre-net\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/2321","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=2321"}],"version-history":[{"count":0,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/2321\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media\/2304"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=2321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=2321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=2321"},{"taxonomy":"
author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=2321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}