{"id":2513,"date":"2021-09-09T15:33:31","date_gmt":"2021-09-09T23:33:31","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/"},"modified":"2021-09-09T15:33:31","modified_gmt":"2021-09-09T23:33:31","slug":"the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/","title":{"rendered":"Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language Models"},"content":{"rendered":"\n<p>Welcome to issue #17 of The Comet Newsletter!<br \/><br \/>In this week&#8217;s issue, we share a two-part deep dive on graph neural networks (<a href=\"https:\/\/distill.pub\/2021\/gnn-intro\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\">part 1<\/a>; <a href=\"https:\/\/distill.pub\/2021\/understanding-gnns\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\">part 2<\/a>), as well as <a href=\"https:\/\/venturebeat.com\/2021\/09\/07\/large-language-models-arent-always-more-complex\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\">a re-evaluation of the idea that bigger = more complex when it comes to language models<\/a>.<br \/><br \/>Additionally, we explore an <a 
href=\"https:\/\/huyenchip.com\/2021\/09\/07\/a-friendly-introduction-to-machine-learning-compilers-and-optimizers.html?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\">introduction to ML optimizers and compilers<\/a> and an <a href=\"https:\/\/concept-drift.fastforwardlabs.com\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\">in-depth guide to the ins and outs of concept drift once a model is deployed<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Like what you\u2019re reading? <a href=\"https:\/\/info.comet.ml\/newsletter-signup\/\">Subscribe here.<\/a><\/h3>\n\n\n\n<p>And be sure to follow us on <a href=\"https:\/\/twitter.com\/Cometml\">Twitter<\/a> and <a href=\"https:\/\/www.linkedin.com\/company\/comet-ml\/\">LinkedIn<\/a> \u2014 drop us a note if you have something we should cover in an upcoming issue!<\/p>\n\n\n\n<p>Happy Reading,<\/p>\n\n\n\n<p>Austin<\/p>\n\n\n\n<p>Head of Community, Comet<\/p>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p><strong>INDUSTRY<\/strong> | WHAT WE&#8217;RE READING | PROJECTS<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><a href=\"https:\/\/venturebeat.com\/2021\/09\/07\/large-language-models-arent-always-more-complex\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Large language models aren&#8217;t always more complex<\/a><\/h2>\n\n\n\n<p>Whenever a new state-of-the-art language model arrives (think GPT-3), it&#8217;s generally accompanied by media coverage and analysis touting the number of parameters it contains (these days, in the 150+ billion
range), as well as its complexity.<\/p>\n\n\n\n<p>But as Kyle Wiggers writes for VentureBeat, industry experts are becoming increasingly skeptical that the size of the models and the datasets they&#8217;re trained on correspond directly to improved performance.<\/p>\n\n\n\n<p>In terms of parameter counts and how the models are trained, one alternative to sheer scale is explicit &#8220;instruction tuning&#8221;, or fine-tuning a model on the specific kinds of NLP tasks it should solve (e.g. translation, sentiment analysis). Interestingly, models trained in this fashion are still often able to generalize to other tasks, much like GPT-3. Wiggers cites the FLAN architecture (&#8220;Fine-tuned language network&#8221;) as an example of this dynamic.<\/p>\n\n\n\n<p>When it comes to dataset problems, Wiggers notes that &#8220;over-filtering&#8221; for data that achieves high classifier scores can actually lead to worse model performance. This kind of excessive optimization can lead to a &#8220;misalignment between proxy and true objective(s)&#8221; in what the model is designed to achieve, according to Leo Gao, data scientist at EleutherAI.<\/p>\n\n\n\n<p><a href=\"https:\/\/venturebeat.com\/2021\/09\/07\/large-language-models-arent-always-more-complex\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Read the full story in VentureBeat<\/a>\u00a0<\/p>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p>INDUSTRY | <strong>WHAT WE&#8217;RE READING<\/strong> | PROJECTS<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><a href=\"https:\/\/concept-drift.fastforwardlabs.com\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer 
noopener\">Inferring Concept Drift Without Labeled Data<\/a><\/h2>\n\n\n\n<p>In this incredibly thorough guide,\u00a0<a href=\"https:\/\/twitter.com\/andrew_reed_r?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Andrew Reed<\/a>\u00a0and\u00a0<a href=\"https:\/\/twitter.com\/NishaMuktewar?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Nisha Muktewar<\/a>\u00a0of Fast Forward Labs offer a deeply researched survey of the background, challenges, and approaches to solving what&#8217;s known as &#8220;concept drift&#8221; in production ML models. Put simply, concept drift refers to the fact that the data a model is trained on often doesn&#8217;t reflect dynamic (and shifting) real-world conditions\u2014and therefore the resulting model, left alone, won&#8217;t be able to adapt and generalize to these changing conditions.<\/p>\n\n\n\n<p>This impressive guide hammers home the point that an ML team&#8217;s work isn&#8217;t done once a model has been deployed\u2014in fact, in most cases, the work is just beginning. 
And not only does this guide do a great job of clearly defining concept drift, it also provides practical approaches to solving the problem, a sample use case, and a number of additional concerns that ML teams should consider, ranging from ethical considerations to handling concept drift within &#8220;big data&#8221; systems.<\/p>\n\n\n\n<p><a href=\"https:\/\/concept-drift.fastforwardlabs.com\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Read the full report from Fast Forward Labs<\/a><\/p>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p>INDUSTRY | WHAT WE&#8217;RE READING | <strong>PROJECTS<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><a href=\"https:\/\/distill.pub\/2021\/gnn-intro\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">From Distill Pub: Graph Neural Networks Deep Dive<\/a><\/h2>\n\n\n\n<p>While we recently reported on Distill&#8217;s hiatus (which they are still on, apparently), we were treated to a bit of a surprise last week when they shared a 2-part series on graph neural networks, authored by several researchers at Google.<\/p>\n\n\n\n<p>Part 1 provides an overview of graph neural networks, including how they&#8217;re structured, how they work, and the kinds of problems they&#8217;re suited to solve.<\/p>\n\n\n\n<p>Part 2 zooms in on convolutions on graphs, which shape the &#8220;building blocks and design choices of graph neural networks.&#8221;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a 
href=\"https:\/\/distill.pub\/2021\/gnn-intro\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Read Part 1: &#8220;A Gentle Introduction to Graph Neural Networks&#8221;<\/a><\/li>\n<li><a href=\"https:\/\/distill.pub\/2021\/understanding-gnns\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Read Part 2: &#8220;Understanding Convolutions on Graphs&#8221;<\/a><\/li>\n<\/ul>\n\n\n<center>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/center>\n\n\n<p>INDUSTRY | WHAT WE&#8217;RE READING | <strong>PROJECTS<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><a href=\"https:\/\/huyenchip.com\/2021\/09\/07\/a-friendly-introduction-to-machine-learning-compilers-and-optimizers.html?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">A friendly introduction to machine learning compilers and optimizers<\/a><\/h2>\n\n\n\n<p>In this helpful guide, Stanford instructor and ML engineer\u00a0<a href=\"https:\/\/huyenchip.com\/?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Chip Huyen<\/a>\u00a0focuses on deploying ML models on the edge, drilling down into topics like model compatibility and performance, the differences in deploying via the cloud vs. 
the edge, best practices for optimizing models toward this kind of deployment, a survey of the different kinds of compilers and optimization methods, and more.<\/p>\n\n\n\n<p><a href=\"https:\/\/huyenchip.com\/2021\/09\/07\/a-friendly-introduction-to-machine-learning-compilers-and-optimizers.html?utm_campaign=Monthly%20newsletter&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9RZO2uVsa3iQNDeFeBy9NGeK30wns-8z9EeW1oL_ozdNNReUXDkrCC5fdU35AA7NKYOFrh\" target=\"_blank\" rel=\"noreferrer noopener\">Read Huyen&#8217;s full article here<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Graph neural nets, concept drift deep-dive, rethinking large language models, and more<\/p>\n","protected":false},"author":1,"featured_media":2304,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[10],"tags":[],"coauthors":[130],"class_list":["post-2513","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language Models - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language Models\" \/>\n<meta 
property=\"og:description\" content=\"Graph neural nets, concept drift deep-dive, rethinking large language models, and more\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2021-09-09T23:33:31+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"627\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Austin Kodra\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Austin Kodra\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language Models - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/","og_locale":"en_US","og_type":"article","og_title":"Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language Models","og_description":"Graph neural nets, concept drift deep-dive, rethinking large language models, and more","og_url":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2021-09-09T23:33:31+00:00","og_image":[{"width":1200,"height":627,"url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","type":"image\/jpeg"}],"author":"Austin Kodra","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Austin Kodra","Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/"},"author":{"name":"engineering@atre.net","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b"},"headline":"Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language Models","datePublished":"2021-09-09T23:33:31+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/"},"wordCount":723,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","articleSection":["Industry"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/","url":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/","name":"Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language Models - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","datePublished":"2021-09-09T23:33:31+00:00","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/#primaryimage","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2022\/06\/Comet-Newsletter-Ad-1200x627-02.jpg","width":1200,"height":627,"caption":"Comet Newsletter"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/the-comet-newsletter-issue-17-graph-neural-nets-concept-drift-deep-dive-rethinking-large-language-models-and-more\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Issue 17: Graph Neural Nets, Concept Drift Deep-dive, Rethinking Large Language 
Models"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, 
Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/550ac35e8e821db8064c5bd1f0a04e6b","name":"engineering@atre.net","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/027c18177377edf459980f0cfb83706c","url":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d002a459a297e0d1779329318029aee19868c312b3e1f3c9ec9b3e3add2740de?s=96&d=mm&r=g","caption":"engineering@atre.net"},"sameAs":["https:\/\/live-cometml.pantheonsite.io"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/engineeringatre-net\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/2513","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=2513"}],"version-history":[{"count":0,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/2513\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media\/2304"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=2513"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=2513"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=2513"},{"taxonomy":"
author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=2513"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}