{"id":19134,"date":"2026-02-17T21:38:40","date_gmt":"2026-02-17T21:38:40","guid":{"rendered":"https:\/\/www.comet.com\/site\/?p=19134"},"modified":"2026-02-17T21:38:40","modified_gmt":"2026-02-17T21:38:40","slug":"optimize-ai-ide-cost","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/","title":{"rendered":"Optimizing AI IDEs at Scale"},"content":{"rendered":"\n<p>Using AI development tools at scale comes with real overhead for every engineer. It\u2019s an additional cost layered on top of the team, and while the productivity gains usually make it worth it, the real question becomes how that spend maps to actual output.<\/p>\n\n\n\n<p>In other words: if we spend X more on AI development, do we actually get K\u00b7X more productivity back?<\/p>\n\n\n\n<p>Engineering productivity has always been hard to quantify at a high level. But you don\u2019t need a perfect metric to debug specific problems in a workflow. If AI spend is climbing while issues are getting solved at roughly the same rate and code output isn\u2019t really changing, it\u2019s fair to ask whether that extra spend is doing anything useful.<\/p>\n\n\n\n<p>We saw exactly this pattern over a three-month period. AI costs kept going up, but it didn\u2019t feel like we were moving faster.<\/p>\n\n\n\n<p>When we dug into the data, one thing stood out immediately: cache reads were by far our biggest expense. Normally, that\u2019s a good sign\u2014it means you\u2019re reusing context efficiently\u2014but the magnitude was a red flag.<\/p>\n\n\n\n<p>So we went a level deeper and reviewed the transcripts themselves. 
What we found were contexts packed with rules, <a href=\"https:\/\/www.comet.com\/site\/blog\/model-context-protocol\/\">Model Context Protocol<\/a> (MCP) configs, long conversation histories, and accumulated guidance that often had nothing to do with the task at hand anymore.<\/p>\n\n\n\n<p>More broadly, it became obvious that AI development configuration starts to accumulate its own kind of tech debt.<\/p>\n\n\n\n<p>Rules grow organically. Bits of intuition get encoded into prompts. A developer tells the IDE \u201cDon\u2019t make that mistake again,\u201d and now that instruction lives forever. A useful tool gets enabled and never turned off. None of this is wrong\u2014it\u2019s how people naturally work\u2014but over time, you end up paying (in latency, correctness, and tokens) for context that isn\u2019t directly contributing to the work you\u2019re trying to deliver.<\/p>\n\n\n\n<p>Like any good engineering team would, we decided to refactor it.<\/p>\n\n\n\n<p>The fixes were straightforward:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Standardize AI development configuration so it stays DRY across IDEs<\/li>\n\n\n\n<li>Make rule loading deterministic and minimal, moving heavier guidance into on-demand skills<\/li>\n\n\n\n<li>Tighten the development loop so context stays clean<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-agent-interoperability\">Agent Interoperability<\/h2>\n\n\n\n<p>One of the early mistakes it\u2019s easy to make with AI development tools is treating your IDE configuration as something tied to a specific product. It works fine at first until different engineers prefer different tools, or a new \u201cnext big thing\u201d comes along.<\/p>\n\n\n\n<p>In practice, engineers should be able to choose the AI tooling that fits their workflow without the team having to maintain parallel configuration stacks. 
And when better tools show up (which they will), switching shouldn\u2019t mean rewriting your entire AI development setup.<\/p>\n\n\n\n<p>To make that possible, we refactored our AI editor configuration so <code>.agents\/<\/code> is the single source of truth, then added automation to sync it into editor-specific formats:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>make cursor<\/code> to keep Cursor aligned<\/li>\n\n\n\n<li><code>make claude<\/code> to generate Claude Code\u2013friendly versions (including format conversions)<\/li>\n\n\n\n<li>Conversion scripts to handle frontmatter and MCP differences<\/li>\n\n\n\n<li>A single consolidated <code>.agents\/mcp.json<\/code> to keep tool configuration consistent across environments<\/li>\n<\/ul>\n\n\n\n<p>None of this is flashy, but it matters. Once you have multiple IDEs in play, \u201calmost the same rules in two places\u201d becomes a constant tax on the team. Centralizing configuration made it simple for us to stay up to date with the industry without fragmenting our workflow or duplicating effort every time a new tool gained traction.<\/p>\n\n\n\n<p>\ud83d\udc49 <a href=\"https:\/\/github.com\/comet-ml\/opik\/pull\/4981\">Link to Code<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-rules-to-skills\">Rules to Skills<\/h2>\n\n\n\n<p>Early on, most AI IDE setups start with a growing set of global rules. It makes sense: you discover things the model should always do (or never do), and you encode that knowledge so it doesn\u2019t have to be relearned every session. Over time, though, that global ruleset becomes the dumping ground for everything: hard invariants, workflow preferences, domain knowledge, and one-off fixes.<\/p>\n\n\n\n<p>Even with careful scoping via globs and file-level targeting, the global ruleset for a mature full-stack codebase still kept growing. 
Each addition was reasonable on its own, but once a rule made it into the always-on context, it almost never came back out.<\/p>\n\n\n\n<p>Once we picked up rules, we couldn\u2019t put them down.<\/p>\n\n\n\n<p>We refactored that model into a clearer separation:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Tiny always-on rules for true invariants only<\/li>\n\n\n\n<li>Skills that are loaded when relevant (backend, frontend, SDK, docs, local dev, E2E patterns)<\/li>\n\n\n\n<li>Purpose-built subagents for common work modes (planner, test runner, code reviewer, build fixer, and similar tasks)<\/li>\n<\/ul>\n\n\n\n<p>The measurable outcome was simple: the \u201calways-loaded\u201d ruleset essentially disappeared. What used to be a large, permanent block of context became a small set of hard constraints, while broader guidance moved into explicitly pullable skills and focused <a href=\"https:\/\/www.comet.com\/site\/blog\/ai-agents\/\">AI agents<\/a>.<\/p>\n\n\n\n<p>More importantly, agent behavior improved. With fewer always-available actions and less persistent instruction noise, tool selection became more reliable and outputs became tighter. In practice, smaller and more constrained context consistently outperformed broad, do-everything prompts.<\/p>\n\n\n\n<p>Fewer tools and targeted guidance turned out to be smarter than throwing everything at the model at once.<\/p>\n\n\n\n<p>\ud83d\udc49 <a href=\"https:\/\/github.com\/comet-ml\/opik\/pull\/4890\">Link to Code<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-changing-how-we-work\">Changing How We Work<\/h2>\n\n\n\n<p>The repo changes reduced baseline overhead. The workflow changes reduced <a href=\"https:\/\/www.comet.com\/site\/blog\/prompt-drift\/\">prompt drift<\/a>. 
We learned pretty quickly that if you keep working the same way, the agent will happily absorb whatever context you give it, and long sessions naturally turn into a junk drawer of half-relevant state.<\/p>\n\n\n\n<p>So we got more deliberate about the loop.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-rapid-loop-plan-execute-commit-compact\">Rapid Loop: Plan \u2192 Execute \u2192 Commit\/Compact<\/h3>\n\n\n\n<p>We switched to a cadence that\u2019s boring on purpose:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Plan a bounded chunk of work with concrete acceptance criteria<\/li>\n\n\n\n<li>Execute with minimal intervention<\/li>\n\n\n\n<li>Evaluate with tests when possible<\/li>\n\n\n\n<li>Commit + compact\/reset and move on<\/li>\n<\/ul>\n\n\n\n<p>The biggest shift wasn\u2019t in execution; it was in planning. We started iterating on the plan more than we iterated on the code produced from the plan. If the plan is vague, the agent will wander. If the plan is tight and scoped properly, execution becomes almost mechanical.<\/p>\n\n\n\n<p>A good plan should make follow-through obvious. When we got that right, the agent didn\u2019t need constant steering and the diffs got smaller and cleaner.<\/p>\n\n\n\n<p>Long, continuous sessions are where context poisoning shows up: stale intent, partially relevant rules, tool sprawl, and \u201chelpful\u201d history that keeps getting dragged forward even when it no longer applies. Compaction should be something you run intentionally. If you take one thing away from this workflow, it\u2019s commit and compact.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-tests-as-the-default-evaluation-language\">Tests as the Default Evaluation Language<\/h3>\n\n\n\n<p>We also pushed hard on <a href=\"https:\/\/www.comet.com\/site\/blog\/llm-evaluation-guide\/\">LLM evaluation<\/a>. Agents perform better when success is machine-checkable. 
While test-driven development can be controversial for humans, it turns out to be extremely effective for AI-assisted development, not as a design philosophy, but as a communication protocol.<\/p>\n\n\n\n<p>Writing tests up front gives the agent concrete acceptance criteria instead of vague intent. That one shift, treating tests as the primary evaluation mechanism, made outputs tighter, reduced back-and-forth, and took the human further out of the loop when intervention wasn\u2019t necessary.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-results-before-vs-after-the-refactor\">Results: Before vs. After the Refactor<\/h2>\n\n\n\n<p>We compared behavior before and after the configuration and workflow changes, focusing on output efficiency rather than raw spend. Output here is a proxy for delivered work\u2014normalizing by it lets us separate productivity from overhead.<\/p>\n\n\n\n<p>To understand what was actually driving the numbers, we paired usage data with direct analysis of agent transcripts in Opik, tracing cost patterns back to specific prompt structures, context buildup, and workflow behaviors.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"493\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide-cost-opik-1024x493.png\" alt=\"A screenshot within Opik showing the beginning process of refactoring by tracing cost patterns back to prompt structures and workflow behaviors.\" class=\"wp-image-19148\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide-cost-opik-1024x493.png 1024w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide-cost-opik-300x144.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide-cost-opik-768x370.png 768w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide-cost-opik-1536x739.png 1536w, 
https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide-cost-opik.png 1600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-each-unit-of-generated-work-became-cheaper\">Each Unit of Generated Work Became Cheaper<\/h3>\n\n\n\n<p>We normalized cost by output volume to isolate efficiency from usage. After the refactor, producing the same amount of useful work required materially fewer tokens.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"725\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-example-1024x725.png\" alt=\"A red and green graph showing the impact of refactoring with before and after views of the cost per 1M output tokens \" class=\"wp-image-19149\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-example-1024x725.png 1024w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-example-300x213.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-example-768x544.png 768w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-example.png 1032w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Median output cost dropped from <strong>$229 to $181 per million output tokens<\/strong> (roughly a <strong>21% decrease<\/strong>) while the overall distribution tightened as well.<\/p>\n\n\n\n<p>This is the core result: the same level of development output, with meaningfully less overhead per unit of work.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-less-context-more-work\">Less Context, More Work<\/h3>\n\n\n\n<p>Before the refactor, a large portion of the token budget went to replayed context. 
Rules, tool schemas, and accumulated history were carried forward on every request.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"725\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-results-1024x725.png\" alt=\"A red and green graph showing the impact of refactoring with before and after views of the context token value per output token.\" class=\"wp-image-19150\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-results-1024x725.png 1024w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-results-300x213.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-results-768x544.png 768w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimizing-ai-ide-cost-results.png 1032w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Afterward, a larger share went directly to output.<\/p>\n\n\n\n<p>That shift explains the efficiency gains. We didn\u2019t slow development or generate less work. We removed the prompt overhead that had been quietly compounding over time.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-takeaways\">Takeaways<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Smaller always-on context leads directly to less drift and lower cost per unit of work. The gains didn\u2019t come from using AI less\u2014they came from removing entropy from the system.<\/li>\n\n\n\n<li>Treat AI IDE configuration like real code. Centralize it, refactor it as it grows, and automate synchronization across tools to avoid drift and hidden overhead.<\/li>\n\n\n\n<li>Keep global rules brutally small. Move domain knowledge, workflows, and guidance into explicit on-demand skills instead of permanent context.<\/li>\n\n\n\n<li>Prefer purpose-built subagents over a single agent with every tool enabled. 
Fewer actions consistently produce more reliable behavior.<\/li>\n\n\n\n<li>Spend more time tightening the plan than iterating on the code that comes out of it. A good plan makes execution almost mechanical.<\/li>\n\n\n\n<li>Use tests as the default evaluation mechanism for agents. Machine-checkable success shortens loops and reduces unnecessary output.<\/li>\n\n\n\n<li>Run short, bounded cycles: plan, execute, evaluate, then commit and compact intentionally. Long-running sessions are where drift and context bloat appear.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Using AI development tools at scale comes with real overhead for every engineer. It\u2019s an additional cost layered on top of the team, and while the productivity gains usually make it worth it, the real question becomes how that spend maps to actual output. In other words: if we spend X more on AI development, [&hellip;]<\/p>\n","protected":false},"author":140,"featured_media":19154,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[65,9],"tags":[],"coauthors":[361],"class_list":["post-19134","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-llmops","category-product"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Optimizing AI IDEs at Scale<\/title>\n<meta name=\"description\" content=\"How we reduced AI development overhead by refactoring agent configuration, shrinking always-on context, and cut output cost per token by 21%.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" 
href=\"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Optimizing AI IDEs at Scale\" \/>\n<meta property=\"og:description\" content=\"How we reduced AI development overhead by refactoring agent configuration, shrinking always-on context, and cut output cost per token by 21%.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-17T21:38:40+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1280\" \/>\n\t<meta property=\"og:image:height\" content=\"719\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Collin Cunningham\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Collin Cunningham\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Optimizing AI IDEs at Scale","description":"How we reduced AI development overhead by refactoring agent configuration, shrinking always-on context, and cut output cost per token by 21%.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/","og_locale":"en_US","og_type":"article","og_title":"Optimizing AI IDEs at Scale","og_description":"How we reduced AI development overhead by refactoring agent configuration, shrinking always-on context, and cut output cost per token by 21%.","og_url":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2026-02-17T21:38:40+00:00","og_image":[{"width":1280,"height":719,"url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide.png","type":"image\/png"}],"author":"Collin Cunningham","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Collin Cunningham","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/"},"author":{"name":"Caroline Borders","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/8500e2f020e85676c245e00af46bae3c"},"headline":"Optimizing AI IDEs at Scale","datePublished":"2026-02-17T21:38:40+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/"},"wordCount":1526,"commentCount":0,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide.png","articleSection":["LLMOps","Product"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/","url":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/","name":"Optimizing AI IDEs at Scale","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide.png","datePublished":"2026-02-17T21:38:40+00:00","description":"How we reduced AI development overhead by refactoring agent configuration, shrinking always-on context, and cut output cost per token by 
21%.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#primaryimage","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/02\/optimize-ai-ide.png","width":1280,"height":719,"caption":"blue and black gradient title card with the text \"Optimizing AI IDEs at Scale\""},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/optimize-ai-ide-cost\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Optimizing AI IDEs at Scale"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, 
Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/8500e2f020e85676c245e00af46bae3c","name":"Caroline Borders","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/77bfb2d62bc772cc39672e46e3e8059f","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2024\/12\/cropped-1672334331755-2-96x96.jpeg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2024\/12\/cropped-1672334331755-2-96x96.jpeg","caption":"Caroline Borders"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/carolineb\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/19134","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/140"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=19134"}],"version-history":[{"count":3,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/19134\/revisions"}],"predecessor-version":[{"id":19153,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/19134\/revisions\/19153"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media\/19154"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=19134"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=19134"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post
=19134"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=19134"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}