{"id":19974,"date":"2026-05-15T20:37:31","date_gmt":"2026-05-15T20:37:31","guid":{"rendered":"https:\/\/www.comet.com\/site\/?p=19974"},"modified":"2026-05-15T20:37:32","modified_gmt":"2026-05-15T20:37:32","slug":"llm-cost-tracking-solution","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/","title":{"rendered":"LLM Cost Tracking Solution: How to Monitor and Control AI Spend in Agentic Systems"},"content":{"rendered":"\n<p>The first sign of trouble isn\u2019t always performance. Sometimes it\u2019s the invoice. Your team ships a new agent that routes requests, calls tools, runs retrieval, and orchestrates multiple LLM calls to deliver high-quality answers. It looks like a win until the first full-month bill hits, and your LLM spend has quietly tripled. Finance wants answers, engineering is digging through dashboards, but no one knows which agents, prompts, or customers are burning all those tokens. You need an LLM cost tracking solution that treats cost as an observability problem, not just a billing line item, so you can see where tokens go inside your app or agent, trace by trace and span by span.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-1024x576.png\" alt=\"A screenshot of the opik UI showing LLM cost tracking solution options within the platform\" class=\"wp-image-19989\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-1024x576.png 1024w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-300x169.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-768x432.png 768w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-1536x864.png 1536w, 
https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-2048x1152.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Our team designed Opik to help with this \u2014 because end-to-end <a href=\"https:\/\/www.comet.com\/site\/blog\/llm-observability\/\">LLM observability<\/a> needs to include visibility into model costs, and insights to help you optimize those costs. With LLM cost tracking included in the <a href=\"https:\/\/www.comet.com\/signup?from=llm\">free cloud<\/a> and <a href=\"https:\/\/github.com\/comet-ml\/opik\">open-source<\/a> versions of Opik, you can ship ambitious agentic systems without losing control of your budget.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-new-reality-of-llm-costs-in-agentic-systems\">The New Reality of LLM Costs in Agentic Systems<\/h2>\n\n\n\n<p>Early <a href=\"https:\/\/www.comet.com\/site\/blog\/prompt-engineering\/\">prompt engineering<\/a> was simple: one prompt in, one completion out. If you knew the model, token pricing, and approximate prompt length, you could estimate cost. Agentic systems broke that model. Now, a single user query can trigger multiple layers of computation, including retrieval across indexes, routing between models and tools, multi-step planning and tool-calling loops, as well as retries, fallbacks, and guardrail checks.<\/p>\n\n\n\n<p>The relationship between user requests and LLM calls is no longer linear. 
Two identical queries can produce very different execution paths \u2014 one with a few model calls, another with a complex workflow involving dozens.<\/p>\n\n\n\n<p>This complexity introduces clear failure modes:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Agents stuck in loops, repeatedly calling a planner model<\/li>\n\n\n\n<li>Routing prompts overusing expensive frontier models when cheaper ones would suffice<\/li>\n\n\n\n<li>Poorly managed <a href=\"https:\/\/www.comet.com\/site\/blog\/retrieval-augmented-generation\/\">RAG<\/a> pipelines resending full conversation history and long documents on every turn, increasing input tokens<\/li>\n<\/ul>\n\n\n\n<p>At the same time, expectations are rising. FinOps and leadership don\u2019t just want total LLM spend. They want per-feature, per-team, and per-customer visibility. How much does search cost versus summarization? Which tenants are unprofitable? How do experiments compare to their controls? That level of insight requires embedding cost into your system\u2019s telemetry.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-llm-token-billing-works\">How LLM Token Billing Works<\/h2>\n\n\n\n<p>Most LLM providers bill you for tokens, the small chunks of text that make up inputs (prompts, tool calls, context) and outputs (model responses). Each model has its own input and output token prices, which vary by vendor, and every token contributes to total cost.<\/p>\n\n\n\n<p>On paper, this is straightforward, but in practice, things get messy fast. 
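<\/p>\n\n\n\n<p>The on-paper version is worth pinning down first: with per-token prices, the cost of one call is just a weighted sum of input and output tokens. A minimal sketch, using illustrative placeholder prices rather than any vendor\u2019s real rates:<\/p>\n\n\n\n

```python
# Estimate the cost of one LLM call from its token counts.
# PRICING holds illustrative placeholder rates, not real vendor prices.
PRICING = {
    'frontier-model': {'input_per_1m': 3.00, 'output_per_1m': 15.00},
    'small-model': {'input_per_1m': 0.15, 'output_per_1m': 0.60},
}

def call_cost(model, input_tokens, output_tokens):
    price = PRICING[model]
    return (input_tokens * price['input_per_1m']
            + output_tokens * price['output_per_1m']) / 1_000_000

# 2,000 prompt tokens and 500 completion tokens on the pricier model:
print(round(call_cost('frontier-model', 2000, 500), 6))  # 0.0135
```

<p>Everything downstream depends on reliably capturing those three inputs (model, input tokens, output tokens) for every call.<\/p>\n\n\n\n<p>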
You may run multiple providers behind an abstraction, mix premium models for hard cases with cheaper ones for routine tasks, and use gateways or proxies that rewrite or enrich prompts before they reach the provider.<\/p>\n\n\n\n<p>Effective LLM cost tracking requires observing spend at two different levels:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Micro view (span level):<br><\/strong>For each LLM call and tool span, you want to know how many input and output tokens were used, which model was called, and what that translated to in cost. This is how you spot expensive prompts or misconfigured steps.<\/li>\n\n\n\n<li><strong>Macro view (trace and project level):<br><\/strong>For each end-to-end interaction \u2014 such as a conversation, job, or API request \u2014 you want the total number of tokens and the cost. Aggregate those over features, projects, or tenants to understand overall spend patterns.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-why-tracing-is-the-key-llm-cost-tracking-solution\">Why Tracing is the Key LLM Cost Tracking Solution<\/h2>\n\n\n\n<p><a href=\"https:\/\/www.comet.com\/site\/blog\/llm-tracing\/\">LLM tracing<\/a> captures a request\u2019s full execution path, from the initial user input through all agent decisions, tool invocations, and LLM spans, until the final response is returned. Each LLM call becomes a span in a hierarchical trace; each tool call and internal step is another span. 
Together, they form a structured timeline of your agent&#8217;s actions.<\/p>\n\n\n\n<p>Once each LLM span includes token counts and model identifiers, cost is easy to compute:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Span level:<\/strong> the cost of a single prompt and completion<\/li>\n\n\n\n<li><strong>Trace level:<\/strong> the total cost of an interaction (e.g., \u201cthis conversation cost 3.2 cents\u201d)<\/li>\n\n\n\n<li><strong>Project level:<\/strong> aggregated spend across traces (e.g., \u201cthis feature drove 30% of last week\u2019s cost\u201d)<\/li>\n<\/ul>\n\n\n\n<p>This enables a simple zoom-in\/zoom-out debugging workflow:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Start with a spike in daily spend for a project.<\/li>\n\n\n\n<li>Drill into the most expensive traces behind that spike.<\/li>\n\n\n\n<li>Within a trace, sort spans by cost and inspect the prompts or routes that dominate spend.<\/li>\n\n\n\n<li>Make targeted changes to prompts, routing, or model selection.<\/li>\n<\/ol>\n\n\n\n<p>Opik\u2019s LLM tracing feature set is built for this workflow. You can automatically log spans for LLM calls, tool calls, and more across your agentic footprint, then automatically estimate and display cost at the span, trace, and project levels in USD. That\u2019s what a robust LLM cost tracking solution looks like in practice.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-to-optimize-llm-costs-without-losing-quality\">How to Optimize LLM Costs Without Losing Quality<\/h2>\n\n\n\n<p>Cutting costs is easy if you don\u2019t care about quality. You can just downshift every model to the cheapest option. The hard part is finding the sweet spot where your system is \u201cgood enough\u201d for real users at a sustainable price point. 
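<\/p>\n\n\n\n<p>One way to make that sweet spot concrete is to score every candidate model on the same evaluation set, then take the cheapest one that clears your quality floor. A sketch with made-up experiment numbers (the scores, prices, and threshold below are illustrative, not benchmarks):<\/p>\n\n\n\n

```python
# Choose the cheapest model that still clears a quality floor.
# Scores, prices, and the floor are made-up illustrative values.
candidates = [
    {'model': 'frontier', 'avg_score': 0.91, 'cost_per_1k_requests': 14.00},
    {'model': 'mid-tier', 'avg_score': 0.88, 'cost_per_1k_requests': 3.50},
    {'model': 'small', 'avg_score': 0.71, 'cost_per_1k_requests': 0.80},
]

QUALITY_FLOOR = 0.85  # the evaluation score that 'good enough' maps to

good_enough = [c for c in candidates if c['avg_score'] >= QUALITY_FLOOR]
cheapest = min(good_enough, key=lambda c: c['cost_per_1k_requests'])
print(cheapest['model'])  # mid-tier
```

<p>The selection logic is trivial; the real work is producing trustworthy scores and costs to feed it, which is exactly what logged evaluation results and trace-level cost metadata give you.<\/p>\n\n\n\n<p>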
Opik\u2019s <a href=\"https:\/\/www.comet.com\/site\/blog\/llm-evaluation-guide\/\">LLM evaluation<\/a> capabilities can help achieve that balance.<\/p>\n\n\n\n<p>At Pattern, an AI-powered ecommerce accelerator, the AI Ops team used Opik to define <a href=\"https:\/\/www.comet.com\/site\/blog\/llm-evaluation-metrics-every-developer-should-know\/\">LLM evaluation metrics<\/a> grounded in what \u201cgood enough\u201d meant for their workflow. Their approach followed three steps:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Define datasets that reflect real traffic \u2014 representative inputs, expected outputs, and edge cases.<\/li>\n\n\n\n<li>Attach scoring rubrics like <a href=\"https:\/\/www.comet.com\/site\/blog\/llm-as-a-judge\/\">LLM-as-a-judge<\/a> evaluations, rule-based checks, or task-specific success criteria to quantify quality.<\/li>\n\n\n\n<li>Run experiments across models, prompts, or agent strategies, logging evaluation scores and trace-level metadata like token usage and cost to analyze quality and cost together.<\/li>\n<\/ul>\n\n\n\n<p>This process led Pattern to a mid-tier model that reduced projected annual spend by an estimated $60K without sacrificing the quality users depended on. Instead of guessing at the tradeoff, they could see it clearly in the data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-to-find-llm-cost-outliers-in-prompts-and-multi-turn-flows\">How to Find LLM Cost Outliers in Prompts and Multi-Turn Flows<\/h2>\n\n\n\n<p>A key challenge in LLM costs is that many of the worst offenders hide in plain sight:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prompts that gradually grow until they barely fit in the context window.<\/li>\n\n\n\n<li>Systems that resend long conversation histories on every call.<\/li>\n\n\n\n<li>\u201cHelpful\u201d summaries that re-explain the entire context on each agent step.<\/li>\n<\/ul>\n\n\n\n<p>In multi-turn agentic systems, these patterns quietly multiply token usage. 
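<\/p>\n\n\n\n<p>The multiplication is easy to see with back-of-the-envelope numbers: when every turn resends the full history, cumulative input tokens grow with the square of the turn count. A sketch with made-up message sizes:<\/p>\n\n\n\n

```python
# If every turn resends the whole conversation, cumulative input tokens
# grow quadratically with turn count. Message sizes are illustrative.
def cumulative_input_tokens(turns, tokens_per_message=200):
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_message  # the new user message joins the context
        total += history               # the entire history is sent as input
        history += tokens_per_message  # the model reply joins the history too
    return total

print(cumulative_input_tokens(5))   # 5000
print(cumulative_input_tokens(15))  # 45000: 3x the turns, 9x the tokens
```

<p>Summarizing or truncating history turns that quadratic curve back into something closer to linear.<\/p>\n\n\n\n<p>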
A slightly wordy prompt becomes expensive when repeated 15 times per session.<\/p>\n\n\n\n<p>Once each span includes token and cost metadata, these outliers are easy to spot. In Opik, you can sort spans by cost, filter by prompt template or tool, and quickly identify consistently expensive prompts. From there, you can refactor \u2014 shortening boilerplate, trimming redundant instructions, or moving information into tools or system configuration \u2014 while tracking the impact on both cost and quality.<\/p>\n\n\n\n<p>The same principle applies at the trace level. By running multi-turn evaluation \u2014 feeding test conversations through your full agent stack and logging them in Opik \u2014 you can measure tokens and spans per end-to-end interaction. This is where patterns emerge:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Agent loops where the planner keeps calling itself.<\/li>\n\n\n\n<li>Redundant sub-agent calls that rarely change the outcome.<\/li>\n\n\n\n<li>Tool invocations that add cost and latency without improving results.<\/li>\n<\/ul>\n\n\n\n<p>Once visible, these outliers are easier to address. For example, if certain queries trigger long back-and-forth interactions, the agent can ask a single clarifying question upfront, collapsing multiple uncertain steps into one. The result is fewer tokens, lower latency, and a better user experience.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-llm-cost-tracking-solution-for-every-workflow\">The LLM Cost Tracking Solution for Every Workflow<\/h2>\n\n\n\n<p>With tracing configured, cost tracking becomes easier to manage. For all major providers and models, Opik uses token counts, model identifiers, and pricing tables to estimate cost and attach it to each LLM span automatically. Span-level costs roll up to the trace, showing what each conversation, job, or pipeline run actually costs. Trace costs then roll up to the project, giving you total spend per workflow and over time in the Opik dashboard. 
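<\/p>\n\n\n\n<p>The rollup itself is plain aggregation. A minimal sketch of the idea, using hypothetical span records rather than Opik\u2019s actual schema:<\/p>\n\n\n\n

```python
# Sum span-level costs into trace totals, then trace totals into a
# project total. Record shapes are hypothetical, not Opik's schema.
from collections import defaultdict

spans = [
    {'trace_id': 't1', 'name': 'planner', 'cost_usd': 0.012},
    {'trace_id': 't1', 'name': 'retrieval', 'cost_usd': 0.001},
    {'trace_id': 't1', 'name': 'answer', 'cost_usd': 0.019},
    {'trace_id': 't2', 'name': 'answer', 'cost_usd': 0.004},
]

trace_cost = defaultdict(float)
for span in spans:
    trace_cost[span['trace_id']] += span['cost_usd']

project_cost = sum(trace_cost.values())
print({t: round(c, 6) for t, c in trace_cost.items()})  # {'t1': 0.032, 't2': 0.004}
print(round(project_cost, 6))  # 0.036
```

<p>Keying the same aggregation by feature, tenant, or customer metadata is what turns raw traces into FinOps-ready reporting.<\/p>\n\n\n\n<p>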
You don\u2019t need to maintain a custom cost calculator in every service.<\/p>\n\n\n\n<p>Real-world systems are rarely that simple, though. Pricing often varies across environments and use cases. You may have:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enterprise-negotiated pricing for certain tenants.<\/li>\n\n\n\n<li>Self-hosted models with hardware-based cost structures.<\/li>\n\n\n\n<li>Gateways that abstract multiple backends away from application code.<\/li>\n<\/ul>\n\n\n\n<p>Opik supports custom and manual pricing, so you can pass model metadata or override span costs when needed. This keeps your cost tracking consistent even when pricing is unique or proprietary. And because all cost-enriched traces are accessible via APIs, engineering and FinOps teams can export this data into internal dashboards, chargeback systems, or broader observability stacks.<\/p>\n\n\n\n<p>In other words, Opik can be your LLM cost tracking solution, while still playing nicely with the rest of your infrastructure.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-get-started-with-opik-llm-cost-tracking\">Get Started with Opik LLM Cost Tracking<\/h2>\n\n\n\n<p>The good news is that you don\u2019t need a full rewrite to start. You can begin with a single, suspiciously expensive workflow and expand from there.<\/p>\n\n\n\n<p>A simple rollout looks like this:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Instrument a few key entry points.<br><\/strong>Integrate Opik in just a few minutes following the quickstart guide, configure a workspace, and wrap the LLM calls or agent entry points for a single feature. You\u2019ll quickly start seeing traces with token and cost data in the Opik UI.<\/li>\n\n\n\n<li><strong>Compare trace-level insights to your invoice.<br><\/strong>Look at a recent billing period, then drill into the traces from that window. Which features and customers drive the most spend? 
Which traces seem expensive relative to the value they deliver?<\/li>\n\n\n\n<li><strong>Iterate on prompts, routing, and models.<br><\/strong>Use Opik\u2019s evaluation tooling to define what \u201cgood enough\u201d means, then run experiments to tighten prompts, swap models, or restructure flows, always measuring cost and quality together.<\/li>\n<\/ul>\n\n\n\n<p>Because Opik is open source and offers a generous free plan, you can adopt LLM cost tracking without committing to heavy upfront infrastructure costs. You get the visibility you need today, with the flexibility to extend or self-host tomorrow.<\/p>\n\n\n\n<p>If your team is already feeling the pain of opaque LLM bills, or if agentic systems are on your roadmap, now is the time to treat cost as a first-class design parameter. Start instrumenting one workflow this week, plug it into Opik, and turn your LLM bill from an unwelcome surprise into something you can inspect, understand, and actively control.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The first sign of trouble isn\u2019t always performance. Sometimes it\u2019s the invoice. Your team ships a new agent that routes requests, calls tools, runs retrieval, and orchestrates multiple LLM calls to deliver high-quality answers. It looks like a win until the first full-month bill hits, and your LLM spend has quietly tripled. 
Finance wants answers, [&hellip;]<\/p>\n","protected":false},"author":140,"featured_media":19989,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[65],"tags":[],"coauthors":[359],"class_list":["post-19974","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-llmops"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>LLM Cost Tracking Solution: How to Monitor AI Spend<\/title>\n<meta name=\"description\" content=\"Learn how an LLM cost tracking solution helps monitor token usage, trace AI spend across agentic systems, and optimize costs without sacrificing quality.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"LLM Cost Tracking Solution: How to Monitor and Control AI Spend in Agentic Systems\" \/>\n<meta property=\"og:description\" content=\"Learn how an LLM cost tracking solution helps monitor token usage, trace AI spend across agentic systems, and optimize costs without sacrificing quality.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2026-05-15T20:37:31+00:00\" \/>\n<meta 
property=\"article:modified_time\" content=\"2026-05-15T20:37:32+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-1024x576.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"576\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Dr. Cayla Eagon\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Dr. Cayla Eagon\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"LLM Cost Tracking Solution: How to Monitor AI Spend","description":"Learn how an LLM cost tracking solution helps monitor token usage, trace AI spend across agentic systems, and optimize costs without sacrificing quality.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/","og_locale":"en_US","og_type":"article","og_title":"LLM Cost Tracking Solution: How to Monitor and Control AI Spend in Agentic Systems","og_description":"Learn how an LLM cost tracking solution helps monitor token usage, trace AI spend across agentic systems, and optimize costs without sacrificing 
quality.","og_url":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2026-05-15T20:37:31+00:00","article_modified_time":"2026-05-15T20:37:32+00:00","og_image":[{"width":1024,"height":576,"url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-1024x576.png","type":"image\/png"}],"author":"Dr. Cayla Eagon","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Dr. Cayla Eagon","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/"},"author":{"name":"Caroline Borders","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/8500e2f020e85676c245e00af46bae3c"},"headline":"LLM Cost Tracking Solution: How to Monitor and Control AI Spend in Agentic Systems","datePublished":"2026-05-15T20:37:31+00:00","dateModified":"2026-05-15T20:37:32+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/"},"wordCount":1711,"commentCount":0,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-scaled.png","articleSection":["LLMOps"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/","url":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/","name":"LLM Cost 
Tracking Solution: How to Monitor AI Spend","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-scaled.png","datePublished":"2026-05-15T20:37:31+00:00","dateModified":"2026-05-15T20:37:32+00:00","description":"Learn how an LLM cost tracking solution helps monitor token usage, trace AI spend across agentic systems, and optimize costs without sacrificing quality.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#primaryimage","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-scaled.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-scaled.png","width":2560,"height":1440,"caption":"A screenshot of the opik UI showing LLM cost tracking solution options within the platform"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/llm-cost-tracking-solution\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"LLM Cost Tracking Solution: How to Monitor and Control AI Spend in Agentic Systems"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models 
Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/8500e2f020e85676c245e00af46bae3c","name":"Caroline Borders","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/77bfb2d62bc772cc39672e46e3e8059f","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2024\/12\/cropped-1672334331755-2-96x96.jpeg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2024\/12\/cropped-1672334331755-2-96x96.jpeg","caption":"Caroline 
Borders"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/carolineb\/"}]}},"jetpack_featured_media_url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2026\/05\/LLM-Cost-Tracking-solution-scaled.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/19974","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/140"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=19974"}],"version-history":[{"count":3,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/19974\/revisions"}],"predecessor-version":[{"id":19993,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/19974\/revisions\/19993"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media\/19989"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=19974"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=19974"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=19974"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=19974"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}