{"id":7268,"date":"2023-08-21T09:39:31","date_gmt":"2023-08-21T17:39:31","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=7268"},"modified":"2025-04-24T17:14:35","modified_gmt":"2025-04-24T17:14:35","slug":"why-deep-learning-underperforms-with-tabular-data","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/","title":{"rendered":"Why Deep Learning Underperforms with Tabular Data"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\">\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<figure class=\"mi mj mk ml mm mn mf mg paragraph-image\">\n<div class=\"mo mp eb mq bg mr\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg ms mt c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*7d66HCDf-bBImW19\" alt=\"\" width=\"700\" height=\"525\"><\/figure><div class=\"mf mg mh\"><picture><\/picture><\/div>\n<\/div><figcaption class=\"mu mv mw mf mg mx my be b bf z dv\" data-selectable-paragraph=\"\">Photo by <a class=\"af mz\" href=\"https:\/\/unsplash.com\/@fabioha?utm_source=medium&amp;utm_medium=referral\" target=\"_blank\" rel=\"noopener ugc nofollow\">fabio<\/a> on <a class=\"af mz\" href=\"https:\/\/unsplash.com\/?utm_source=medium&amp;utm_medium=referral\" target=\"_blank\" rel=\"noopener ugc nofollow\">Unsplash<\/a><\/figcaption><\/figure>\n<p id=\"5d12\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">Deep learning and tabular data infamously have a difficult relationship with each other. 
Over the years, research has produced tremendous performance gains for deep learning across many domains, leading many to assume that the same gains could be replicated in <a class=\"af mz\" href=\"https:\/\/medium.com\/cometheartbeat\/using-rename-and-replace-in-python-with-image-data-33ef329c41d6\" rel=\"noopener\">tabular data<\/a>.<\/p>\n<p id=\"808b\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">So far, a multitude of results have arisen through painstaking research; some are encouraging, and some raise serious questions about the feasibility of using deep learning in this way. I will explore both perspectives.<\/p>\n<h1 id=\"79a0\" class=\"nv nw fo be nx ny nz go oa ob oc gr od oe of og oh oi oj ok ol om on oo op oq bj\" data-selectable-paragraph=\"\">Deep Learning Use Cases<\/h1>\n<p id=\"a86f\" class=\"pw-post-body-paragraph na nb fo be b gm or nd ne gp os ng nh ni ot nk nl nm ou no np nq ov ns nt nu fh bj\" data-selectable-paragraph=\"\">Deep learning relies on artificial neural networks to perform a specified function. 
These functions range from producing predictions for classification problems to more complex tasks such as colorizing images and detecting fraud.<\/p>\n<p id=\"b0d8\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\"><a class=\"af mz\" href=\"https:\/\/arxiv.org\/abs\/2110.01889\" target=\"_blank\" rel=\"noopener ugc nofollow\">Borisov et al.<\/a> state that \u201cdeep learning methods perform outstandingly well for classification tasks or data generation tasks on homogeneous data (Borisov et al., 2021).\u201d Homogeneous data falls into a few categories, such as images, audio, and text.<\/p>\n<p id=\"43cd\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">Tabular data, on the other hand, is heterogeneous: it typically mixes many numerical and categorical data types within a single table.<\/p>\n<p id=\"8fef\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">These characteristics have largely constrained deep learning, and they have drawn the interest of researchers such as Borisov, who suggest that its performance on tabular data could still be improved. This raises the question: which characteristics, exactly, cause the weak performance of deep learning on tabular data?<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<blockquote class=\"pe\"><p id=\"19a7\" class=\"pf pg fo be ph pi pj pk pl pm pn nu dv\" data-selectable-paragraph=\"\">Centralizing knowledge means being able to reproduce, extrapolate, and tailor experiments. 
<a class=\"af mz\" href=\"https:\/\/www.youtube.com\/watch?v=tIgya4PaCWM&amp;list=PLX9GmL8cVn_yout9BRYNj43XJco3gsZ3r&amp;index=10\" target=\"_blank\" rel=\"noopener ugc nofollow\">Learn how large scale companies like Uber share internal knowledge.<\/a><\/p><\/blockquote>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<h1 id=\"278f\" class=\"nv nw fo be nx ny po go oa ob pp gr od oe pq og oh oi pr ok ol om ps oo op oq bj\" data-selectable-paragraph=\"\">Tabular Data Characteristics<\/h1>\n<p id=\"13ce\" class=\"pw-post-body-paragraph na nb fo be b gm or nd ne gp os ng nh ni ot nk nl nm ou no np nq ov ns nt nu fh bj\" data-selectable-paragraph=\"\">The homogeneity of data is not something that someone immediately thinks about when tackling tabular data. It unavoidably presents itself as a daunting task when trying to improve the performance of deep learning algorithms.<\/p>\n<p id=\"b246\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">For instance, different features in tables have largely different statistical properties. Some features correlate strongly with others and have a strong influence on what the final outcome may be while others have minimal impact.<\/p>\n<p id=\"2bd0\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">Additionally, this correlation is also weaker than any of the correlations prevalent in spatial or semantic relationships (image and audio) according to Borisov et al. 
One is then left having to rely on other methods to build intuition for these models.<\/p>\n<h1 id=\"8a96\" class=\"nv nw fo be nx ny nz go oa ob oc gr od oe of og oh oi oj ok ol om on oo op oq bj\" data-selectable-paragraph=\"\">Overcoming these Problems<\/h1>\n<p id=\"ec64\" class=\"pw-post-body-paragraph na nb fo be b gm or nd ne gp os ng nh ni ot nk nl nm ou no np nq ov ns nt nu fh bj\" data-selectable-paragraph=\"\">Attempts to overcome these problems have led to interesting lines of research that have further magnified a problem with deep learning models: a shortage of interpretability. Neural networks with many hidden layers can make it more difficult to know which alterations lead to changes in performance.<\/p>\n<p id=\"5353\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">A few research papers have presented promising results showing that it is possible to improve the performance of deep learning on tabular data. The same research also shows that it is possible to surpass gradient-boosting models on tabular data.<\/p>\n<p id=\"4f8f\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">The proposed methods heavily lean on one part of machine learning but move in different directions to achieve better performance. Regularization, according to both <a class=\"af mz\" href=\"https:\/\/arxiv.org\/abs\/2106.11189v1?utm_source=jesper&amp;utm_medium=email&amp;utm_campaign=coming-up-with-weather-puns-is-a-breeze\" target=\"_blank\" rel=\"noopener ugc nofollow\">Kadra et al.<\/a> 
and <a class=\"af mz\" href=\"https:\/\/proceedings.neurips.cc\/paper\/2018\/file\/500e75a036dc2d7d2fec5da1b71d36cc-Paper.pdf\" target=\"_blank\" rel=\"noopener ugc nofollow\">Shavit et al.<\/a>, is where the new focus of improving deep learning for tabular datasets should occur.<\/p>\n<h1 id=\"c42c\" class=\"nv nw fo be nx ny nz go oa ob oc gr od oe of og oh oi oj ok ol om on oo op oq bj\" data-selectable-paragraph=\"\">Regularization: The Magical Bullet<\/h1>\n<p id=\"4840\" class=\"pw-post-body-paragraph na nb fo be b gm or nd ne gp os ng nh ni ot nk nl nm ou no np nq ov ns nt nu fh bj\" data-selectable-paragraph=\"\">In the papers I mentioned above, there is a general consensus that some form of regularization is necessary in order to get the spectacular results that we are looking for.<\/p>\n<p id=\"3e91\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">For instance, Kadra et al.\u2019s paper demonstrates that it is possible for even the simplest neural networks to outperform traditional state-of-the-art gradient boosting models such as XGBoost.<\/p>\n<p id=\"0056\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">They suggested that using regularization techniques tailored for raw data makes deep learning perform much better on tabular data. 
They go further, proposing \u201cregularization cocktails.\u201d According to the paper\u2019s authors, \u201cthe optimal regularizer is a cocktail mixture of a large set of regularization methods, all being simultaneously applied with different strengths (Kadra et al., 2021).\u201d<\/p>\n<p id=\"5c04\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">The problem with this technique is that the suggested hyperparameters cannot be reused broadly across many different datasets. Kadra et al. admit that these hyperparameters would need to be \u201cdataset-specific.\u201d One would have to tailor a different cocktail to each individual dataset to squeeze performance out of it.<\/p>\n<p id=\"5c12\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">Shavitt and Segal\u2019s <a class=\"af mz\" href=\"https:\/\/arxiv.org\/abs\/1805.06440\" target=\"_blank\" rel=\"noopener ugc nofollow\">paper<\/a>, on the other hand, argues that the introduction of a loss function known as the \u201cCounterfactual Loss\u201d could drastically improve hyperparameter tuning. 
They name the neural networks that use this particular method of regularization \u201cRegularization Learning Networks.\u201d<\/p>\n<p id=\"d73b\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">They state that a Regularization Learning Network would \u201cuse the Counterfactual Loss to tune its regularization hyperparameters efficiently during learning together with the learning of the weights of the network (Shavitt &amp; Segal, 2018).\u201d Additionally, these RLNs performed best when ensembled with gradient-boosting algorithms.<\/p>\n<p id=\"65a3\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">The implementation of the above can be found on <a class=\"af mz\" href=\"https:\/\/github.com\/irashavitt\/regularization_learning_networks\" target=\"_blank\" rel=\"noopener ugc nofollow\">GitHub<\/a>.<\/p>\n<p id=\"834e\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">Both methods yield promising results that could be developed further to make deep learning better at predicting tabular data.<\/p>\n<h1 id=\"f765\" class=\"nv nw fo be nx ny nz go oa ob oc gr od oe of og oh oi oj ok ol om on oo op oq bj\" data-selectable-paragraph=\"\">Conclusion<\/h1>\n<p id=\"3fbe\" class=\"pw-post-body-paragraph na nb fo be b gm or nd ne gp os ng nh ni ot nk nl nm ou no np nq ov ns nt nu fh bj\" data-selectable-paragraph=\"\">We still have to assess whether the struggle to develop better techniques for deep learning on tabular data is worthwhile. 
Is it more productive to develop the already promising gradient boosting models such as XGBoost or should more effort be devoted to deep learning?<\/p>\n<p id=\"dca8\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">More research into these questions will yield the answers we want.<\/p>\n<p id=\"bf18\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\"><strong class=\"be pt\">Sources:<\/strong><\/p>\n<p id=\"0593\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">[1] Borisov, V., Leemann, T., Se\u00dfler, K., Haug, J., Pawelczyk, M., &amp; Kasneci, G. (2021). Deep neural networks and tabular data: A survey. <em class=\"pu\">arXiv preprint arXiv:2110.01889<\/em>.<\/p>\n<p id=\"e1ef\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">[2] Kadra, A., Lindauer, M., Hutter, F., &amp; Grabocka, J. (2021). Regularization is all you need: Simple neural nets can excel on tabular data. <em class=\"pu\">arXiv preprint arXiv:2106.11189<\/em>.<\/p>\n<p id=\"9a59\" class=\"pw-post-body-paragraph na nb fo be b gm nc nd ne gp nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu fh bj\" data-selectable-paragraph=\"\">[3] Shavitt, I., &amp; Segal, E. (2018). Regularization learning networks: deep learning for tabular datasets. <em class=\"pu\">Advances in Neural Information Processing Systems<\/em>, <em class=\"pu\">31<\/em>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Photo by fabio on Unsplash Deep learning and tabular data infamously have a difficult relationship with each other. 
Research has, over the years, led to tremendous spikes in performance for deep learning in various different domains and this has led many to logically assume that the same gains could be replicated in tabular data. So [&hellip;]<\/p>\n","protected":false},"author":79,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[6],"tags":[],"coauthors":[176],"class_list":["post-7268","post","type-post","status-publish","format-standard","hentry","category-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why Deep Learning Underperforms with Tabular Data - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why Deep Learning Underperforms with Tabular Data\" \/>\n<meta property=\"og:description\" content=\"Photo by fabio on Unsplash Deep learning and tabular data infamously have a difficult relationship with each other. Research has, over the years, led to tremendous spikes in performance for deep learning in various different domains and this has led many to logically assume that the same gains could be replicated in tabular data. 
So [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-21T17:39:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:14:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*7d66HCDf-bBImW19\" \/>\n<meta name=\"author\" content=\"Mwanikii Njagi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Mwanikii Njagi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Why Deep Learning Underperforms with Tabular Data - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/","og_locale":"en_US","og_type":"article","og_title":"Why Deep Learning Underperforms with Tabular Data","og_description":"Photo by fabio on Unsplash Deep learning and tabular data infamously have a difficult relationship with each other. Research has, over the years, led to tremendous spikes in performance for deep learning in various different domains and this has led many to logically assume that the same gains could be replicated in tabular data. 
So [&hellip;]","og_url":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-08-21T17:39:31+00:00","article_modified_time":"2025-04-24T17:14:35+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*7d66HCDf-bBImW19","type":"","width":"","height":""}],"author":"Mwanikii Njagi","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Mwanikii Njagi","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/"},"author":{"name":"Mwanikii Njagi","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/c7043b3e6b992af7b3220aa1f27d2162"},"headline":"Why Deep Learning Underperforms with Tabular Data","datePublished":"2023-08-21T17:39:31+00:00","dateModified":"2025-04-24T17:14:35+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/"},"wordCount":953,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*7d66HCDf-bBImW19","articleSection":["Machine Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/","url":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/","name":"Why Deep Learning Underperforms with Tabular Data - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*7d66HCDf-bBImW19","datePublished":"2023-08-21T17:39:31+00:00","dateModified":"2025-04-24T17:14:35+00:00","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/#primaryimage","url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*7d66HCDf-bBImW19","contentUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*7d66HCDf-bBImW19"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/why-deep-learning-underperforms-with-tabular-data\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Why Deep Learning Underperforms with Tabular Data"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models 
Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/c7043b3e6b992af7b3220aa1f27d2162","name":"Mwanikii Njagi","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/1a3c516cf04aca9418dfb2213081f4df","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/cropped-1_2jy9gyk0G_yaniWm8gJFVA-1-96x96.webp","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/cropped-1_2jy9gyk0G_yaniWm8gJFVA-1-96x96.webp","caption":"Mwanikii 
Njagi"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/freddynjagigmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7268","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/79"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=7268"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7268\/revisions"}],"predecessor-version":[{"id":15571,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7268\/revisions\/15571"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=7268"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=7268"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=7268"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=7268"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}