{"id":8246,"date":"2023-11-29T09:25:29","date_gmt":"2023-11-29T17:25:29","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8246"},"modified":"2025-04-24T17:04:19","modified_gmt":"2025-04-24T17:04:19","slug":"using-xgboost-for-deep-learning","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning\/","title":{"rendered":"Using XGBoost for Deep Learning"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning\">\n\n\n\n<div class=\"fk fl fm fn fo\">\n<div class=\"ab ca\">\n<div class=\"ch bg ew ex ey ez\">\n<figure class=\"mr ms mt mu mv mw mo mp paragraph-image\">\n<div class=\"mx my ee mz bg na\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg lw nb c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*AOtq4ZS_nATb4CfH\" alt=\"\" width=\"700\" height=\"467\"><\/figure><div class=\"mo mp mq\"><picture><\/picture><\/div>\n<\/div><figcaption class=\"nc nd ne mo mp nf ng be b bf z dw\" data-selectable-paragraph=\"\">Photo by <a class=\"af nh\" href=\"https:\/\/unsplash.com\/es\/@sharonmccutcheon?utm_source=medium&amp;utm_medium=referral\" target=\"_blank\" rel=\"noopener ugc nofollow\">Alexander Grey<\/a> on <a class=\"af nh\" href=\"https:\/\/unsplash.com\/?utm_source=medium&amp;utm_medium=referral\" target=\"_blank\" rel=\"noopener ugc nofollow\">Unsplash<\/a><\/figcaption><\/figure>\n<p id=\"32f1\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">XGBoost is a powerful library that performs gradient boosting. It has an excellent reputation as a tool for predicting many kinds of problems in data science and machine learning. 
It is also used in regression and classification problems as it is highly intuitive and easily interpretable.<\/p>\n<p id=\"29d9\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">One good indicator of XGBoost&#8217;s power is the number of winning Kaggle solutions that utilize this model. For many years, gradient-boosting models and deep-learning solutions have won the lion&#8217;s share of Kaggle competitions.<\/p>\n<p id=\"dd64\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">XGBoost can quickly take a project from start to finish with minimal initial feature engineering.<\/p>\n<p id=\"d718\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">One robust use case for XGBoost is integrating it with neural networks to perform a given task. 
XGBoost is not limited to traditional machine learning tasks; its power can also be harnessed in combination with deep learning algorithms.<\/p>\n<p id=\"42f7\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">We can now focus on how to approach XGBoost from a deep learning standpoint and on the technical details of building the architecture we aim to use.<\/p>\n<p id=\"8f4a\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">First, to implement this, we need to be familiar with the libraries that provide our models:<\/p>\n<ol class=\"\">\n<li id=\"948f\" class=\"ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc od oe of bj\" data-selectable-paragraph=\"\">XGBoost<\/li>\n<li id=\"14d7\" class=\"ni nj fr be b gp og nl nm gs oh no np nq oi ns nt nu oj nw nx ny ok oa ob oc od oe of bj\" data-selectable-paragraph=\"\">TensorFlow and Keras<\/li>\n<\/ol>\n<p id=\"8a31\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">Installation instructions for the above libraries are available on their official sites. For the neural network portion, either TensorFlow or PyTorch can be used.<\/p>\n<h1 id=\"fff6\" class=\"ol om fr be on oo op gr oq or os gu ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">ConvXGB Introduction<\/h1>\n<p id=\"adab\" class=\"pw-post-body-paragraph ni nj fr be b gp ph nl nm gs pi no np nq pj ns nt nu pk nw nx ny pl oa ob oc fk bj\" data-selectable-paragraph=\"\">ConvXGB is a model that incorporates both Convolutional Neural Networks and XGBoost. 
It was envisioned by Thongsuwan et al., who saw inherent advantages in the combined architecture and highlighted a few shortcomings of using each model individually. Their reasoning can be summed up in two points:<\/p>\n<ol class=\"\">\n<li id=\"2371\" class=\"ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc od oe of bj\" data-selectable-paragraph=\"\">Using XGBoost is &#8220;still unclear for feature learning [1].&#8221;<\/li>\n<li id=\"3bb1\" class=\"ni nj fr be b gp og nl nm gs oh no np nq oi ns nt nu oj nw nx ny ok oa ob oc od oe of bj\" data-selectable-paragraph=\"\">Integrating Convolutional Neural Networks provides the added benefit of better feature learning, eliminating this shortcoming of XGBoost.<\/li>\n<\/ol>\n<p id=\"31d5\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">The process above is standard, as practitioners often combine models in different ways for better performance. ConvXGB explores this but with a few interesting twists that the researchers introduced. The Convolutional Neural Network&#8217;s architecture differs slightly from the usual design.<\/p>\n<p id=\"79e6\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">We can now explore ConvXGB&#8217;s architecture and look closely at this model&#8217;s details.<\/p>\n<h1 id=\"b3e7\" class=\"ol om fr be on oo op gr oq or os gu ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Architecture<\/h1>\n<p id=\"356f\" class=\"pw-post-body-paragraph ni nj fr be b gp ph nl nm gs pi no np nq pj ns nt nu pk nw nx ny pl oa ob oc fk bj\" data-selectable-paragraph=\"\">This model&#8217;s architecture is categorized into two sections: the feature learner and the class predictor. 
Both sections are further subdivided into the critical steps needed to implement this model successfully.<\/p>\n<p id=\"9fe2\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">The feature learning section has an input section and a data preprocessing section. The idea behind this is that it&#8217;s necessary to have data in the appropriate form to be ingested by the convolutional neural network. For data that is not in a tuple or Numpy array form, the data preprocessing layer is responsible for performing this conversion. If you aren&#8217;t familiar with this step, check out my<a class=\"af nh\" href=\"https:\/\/medium.com\/cometheartbeat\/understanding-memory-mapping-in-numpy-for-deep-learning-pt-1-2bfa8319f79b\" rel=\"noopener\"> articles<\/a> discussing Numpy arrays and the constraints that you may experience dealing with them.<\/p>\n<p id=\"b377\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">The convolutional neural network has some exciting features that are not typical of the average CNN. Thongsuwan et al. hypothesized that reducing the number of parameters would simplify the model. This is achieved by not including fully connected or pooling layers, because &#8220;it is not necessary to bring weights from the FC layers back to re-adjust weights in the previous layers [1].&#8221;<\/p>\n<p id=\"4e60\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">Now, the data can be fed into the predicting section of the model. 
This section consists of three parts: the reshaping layer, the class prediction layer, and the output layer.<\/p>\n<h1 id=\"eade\" class=\"ol om fr be on oo op gr oq or os gu ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Benefits and Drawbacks<\/h1>\n<p id=\"1d64\" class=\"pw-post-body-paragraph ni nj fr be b gp ph nl nm gs pi no np nq pj ns nt nu pk nw nx ny pl oa ob oc fk bj\" data-selectable-paragraph=\"\">This architecture has several benefits, as the paper&#8217;s authors demonstrated, but it also has shortcomings.<\/p>\n<p id=\"4ee3\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">One of the most impressive benefits of this architecture is how well it performs against comparable algorithms. In many of the paper&#8217;s benchmarks, the ConvXGB architecture outperformed similar models.<\/p>\n<p id=\"4046\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">Despite the excellent performance, there were a few drawbacks. For instance, one has to get the number of convolution layers right, as adding too many may increase the architecture&#8217;s computational cost. The authors admit that this cost might &#8220;outweigh any advantage, and we must balance carefully any increase in accuracy with the cost incurred [1].&#8221;<\/p>\n<h1 id=\"e77a\" class=\"ol om fr be on oo op gr oq or os gu ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Conclusion<\/h1>\n<p id=\"f607\" class=\"pw-post-body-paragraph ni nj fr be b gp ph nl nm gs pi no np nq pj ns nt nu pk nw nx ny pl oa ob oc fk bj\" data-selectable-paragraph=\"\">ConvXGB is a powerful model that combines the strengths of XGBoost with deep learning. 
Its simplicity and ease of implementation allow for straightforward inference and strong performance when implemented correctly.<\/p>\n<p id=\"559b\" class=\"pw-post-body-paragraph ni nj fr be b gp nk nl nm gs nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc fk bj\" data-selectable-paragraph=\"\">In our next article, we will try implementing the model.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fk fl fm fn fo\">\n<div class=\"ab ca\">\n<div class=\"ch bg ew ex ey ez\">\n<h1 id=\"dc5b\" class=\"ol om fr be on oo pu gr oq or pv gu ot ou pw ow ox oy px pa pb pc py pe pf pg bj\" data-selectable-paragraph=\"\">Sources:<\/h1>\n<p id=\"4b2e\" class=\"pw-post-body-paragraph ni nj fr be b gp ph nl nm gs pi no np nq pj ns nt nu pk nw nx ny pl oa ob oc fk bj\" data-selectable-paragraph=\"\">[1] Thongsuwan, Setthanun, Saichon Jaiyen, Anantachai Padcharoen, and Praveen Agarwal. &#8220;ConvXGB: A new deep learning model for classification problems based on CNN and XGBoost.&#8221; <em class=\"pz\">Nuclear Engineering and Technology<\/em> 53, no. 2 (2021): 522\u2013531.<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Photo by Alexander Grey on Unsplash XGBoost is a powerful library that performs gradient boosting. It has an excellent reputation as a tool for predicting many kinds of problems in data science and machine learning. It is also used in regression and classification problems as it is highly intuitive and easily interpretable. 
One good indicator [&hellip;]<\/p>\n","protected":false},"author":79,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[6],"tags":[],"coauthors":[176],"class_list":["post-8246","post","type-post","status-publish","format-standard","hentry","category-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Using XGBoost for Deep Learning - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Using XGBoost for Deep Learning\" \/>\n<meta property=\"og:description\" content=\"Photo by Alexander Grey on Unsplash XGBoost is a powerful library that performs gradient boosting. It has an excellent reputation as a tool for predicting many kinds of problems in data science and machine learning. It is also used in regression and classification problems as it is highly intuitive and easily interpretable. 
One good indicator [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-29T17:25:29+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:04:19+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*AOtq4ZS_nATb4CfH\" \/>\n<meta name=\"author\" content=\"Mwanikii Njagi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Mwanikii Njagi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Using XGBoost for Deep Learning - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning","og_locale":"en_US","og_type":"article","og_title":"Using XGBoost for Deep Learning","og_description":"Photo by Alexander Grey on Unsplash XGBoost is a powerful library that performs gradient boosting. It has an excellent reputation as a tool for predicting many kinds of problems in data science and machine learning. It is also used in regression and classification problems as it is highly intuitive and easily interpretable. 
One good indicator [&hellip;]","og_url":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-11-29T17:25:29+00:00","article_modified_time":"2025-04-24T17:04:19+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*AOtq4ZS_nATb4CfH","type":"","width":"","height":""}],"author":"Mwanikii Njagi","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Mwanikii Njagi","Est. reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning\/"},"author":{"name":"Mwanikii Njagi","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/c7043b3e6b992af7b3220aa1f27d2162"},"headline":"Using XGBoost for Deep Learning","datePublished":"2023-11-29T17:25:29+00:00","dateModified":"2025-04-24T17:04:19+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning\/"},"wordCount":744,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*AOtq4ZS_nATb4CfH","articleSection":["Machine Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning\/","url":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning","name":"Using XGBoost for Deep Learning - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*AOtq4ZS_nATb4CfH","datePublished":"2023-11-29T17:25:29+00:00","dateModified":"2025-04-24T17:04:19+00:00","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning#primaryimage","url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*AOtq4ZS_nATb4CfH","contentUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*AOtq4ZS_nATb4CfH"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/using-xgboost-for-deep-learning#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Using XGBoost for Deep Learning"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, 
Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/c7043b3e6b992af7b3220aa1f27d2162","name":"Mwanikii Njagi","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/1a3c516cf04aca9418dfb2213081f4df","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/cropped-1_2jy9gyk0G_yaniWm8gJFVA-1-96x96.webp","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/cropped-1_2jy9gyk0G_yaniWm8gJFVA-1-96x96.webp","caption":"Mwanikii 
Njagi"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/freddynjagigmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8246","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/79"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=8246"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8246\/revisions"}],"predecessor-version":[{"id":15443,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8246\/revisions\/15443"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=8246"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=8246"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=8246"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=8246"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}