{"id":8057,"date":"2023-10-31T11:49:25","date_gmt":"2023-10-31T19:49:25","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8057"},"modified":"2025-04-24T17:05:03","modified_gmt":"2025-04-24T17:05:03","slug":"does-bootstrap-aggregation-help-in-improving-model-performance-and-stability","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability\/","title":{"rendered":"Does Bootstrap Aggregation Help in Improving Model Performance and Stability?"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability\">\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"3da7\">Bootstrap aggregation (bagging) is an ensemble technique that addresses overfitting in classification and regression problems, with the goal of improving the performance and accuracy of machine learning models. To achieve this, random subsets of the original dataset are drawn with replacement, and a classifier (for classification) or a regressor (for regression) is fitted to each subset. The predictions from all subsets are then combined, by majority vote for classification or by averaging for regression, into a more accurate final prediction.<\/p>\n\n\n\n<h1 class=\"wp-block-heading ms mt fr be mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no np bj\" id=\"8c1c\">Evaluating a Base Classifier<\/h1>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx nq lz ma mb nr md me mf ns mh mi mj nt ml mm mn nu mp mq mr fk bj\" id=\"1bf9\">Before we can see how bagging enhances model performance, we must first assess how the base classifier performs on the dataset by itself. If you need a refresher on decision trees, revisit that lesson before continuing. 
Bagging is a development of this idea.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"dc2d\">Using sklearn&#8217;s wine dataset, we&#8217;ll try to identify the different wine classes.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"5b9d\"><strong class=\"be nv\">Importing the essential modules<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*tBGGRscCVLY00lcSLh5xMw.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Importing libraries<\/figcaption><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"5c37\">Next, the data is loaded and stored in the variables X (input features) and y (target). To preserve the feature names when loading the data, the parameter as_frame is set to True.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*2DX9m--xI8auGUTi\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"45f7\">To assess our model properly on unseen data, we need to split X and y into train and test sets. 
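<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\">Since the code in this section appears only as screenshots, here is a minimal, self-contained sketch of the baseline workflow: loading the data with as_frame=True, splitting it, and fitting and scoring a single decision tree. The test_size and random_state values are illustrative assumptions, not necessarily the ones used in the screenshots:<\/p>

```python
# Minimal sketch of the baseline setup described above (illustrative only).
# The test_size and random_state values are assumptions, not taken from
# the original screenshots.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# as_frame=True returns pandas objects, preserving the feature names
data = load_wine(as_frame=True)
X, y = data.data, data.target

# hold out a test set so the model is evaluated on unseen data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=22)

# fit a single decision tree as the baseline classifier
dtree = DecisionTreeClassifier(random_state=22)
dtree.fit(X_train, y_train)

# predict on the held-out test set and score the baseline
y_pred = dtree.predict(X_test)
print('Base accuracy:', accuracy_score(y_test, y_pred))
```

<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\">The exact baseline accuracy you get depends on the split and the random_state you choose. 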
You may get more details about data splitting in the Train\/Test lesson.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*dcZlepRe1weH5O9D\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"7aeb\">Now that our data is ready, we can instantiate a base classifier and fit it to the training set.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*_pg8Yy6bWiTx2do8\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"df8e\">Because the test set was held out during training, we can now predict the wine class on it and assess the model&#8217;s performance.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*9XdEqiktVg5X_CcV\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"97a8\">Output:<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*suAxVnw73sCt4PPT\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"0102\">With the current parameters, the base classifier performs well on the dataset, reaching 82% accuracy on the test set (you may see different results if the random_state option is not set).<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"9f4a\">We can 
compare the performance of the Bagging Classifier and a single Decision Tree Classifier now that we know the baseline accuracy for the test dataset.<\/p>\n\n\n\n<h1 class=\"wp-block-heading ms mt fr be mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no np bj\" id=\"4576\">Creating a Bagging Classifier<\/h1>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx nq lz ma mb nr md me mf ns mh mi mj nt ml mm mn nu mp mq mr fk bj\" id=\"e32b\">To perform bagging, we must set the parameter n_estimators: the number of base classifiers our model will aggregate.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"9d19\">We will use only a modest range of values for the number of estimators on this sample dataset; in practice, considerably wider ranges are explored, and hyperparameter tuning is typically done via a grid search.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"0bbd\">First, we import the required model.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*qWJui-bw3VKYoXMY\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"d4f1\">Let&#8217;s establish the range of values for the number of estimators we intend to use in each ensemble.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*WVeXEFM8I1CWKiHi\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"4bc7\">We need 
a way to iterate over the range of values and record the results from each ensemble so that we can compare how the Bagging Classifier performs across different values of n_estimators. To do this, we will build a for loop, storing the models and scores in separate lists for later visualization.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"4e02\">Note: Since the DecisionTreeClassifier is the default base classifier in the BaggingClassifier, we don&#8217;t need to set it when we initialize the bagging model.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*PoFKZILjV3lSq7UE\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"e5c1\">With the models and scores stored, let&#8217;s visualize the improvement.<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*Sy5yVQR--vFrrAjH\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"c58d\">Output:<\/p>\n\n\n\n<figure class=\"wp-block-image nz oa ob oc od oe nw nx paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*ljclUdcfvglGwZAf\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"04f5\">By iterating through different settings for the number of estimators, we observe a rise in model performance from <strong class=\"be nv\">82.2%<\/strong> to <strong class=\"be nv\">95.5%<\/strong>. 
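<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\">The comparison loop described in this section can be sketched in code roughly as follows; the estimator range, split parameters, and random_state here are illustrative assumptions rather than the exact values from the screenshots:<\/p>

```python
# Illustrative sketch of the bagging comparison loop described above.
# The estimator range, split parameters, and random_state are assumptions.
from sklearn.datasets import load_wine
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=22)

# candidate ensemble sizes to compare
estimator_range = [2, 4, 6, 8, 10, 12, 14, 16]

models, scores = [], []
for n_estimators in estimator_range:
    # the base estimator defaults to a DecisionTreeClassifier
    clf = BaggingClassifier(n_estimators=n_estimators, random_state=22)
    clf.fit(X_train, y_train)
    models.append(clf)
    scores.append(accuracy_score(y_test, clf.predict(X_test)))

for n, score in zip(estimator_range, scores):
    print(n, 'estimators:', score)
```

<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\">Plotting scores against estimator_range (for example, with matplotlib) produces a curve like the one shown above. 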
The accuracy starts to decline after 14 estimators; again, the exact values you see will change if you specify a different random_state. Because of this, cross-validation is recommended as a best practice for reliable results.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"2133\">In this instance, we observe a <strong class=\"be nv\">13.3%<\/strong> improvement in wine-classification accuracy.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"38bb\">Now, it&#8217;s clear how bootstrap aggregation helps improve model performance and stability. If you want to read some of my other blogs, you can find them below:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><a class=\"af pb\" href=\"https:\/\/heartbeat.comet.ml\/knn-a-complete-guide-ade060c416d9\" target=\"_blank\" rel=\"noopener ugc nofollow\">KNN: A Complete Guide<\/a><\/li>\n\n\n\n<li><a class=\"af pb\" href=\"https:\/\/heartbeat.comet.ml\/naive-bayes-a-complete-guide-73171d01e480\" target=\"_blank\" rel=\"noopener ugc nofollow\">Naive Bayes: A Complete Guide<\/a><\/li>\n\n\n\n<li><a class=\"af pb\" href=\"https:\/\/heartbeat.comet.ml\/linear-regression-a-complete-guide-1c124fc7988a\" target=\"_blank\" rel=\"noopener ugc nofollow\">Linear Regression: A Complete Guide<\/a><\/li>\n<\/ol>\n\n\n\n<p class=\"pw-post-body-paragraph lv lw fr be b lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr fk bj\" id=\"5ffb\">I encourage you to give it a try, and feel free to ask any questions in the comment section.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An ensemble technique called bootstrap aggregation (bagging) addresses overfitting for classification or regression issues. The goal of bagging is to enhance the performance and accuracy of machine learning models. 
To achieve this, random subsets of the original dataset are taken with replacement, and each subset is fitted with either a classifier (for classification) or a [&hellip;]<\/p>\n","protected":false},"author":107,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[6],"tags":[],"coauthors":[205],"class_list":["post-8057","post","type-post","status-publish","format-standard","hentry","category-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Does Bootstrap Aggregation Help in Improving Model Performance and Stability? - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Does Bootstrap Aggregation Help in Improving Model Performance and Stability?\" \/>\n<meta property=\"og:description\" content=\"An ensemble technique called bootstrap aggregation (bagging) addresses overfitting for classification or regression issues. The goal of bagging is to enhance the performance and accuracy of machine learning models. 
To achieve this, random subsets of the original dataset are taken with replacement, and each subset is fitted with either a classifier (for classification) or a [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-31T19:49:25+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:05:03+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*tBGGRscCVLY00lcSLh5xMw.png\" \/>\n<meta name=\"author\" content=\"Sandeep Painuly\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sandeep Painuly\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Does Bootstrap Aggregation Help in Improving Model Performance and Stability? - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability","og_locale":"en_US","og_type":"article","og_title":"Does Bootstrap Aggregation Help in Improving Model Performance and Stability?","og_description":"An ensemble technique called bootstrap aggregation (bagging) addresses overfitting for classification or regression issues. 
The goal of bagging is to enhance the performance and accuracy of machine learning models. To achieve this, random subsets of the original dataset are taken with replacement, and each subset is fitted with either a classifier (for classification) or a [&hellip;]","og_url":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-10-31T19:49:25+00:00","article_modified_time":"2025-04-24T17:05:03+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*tBGGRscCVLY00lcSLh5xMw.png","type":"","width":"","height":""}],"author":"Sandeep Painuly","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Sandeep Painuly","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability\/"},"author":{"name":"Sandeep Painuly","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/2159891ba1770da2ee19de65c6b3a779"},"headline":"Does Bootstrap Aggregation Help in Improving Model Performance and 
Stability?","datePublished":"2023-10-31T19:49:25+00:00","dateModified":"2025-04-24T17:05:03+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability\/"},"wordCount":639,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*tBGGRscCVLY00lcSLh5xMw.png","articleSection":["Machine Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability\/","url":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability","name":"Does Bootstrap Aggregation Help in Improving Model Performance and Stability? - Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*tBGGRscCVLY00lcSLh5xMw.png","datePublished":"2023-10-31T19:49:25+00:00","dateModified":"2025-04-24T17:05:03+00:00","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improvin
g-model-performance-and-stability#primaryimage","url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*tBGGRscCVLY00lcSLh5xMw.png","contentUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*tBGGRscCVLY00lcSLh5xMw.png"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/does-bootstrap-aggregation-help-in-improving-model-performance-and-stability#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Does Bootstrap Aggregation Help in Improving Model Performance and Stability?"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/2159891ba1770da2ee19de65c6b3a779","name":"Sandeep 
Painuly","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/197b6cf78f637e5b06bd1f8ba47ad3ea","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/11\/7NZMvlNL_400x400-96x96.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/11\/7NZMvlNL_400x400-96x96.jpg","caption":"Sandeep Painuly"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/sandeeppainuly16gmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8057","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/107"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=8057"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8057\/revisions"}],"predecessor-version":[{"id":15479,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8057\/revisions\/15479"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=8057"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=8057"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=8057"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=8057"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}