{"id":6917,"date":"2023-07-24T15:57:35","date_gmt":"2023-07-24T23:57:35","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=6917"},"modified":"2025-04-24T17:15:06","modified_gmt":"2025-04-24T17:15:06","slug":"random-forest-regression-in-python-using-scikit-learn","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/random-forest-regression-in-python-using-scikit-learn\/","title":{"rendered":"Random Forest Regression in Python Using Scikit-Learn"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/random-forest-regression-in-python-using-scikit-learn\">\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"mg bg\">\n<figure class=\"mh mi mj mk ml mg bg paragraph-image\"><picture><img loading=\"lazy\" decoding=\"async\" class=\"bg mm mn c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:2500\/1*_D1frsTtlLU-uDDC5j8PKg.jpeg\" alt=\"\" width=\"2400\" height=\"1665\"><\/picture><figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\" data-selectable-paragraph=\"\">Photo by <a class=\"af mv\" href=\"https:\/\/unsplash.com\/@szmigieldesign?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\" target=\"_blank\" rel=\"noopener ugc nofollow\">Lukasz Szmigiel<\/a> on <a class=\"af mv\" href=\"https:\/\/unsplash.com\/s\/photos\/forest?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\" target=\"_blank\" rel=\"noopener ugc nofollow\">Unsplash<\/a><\/figcaption><\/figure>\n<\/div>\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<h1 id=\"3b7e\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">Introduction<\/h1>\n<p id=\"e7fb\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">A random forest is an ensemble model that consists of many <a class=\"af mv\" 
href=\"https:\/\/heartbeat.comet.ml\/implementing-regression-using-a-decision-tree-and-scikit-learn-ac98552b43d7\" target=\"_blank\" rel=\"noopener ugc nofollow\">decision trees<\/a>. Predictions are made by averaging the predictions of each decision tree. Or, to extend the analogy\u2014much like a forest is a collection of trees, the random forest model is also a collection of decision tree models. This makes random forests a strong modeling technique that\u2019s much more powerful than a single decision tree.<\/p>\n<p id=\"9da6\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Each tree in a random forest is trained on a subset of the data, sampled over both rows and columns. This means each tree is trained on a random sample of data points, while at each decision node, a random set of features is considered for splitting.<\/p>\n<p id=\"bd60\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">The random forest regression algorithm can be more suitable for regression problems than other common algorithms. Below are a few cases where you\u2019d likely prefer a random forest over other regression algorithms:<\/p>\n<ol class=\"\">\n<li id=\"00c0\" class=\"ns nt fo be b gm on nv nw gp oo ny nz os op oc od ot oq og oh ou or ok ol om ov ow ox bj\" data-selectable-paragraph=\"\">There are non-linear or complex relationships between features and labels.<\/li>\n<li id=\"146a\" class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om ov ow ox bj\" data-selectable-paragraph=\"\">You need a model that\u2019s robust, meaning its dependence on the noise in the training set is limited. 
The random forest algorithm is more robust than a single decision tree, as it uses a set of uncorrelated decision trees.<\/li>\n<li id=\"89fc\" class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om ov ow ox bj\" data-selectable-paragraph=\"\">If your other model implementations are suffering from overfitting, you may want to use a random forest.<\/li>\n<\/ol>\n<p id=\"ed7a\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">A bit more about point 3. Decision trees are prone to overfitting, especially if we don\u2019t limit the max depth\u2014they have unlimited flexibility, and hence can keep growing until they have exactly one leaf node for every single data point, thus perfectly predicting all of them.<\/p>\n<p id=\"3bcd\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">When we limit the max depth of a decision tree, the variance is reduced, but bias increases. 
To keep both variance and bias low, the random forest algorithm combines many decision trees with randomness to reduce overfitting.<\/p>\n<p id=\"2f94\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Before we jump into an implementation of random forest for a regression problem, let\u2019s define some key terms.<\/p>\n<figure class=\"mh mi mj mk ml mg\">\n<div class=\"pd is l eb\">\n<div class=\"pe pf l\"><iframe loading=\"lazy\" class=\"ek n fc dx bg\" title=\"Random Forest Regression Machine Learning in Python and Sklearn\" src=\"https:\/\/cdn.embedly.com\/widgets\/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FVo0EqP0IBIQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DVo0EqP0IBIQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FVo0EqP0IBIQ%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube\" width=\"854\" height=\"480\" frameborder=\"0\" scrolling=\"no\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/div>\n<\/div>\n<\/figure>\n<h1 id=\"37bd\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">Key Terms<\/h1>\n<h2 id=\"f7a0\" class=\"pg mx fo be my ph pi pj nb pk pl pm ne oa pn po pp oe pq pr ps oi pt pu pv pw bj\" data-selectable-paragraph=\"\"><strong class=\"al\">Bootstrapping<\/strong><\/h2>\n<p id=\"193a\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">This is the process of sampling data, where you draw a sample data point out of a population, measure the sample, and return the sample back to the population, before drawing the next sample point.<\/p>\n<p id=\"bacd\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" 
data-selectable-paragraph=\"\">For example, out of the <em class=\"px\">n <\/em>data points given,<em class=\"px\"> s<\/em> sample data points are chosen with replacement. We train a decision tree on each of these samples. Sampling with replacement is used to make the resampling a random event. If we resampled without replacement, each draw would depend on the previous ones, and the sampling would not be random.<\/p>\n<p id=\"bde5\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">It is worth noting that because samples are drawn with replacement, some data points may be used multiple times in a single tree.<\/p>\n<h2 id=\"5867\" class=\"pg mx fo be my ph pi pj nb pk pl pm ne oa pn po pp oe pq pr ps oi pt pu pv pw bj\" data-selectable-paragraph=\"\"><strong class=\"al\">Bagging<\/strong><\/h2>\n<p id=\"df85\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">Random forests train each individual decision tree on different bootstrapped samples of the data, and then average the predictions to make an overall prediction. 
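<\/p>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">As an illustrative sketch (not from the original post), drawing one bootstrap sample of row indices with NumPy shows why some points repeat while others are left out:<\/p>\n<pre>import numpy as np\n\nrng = np.random.default_rng(0)\nn = 100                           # toy dataset size\nidx = rng.integers(0, n, size=n)  # one bootstrap sample: n draws with replacement\nprint(len(np.unique(idx)))        # fewer than n unique rows, about 63 on average<\/pre>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">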
This is called bagging.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<h1 id=\"e340\" class=\"mw mx fo be my mz qq go nb nc qr gr ne nf qs nh ni nj qt nl nm nn qu np nq nr bj\" data-selectable-paragraph=\"\">Implementation<\/h1>\n<p id=\"cfca\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">Now let\u2019s start our implementation using <a class=\"af mv\" href=\"https:\/\/github.com\/dhirajk100\/RFR\" target=\"_blank\" rel=\"noopener ugc nofollow\">Python and a Jupyter Notebook.<\/a><\/p>\n<p id=\"6f00\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Once the Jupyter Notebook is up and running, the first thing we should do is import the necessary libraries.<\/p>\n<p id=\"b55c\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">We need to import:<\/p>\n<ul class=\"\">\n<li id=\"eb0d\" class=\"ns nt fo be b gm on nv nw gp oo ny nz os op oc od ot oq og oh ou or ok ol om qv ow ox bj\" data-selectable-paragraph=\"\">NumPy<\/li>\n<li id=\"9457\" class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om qv ow ox bj\" data-selectable-paragraph=\"\">Pandas<\/li>\n<li id=\"2cd2\" class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om qv ow ox bj\" data-selectable-paragraph=\"\">RandomForestRegressor<\/li>\n<li id=\"b35f\" class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om qv ow ox bj\" data-selectable-paragraph=\"\">train_test_split<\/li>\n<li id=\"f8f1\" class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om qv ow ox bj\" data-selectable-paragraph=\"\">r2_score<\/li>\n<li id=\"4e97\" 
class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om qv ow ox bj\" data-selectable-paragraph=\"\">mean_squared_error<\/li>\n<li id=\"4c4b\" class=\"ns nt fo be b gm oy nv nw gp oz ny nz os pa oc od ot pb og oh ou pc ok ol om qv ow ox bj\" data-selectable-paragraph=\"\"><a class=\"af mv\" href=\"https:\/\/heartbeat.comet.ml\/seaborn-heatmaps-13-ways-to-customize-correlation-matrix-visualizations-f1c49c816f07\" target=\"_blank\" rel=\"noopener ugc nofollow\">Seaborn<\/a><\/li>\n<\/ul>\n<p id=\"f401\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">To actually implement the random forest regressor, we\u2019re going to use scikit-learn, and we\u2019ll import our <code class=\"cw qw qx qy qz b\">RandomForestRegressor<\/code> from <code class=\"cw qw qx qy qz b\">sklearn.ensemble<\/code>.<\/p>\n<pre>import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.metrics import r2_score, mean_squared_error\nimport seaborn as sns<\/pre>\n<figure class=\"mh mi mj mk ml mg\">\n<figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\">Import Libraries for Random Forest Regression<\/figcaption>\n<\/figure>\n<h1 id=\"f2c1\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">Load the Data<\/h1>\n<p id=\"c0c2\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">Once the libraries are imported, our next step is to load the data, stored <a class=\"af mv\" href=\"https:\/\/github.com\/dhirajk100\/RFR\" target=\"_blank\" rel=\"noopener ugc nofollow\">here<\/a>. You can download the data and keep it in your local folder. 
After that we can use the <code class=\"cw qw qx qy qz b\">read_csv<\/code> method of Pandas to load the data into a Pandas data frame <code class=\"cw qw qx qy qz b\">df<\/code>, as shown below.<\/p>\n<p id=\"f668\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Also shown in the snapshot of the data below, the data frame has two columns, x and y. Here, x is the feature and y is the label. We\u2019re going to predict y using x as an independent variable.<\/p>\n<p id=\"c325\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\"><code class=\"cw qw qx qy qz b\">df = pd.read_csv('Random-Forest-Regression-Data.csv')<\/code><\/p>\n<figure class=\"mh mi mj mk ml mg mr ms paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mm mn c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:223\/1*EBFs-vaziTmF4D0o54Xd5w.png\" alt=\"\" width=\"223\" height=\"162\"><\/figure><div class=\"mr ms rb\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*EBFs-vaziTmF4D0o54Xd5w.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*EBFs-vaziTmF4D0o54Xd5w.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*EBFs-vaziTmF4D0o54Xd5w.png 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*EBFs-vaziTmF4D0o54Xd5w.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*EBFs-vaziTmF4D0o54Xd5w.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*EBFs-vaziTmF4D0o54Xd5w.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:446\/format:webp\/1*EBFs-vaziTmF4D0o54Xd5w.png 446w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, 
(min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 223px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*EBFs-vaziTmF4D0o54Xd5w.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*EBFs-vaziTmF4D0o54Xd5w.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*EBFs-vaziTmF4D0o54Xd5w.png 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*EBFs-vaziTmF4D0o54Xd5w.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*EBFs-vaziTmF4D0o54Xd5w.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*EBFs-vaziTmF4D0o54Xd5w.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:446\/1*EBFs-vaziTmF4D0o54Xd5w.png 446w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 223px\" data-testid=\"og\"><\/picture><\/div>\n<figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\" data-selectable-paragraph=\"\">Data snapshot for Random Forest Regression<\/figcaption>\n<\/figure>\n<h1 id=\"cc25\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">Data pre-processing<\/h1>\n<p id=\"0d64\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">Before feeding the 
data to the random forest regression model, we need to do some <a class=\"af mv\" href=\"https:\/\/heartbeat.comet.ml\/data-preprocessing-and-visualization-implications-for-your-machine-learning-model-8dfbaaa51423\" target=\"_blank\" rel=\"noopener ugc nofollow\">pre-processing<\/a>.<\/p>\n<p id=\"409b\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Here, we\u2019ll create the x and y variables by taking them from the dataset and using the <code class=\"cw qw qx qy qz b\">train_test_split<\/code> function of scikit-learn to split the data into training and test sets.<\/p>\n<p id=\"03a5\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">We also need to reshape the values using the <code class=\"cw qw qx qy qz b\">reshape<\/code> method so that we can pass the data to <code class=\"cw qw qx qy qz b\">train_test_split<\/code> in the format required.<\/p>\n<p id=\"999a\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Note that the test size of 0.3 indicates we\u2019ve used 30% of the data for testing. <code class=\"cw qw qx qy qz b\">random_state<\/code> ensures reproducibility. 
For the output of <code class=\"cw qw qx qy qz b\">train_test_split<\/code>, we get <code class=\"cw qw qx qy qz b\">x_train<\/code>, <code class=\"cw qw qx qy qz b\">x_test<\/code>, <code class=\"cw qw qx qy qz b\">y_train<\/code>, and <code class=\"cw qw qx qy qz b\">y_test<\/code> values.<\/p>\n<pre>x = df.x.values.reshape(-1, 1)\ny = df.y.values.reshape(-1, 1)\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.30, random_state=42)<\/pre>\n<figure class=\"mh mi mj mk ml mg\">\n<figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\">Data Pre-processing for Random Forest Regression<\/figcaption>\n<\/figure>\n<h1 id=\"9c2d\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">Train the model<\/h1>\n<p id=\"fd13\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">We\u2019re going to use <code class=\"cw qw qx qy qz b\">x_train<\/code> and <code class=\"cw qw qx qy qz b\">y_train<\/code>, obtained above, to train our random forest regression model. We\u2019re using the <code class=\"cw qw qx qy qz b\">fit<\/code> method and passing the parameters as shown below.<\/p>\n<p id=\"db78\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Note that the output of this cell describes a large number of parameters, like criterion, max depth, etc., for the model. 
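<\/p>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">The training step boils down to something like this (a sketch: the variable name <code class=\"cw qw qx qy qz b\">rf<\/code> and the hyperparameter values are illustrative, and may differ from the notebook):<\/p>\n<pre>rf = RandomForestRegressor(n_estimators=100, random_state=42)\nrf.fit(x_train, y_train.ravel())  # ravel() gives y the 1-D shape scikit-learn expects<\/pre>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">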
All these parameters are configurable, and you\u2019re free to tune them to match your requirements.<\/p>\n<figure class=\"mh mi mj mk ml mg mr ms paragraph-image\">\n<div class=\"rd re eb rf bg rg\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mm mn c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*enXVaG4bbZloacv144G0TA.png\" alt=\"\" width=\"700\" height=\"253\"><\/figure><div class=\"mr ms rc\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*enXVaG4bbZloacv144G0TA.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*enXVaG4bbZloacv144G0TA.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*enXVaG4bbZloacv144G0TA.png 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*enXVaG4bbZloacv144G0TA.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*enXVaG4bbZloacv144G0TA.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*enXVaG4bbZloacv144G0TA.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/format:webp\/1*enXVaG4bbZloacv144G0TA.png 1400w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*enXVaG4bbZloacv144G0TA.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*enXVaG4bbZloacv144G0TA.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*enXVaG4bbZloacv144G0TA.png 750w, 
https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*enXVaG4bbZloacv144G0TA.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*enXVaG4bbZloacv144G0TA.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*enXVaG4bbZloacv144G0TA.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*enXVaG4bbZloacv144G0TA.png 1400w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\"><\/picture><\/div>\n<\/div>\n<figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\" data-selectable-paragraph=\"\">Random Forest Regression Model Training using Fit method<\/figcaption>\n<\/figure>\n<h1 id=\"641f\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">Prediction<\/h1>\n<p id=\"7040\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">Once the model is trained, it\u2019s ready to make predictions. 
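<\/p>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">In code, prediction is a single call (assuming the trained model is stored in a variable such as <code class=\"cw qw qx qy qz b\">rf<\/code>; the name is illustrative):<\/p>\n<pre>y_pred = rf.predict(x_test)  # one predicted value per test row<\/pre>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">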
We can use the <code class=\"cw qw qx qy qz b\">predict<\/code> method on the model and pass <code class=\"cw qw qx qy qz b\">x_test<\/code> as a parameter to get the output as <code class=\"cw qw qx qy qz b\">y_pred<\/code>.<\/p>\n<p id=\"8559\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Notice that the prediction output is an array of real numbers corresponding to the input array.<\/p>\n<figure class=\"mh mi mj mk ml mg mr ms paragraph-image\">\n<div class=\"rd re eb rf bg rg\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mm mn c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*EtBpR2TK8vIRaan7wPFayA.png\" alt=\"\" width=\"700\" height=\"291\"><\/figure><div class=\"mr ms rh\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*EtBpR2TK8vIRaan7wPFayA.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*EtBpR2TK8vIRaan7wPFayA.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*EtBpR2TK8vIRaan7wPFayA.png 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*EtBpR2TK8vIRaan7wPFayA.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*EtBpR2TK8vIRaan7wPFayA.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*EtBpR2TK8vIRaan7wPFayA.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/format:webp\/1*EtBpR2TK8vIRaan7wPFayA.png 1400w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 
100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*EtBpR2TK8vIRaan7wPFayA.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*EtBpR2TK8vIRaan7wPFayA.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*EtBpR2TK8vIRaan7wPFayA.png 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*EtBpR2TK8vIRaan7wPFayA.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*EtBpR2TK8vIRaan7wPFayA.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*EtBpR2TK8vIRaan7wPFayA.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*EtBpR2TK8vIRaan7wPFayA.png 1400w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\"><\/picture><\/div>\n<\/div>\n<figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\" data-selectable-paragraph=\"\">Random Forest Regression Model Prediction<\/figcaption>\n<\/figure>\n<h1 id=\"c080\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">Model Evaluation<\/h1>\n<p id=\"9412\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">Finally, we need to check to see how well our model is performing on the test data. 
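<\/p>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">A minimal sketch of this evaluation, reusing the metrics imported at the start (variable names are illustrative):<\/p>\n<pre>rmse = np.sqrt(mean_squared_error(y_test, y_pred))\nr2 = r2_score(y_test, y_pred)\nprint(rmse, r2)<\/pre>\n<p class=\"pw-post-body-paragraph\" data-selectable-paragraph=\"\">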
For this, we evaluate our model by finding the root mean squared error produced by the model.<\/p>\n<p id=\"9946\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">Mean squared error is a built-in scikit-learn function, and we apply NumPy\u2019s square root function (<code class=\"cw qw qx qy qz b\">np.sqrt<\/code>) to its output to get the root mean squared error value.<\/p>\n<figure class=\"mh mi mj mk ml mg mr ms paragraph-image\">\n<div class=\"rd re eb rf bg rg\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mm mn c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*N9XOrHhsuJgWUG9IeVx6Ug.png\" alt=\"\" width=\"700\" height=\"214\"><\/figure><div class=\"mr ms ri\"><picture><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/format:webp\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/format:webp\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/format:webp\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/format:webp\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/format:webp\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/format:webp\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/format:webp\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 1400w\" type=\"image\/webp\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, 
(-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\"><source srcset=\"https:\/\/miro.medium.com\/v2\/resize:fit:640\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 640w, https:\/\/miro.medium.com\/v2\/resize:fit:720\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 720w, https:\/\/miro.medium.com\/v2\/resize:fit:750\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 750w, https:\/\/miro.medium.com\/v2\/resize:fit:786\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 786w, https:\/\/miro.medium.com\/v2\/resize:fit:828\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 828w, https:\/\/miro.medium.com\/v2\/resize:fit:1100\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 1100w, https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*N9XOrHhsuJgWUG9IeVx6Ug.png 1400w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px\" data-testid=\"og\"><\/picture><\/div>\n<\/div>\n<figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\" data-selectable-paragraph=\"\">Random Forest Regression Evaluation<\/figcaption>\n<\/figure>\n<h1 id=\"855a\" class=\"mw mx fo be my mz na go nb nc nd gr ne nf ng nh ni nj nk nl nm nn no np nq nr bj\" data-selectable-paragraph=\"\">End notes<\/h1>\n<p id=\"4a3c\" class=\"pw-post-body-paragraph ns nt fo be b gm nu nv nw gp nx ny nz oa ob oc od oe of og oh oi oj ok ol om fh bj\" data-selectable-paragraph=\"\">In this article, we discussed how to implement regression using the random forest algorithm. 
We also looked at how to pre-process the data and split it into features (variable X) and labels (variable y).<\/p>\n<p id=\"1de9\" class=\"pw-post-body-paragraph ns nt fo be b gm on nv nw gp oo ny nz oa op oc od oe oq og oh oi or ok ol om fh bj\" data-selectable-paragraph=\"\">After that, we trained our model and used it to make predictions. You can find the data used <a class=\"af mv\" href=\"https:\/\/github.com\/dhirajk100\/RFR\" target=\"_blank\" rel=\"noopener ugc nofollow\">here<\/a>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"author":48,"yoast_head":"<title>Random Forest Regression in Python Using Scikit-Learn - Comet<\/title>\n<meta name=\"author\" content=\"Dhiraj K\" \/>\n"}