{"id":4148,"date":"2022-10-20T13:42:47","date_gmt":"2022-10-20T21:42:47","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=4148"},"modified":"2025-04-24T17:16:57","modified_gmt":"2025-04-24T17:16:57","slug":"hyperparameter-optimization-with-comet","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/","title":{"rendered":"Hyperparameter Optimization With Comet"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\">\n\n\n\n<div class=\"ir is it iu iv\">\n<p id=\"6f61\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">We face optimization problems all the time in our daily life: you don\u2019t merely pick up a random pair of jeans and head to the checkout when you\u2019re doing clothes shopping \u2014 hopefully not, I should say. There\u2019s a process to it:<\/p>\n<blockquote class=\"ly lz ma\"><p id=\"38a1\" class=\"ld le mb bm b lf lg jz lh li lj kc lk mc lm ln lo md lq lr ls me lu lv lw lx ir ga\" data-selectable-paragraph=\"\">You may want a specific brand of jeans.<br>\nMaybe dark-wash jeans are your go-to.<br>\nMaybe you have a certain fit you prefer: straight, slim, skinny.<br>\nMost important of all, the jeans have got to fit you \u2014 is your size available?<\/p><\/blockquote>\n<p id=\"1ef6\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">To feel like you\u2019ve got your money\u2019s worth, as you sift through the in-store jeans, you\u2019re mentally recording whether they meet the criteria for the type of jeans you want to purchase.<\/p>\n<p id=\"2387\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The 
best jeans for you would mark all of your criteria (i.e. Levi branded, dark wash, stretch skinny fit, and your size) \u2014 those are the optimal jeans.<\/p>\n<p id=\"0e0b\" class=\"mg mh iy bm mi mj mk ml mm mn mo lx cn\" data-selectable-paragraph=\"\"><strong>Let\u2019s see what this optimization means in a machine learning context.<\/strong><\/p>\n<p id=\"a2ec\" class=\"pw-post-body-paragraph ld le iy bm b lf mp jz lh li mq kc lk ll mr ln lo lp ms lr ls lt mt lv lw lx ir ga\" data-selectable-paragraph=\"\">In this article we are going to discover the following, using&nbsp;<a class=\"au lc\" href=\"https:\/\/www.comet.com\/site\/?utm_campaign=commuinty-comet-optimizer&amp;utm_source=blog&amp;utm_medium=heartbeat\" target=\"_blank\" rel=\"noopener ugc nofollow\">Comet\u2019s experiment management platform<\/a>:<br>\n\u2192 What it means to optimize a learning algorithm<br>\n\u2192 Comet\u2019s&nbsp;<code class=\"fp mu mv mw mx b\">Optimizer<\/code>&nbsp;class<br>\n\u2192 Optimization approaches<br>\n\u2192 An End-to-end example<\/p>\n<p id=\"027b\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Do you prefer to watch this tutorial? 
See&nbsp;<a class=\"au lc\" href=\"https:\/\/youtu.be\/eQktU6ytOpI\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"bm my\">Hyperparameter optimization with CometML<\/strong><\/a><\/p>\n<figure class=\"ko kp kq kr gx ks\">\n<div class=\"m fs l do\">\n<div class=\"mz na l\"><iframe loading=\"lazy\" class=\"fo aq as ag ce\" title=\"- YouTube\" src=\"https:\/\/cdn.embedly.com\/widgets\/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FeQktU6ytOpI&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DeQktU6ytOpI&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube\" width=\"854\" height=\"480\" frameborder=\"0\" scrolling=\"auto\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><\/iframe><\/div>\n<\/div>\n<\/figure>\n<h2 id=\"51b6\" class=\"nb nc iy bm nd ne nf ng nh ni nj nk nl ke nm kf nn kh no ki np kk nq kl nr ns ga\">Optimizing learning algorithms<\/h2>\n<p id=\"edd9\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\">The hyperparameter optimization problem we face in machine learning is not too dissimilar from the one we face when out jeans shopping (or whatever we want to optimize for).<\/p>\n<p id=\"c44a\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">In the same way we\u2019d search through various jeans, we need to search through various algorithms we wish to use to solve a problem at hand. 
Once we feel as though we\u2019re onto something with a certain algorithm, it\u2019s important to optimize the algorithm, which means&nbsp;<strong class=\"bm my\">we minimize the error to ensure the model is solving the problem to the best of its abilities<\/strong>&nbsp;\u2014 we\u2019re \u201cgetting our money\u2019s worth!\u201d<\/p>\n<p id=\"0af1\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Hyperparameters are values that control the learning process of an algorithm. When implementing a learning algorithm, we define these values beforehand, as there is no way for the algorithm to learn them from training. Examples of hyperparameters include:<\/p>\n<ul class=\"\">\n<li id=\"e708\" class=\"ny nz iy bm b lf lg li lj ll oa lp ob lt oc lx od oe of og ga\" data-selectable-paragraph=\"\">the number of trees in a&nbsp;<a class=\"au lc\" href=\"https:\/\/towardsdatascience.com\/random-forest-overview-746e7983316\" target=\"_blank\" rel=\"noopener\">random forest<\/a><\/li>\n<li id=\"7988\" class=\"ny nz iy bm b lf oh li oi ll oj lp ok lt ol lx od oe of og ga\" data-selectable-paragraph=\"\">the learning rate of an algorithm<\/li>\n<li id=\"82f7\" class=\"ny nz iy bm b lf oh li oi ll oj lp ok lt ol lx od oe of og ga\" data-selectable-paragraph=\"\">the number of layers in a neural network<\/li>\n<\/ul>\n<p id=\"a24f\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Implementing the optimal version of an algorithm means selecting the hyperparameters that minimize the error for the problem at hand \u2014or put another way, we are trying to maximize the performance of our algorithm for the dataset being used.<\/p>\n<p id=\"65e6\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" 
data-selectable-paragraph=\"\">When you begin on a problem, there is no clear way to know what hyperparameters will result in the optimal model. To find them we must do&nbsp;<strong class=\"bm my\">hyperparameter optimization<\/strong>.<\/p>\n<h2 id=\"0aa6\" class=\"nb nc iy bm nd ne nf ng nh ni nj nk nl ke nm kf nn kh no ki np kk nq kl nr ns ga\">Optimization with Comet<\/h2>\n<p id=\"2ea6\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\"><a class=\"au lc\" href=\"https:\/\/www.comet.com\/site\/?utm_campaign=commuinty-comet-optimizer&amp;utm_source=blog&amp;utm_medium=heartbeat\" target=\"_blank\" rel=\"noopener ugc nofollow\">Comet<\/a>&nbsp;is a machine learning platform that permits data scientists and teams to track, monitor, compare, explain, and optimize experiments as well as models. The optimizer will be our main focus in this article.<\/p>\n<p id=\"f093\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">In line with Comet\u2019s&nbsp;<a class=\"au lc\" href=\"https:\/\/www.comet.com\/docs\/python-sdk\/introduction-optimizer\/#optimizer-configuration\" target=\"_blank\" rel=\"noopener ugc nofollow\">documentation<\/a>, the&nbsp;<code class=\"fp mu mv mw mx b\">Optimizer<\/code>&nbsp;class may be used to:<\/p>\n<p id=\"e0fc\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\"><em class=\"mb\">\u201cdynamically find the best set of hyperparameter values that will minimize or maximize a particular metric.\u201d<\/em><\/p>\n<p id=\"7ed0\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The class is also capable of making suggestions as to what hyperparameter values may be 
worth trying next \u2014 which is done in serial, parallel, or a combination of both.<\/p>\n<p id=\"81a8\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Arguments used to define the&nbsp;<code class=\"fp mu mv mw mx b\">Optimizer<\/code>&nbsp;include:<br>\n\u2192&nbsp;<strong class=\"bm my\">config<\/strong>:&nbsp;<em class=\"mb\">optional<\/em>, if&nbsp;<code class=\"fp mu mv mw mx b\">COMET_OPTIMIZER_ID<\/code>&nbsp;is configured, otherwise is either a config dictionary, optimizer id, or a config filename.<br>\n\u2192&nbsp;<strong class=\"bm my\">trials<\/strong>:&nbsp;<em class=\"mb\">int<\/em>(optional, default 1) number of trials per parameter set to test<br>\n\u2192&nbsp;<strong class=\"bm my\">verbose<\/strong>:&nbsp;<em class=\"mb\">boolean<\/em>&nbsp;(optional, default 1) verbosity level where 0 means no output, and 1 (or greater) means to show more detail.<br>\n\u2192&nbsp;<strong class=\"bm my\">experiment_class<\/strong>:&nbsp;<em class=\"mb\">string&nbsp;<\/em>or callable (optional, default None), class to use (for example, OfflineExperiment).<\/p>\n<p id=\"adca\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Notice there&#8217;s an option to pass a configuration dictionary to the&nbsp;<strong class=\"bm my\">config<\/strong>&nbsp;parameter. 
This is where we detail the optimization approach we want the&nbsp;<code class=\"fp mu mv mw mx b\">Optimizer<\/code>&nbsp;to perform.<\/p>\n<p id=\"d850\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The dictionary the config parameter wants us to pass consists of the following keys:<br>\n\u2192&nbsp;<strong class=\"bm my\">algorithm<\/strong>:&nbsp;<em class=\"mb\">string<\/em>, the search algorithm to be used<br>\n\u2192&nbsp;<strong class=\"bm my\">spec<\/strong>:&nbsp;<em class=\"mb\">dictionary<\/em>, the algorithm-specific specifications.<br>\n\u2192&nbsp;<strong class=\"bm my\">parameters<\/strong>:&nbsp;<em class=\"mb\">dictionary<\/em>, the parameter distribution space descriptions<br>\n\u2192&nbsp;<strong class=\"bm my\">name<\/strong>:&nbsp;<em class=\"mb\">string<\/em>, a distinct name to associate with the search instance (optional)<br>\n\u2192&nbsp;<strong class=\"bm my\">trials<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the number of trials per experiment to run (optional, defaults to 1).<\/p>\n<blockquote class=\"ly lz ma\"><p id=\"5b8e\" class=\"ld le mb bm b lf lg jz lh li lj kc lk mc lm ln lo md lq lr ls me lu lv lw lx ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm my\">Note<\/strong>: We will cover the various algorithms and their&nbsp;<code class=\"fp mu mv mw mx b\">spec<\/code>&nbsp;in the&nbsp;<em class=\"iy\">Optimization methods<\/em>&nbsp;section.<\/p><\/blockquote>\n<p id=\"c56f\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The parameters dictionary is where we define what hyperparameters to tune in our model. 
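<\/p>
<p>Putting these keys together, a complete configuration dictionary might look like the sketch below. The algorithm, spec values, and parameter entries are illustrative examples, not defaults:<\/p>

```python
# Illustrative Optimizer configuration dictionary.
# Top-level keys: "algorithm", "spec", "parameters", "name", "trials".
config = {
    "algorithm": "bayes",      # search algorithm: "bayes", "grid", or "random"
    "spec": {                  # algorithm-specific settings
        "maxCombo": 20,
        "objective": "minimize",
        "metric": "loss",
    },
    "parameters": {            # the hyperparameter search space
        "n_estimators": {"type": "integer", "scalingType": "uniform",
                         "min": 100, "max": 300},
        "criterion": {"type": "categorical", "values": ["gini", "entropy"]},
    },
    "name": "example-search",  # optional label for this search
    "trials": 1,               # optional, experiments per parameter set
}
```

<p>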
Let\u2019s take a closer look at what it consists of.<\/p>\n<h3 id=\"d874\" class=\"om nc iy bm nd on oo op nh oq or os nl ll ot ou nn lp ov ow np lt ox oy nr oz ga\"><strong>The&nbsp;<code class=\"fp mu mv mw mx b\">Parameters<\/code>&nbsp;dictionary<\/strong><\/h3>\n<p id=\"5292\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\">In our configuration dictionary, we have a&nbsp;<code class=\"fp mu mv mw mx b\">parameters<\/code>&nbsp;key, which takes a dictionary. In our&nbsp;<code class=\"fp mu mv mw mx b\">parameters<\/code>&nbsp;dictionary, we must define specific data types that are in accord with the data type our model hyperparameter is expecting.<\/p>\n<p id=\"b049\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Comet provides us with four types: 1) integer 2) double or float 3) discrete (for a list of numbers) 4) categorical (for a list of strings). The formatting of each parameter is inspired by&nbsp;<a class=\"au lc\" href=\"https:\/\/static.googleusercontent.com\/media\/research.google.com\/en\/\/pubs\/archive\/46180.pdf\" target=\"_blank\" rel=\"noopener ugc nofollow\">Google\u2019s Vizier<\/a>. Let\u2019s dive deeper into each one.<\/p>\n<h3 id=\"0c49\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\"><strong class=\"bm my\">Integer and Double\/Float<\/strong><\/h3>\n<p class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Integers and double\/float types allow us to determine the scaling type of our values. 
Comet provides five possible distributions to select from: linear, uniform, normal, log uniform, log normal.<\/p>\n<p id=\"2b12\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The scaling type we use determines the distribution between the min and max values of the hyperparameter.<\/p>\n<pre class=\"ko kp kq kr gx pa bs pb\"><span id=\"d9a0\" class=\"ga om nc iy mx b dm pc pd l pe\" data-selectable-paragraph=\"\">{\"PARAMETER-NAME\":\n   {\"type\": \"integer\",\n   \"scalingType\": \"linear\" | \"uniform\" | \"normal\" | \"loguniform\" | \"lognormal\",\n   \"min\": INTEGER,\n   \"max\": INTEGER\n   },\n ....\n}<\/span><\/pre>\n<p id=\"e557\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">For clarity, the definitions are as follows:<\/p>\n<p id=\"df07\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">\u2192&nbsp;<strong class=\"bm my\">linear<\/strong>: for integers, this means an independent distribution (used for things like seed values); for double, the same as uniform<br>\n\u2192&nbsp;<strong class=\"bm my\">uniform<\/strong>: a uniform distribution between \u201cmin\u201d and \u201cmax\u201d<br>\n\u2192&nbsp;<strong class=\"bm my\">normal<\/strong>: a normal distribution centered on \u201cmu\u201d, with a standard deviation of \u201csigma\u201d<br>\n\u2192&nbsp;<strong class=\"bm my\">lognormal<\/strong>: a log-normal distribution centered on \u201cmu\u201d, with a standard deviation of \u201csigma\u201d<br>\n\u2192&nbsp;<strong class=\"bm my\">loguniform<\/strong>: a log-uniform distribution between \u201cmin\u201d and \u201cmax\u201d. 
Computes&nbsp;<code class=\"fp mu mv mw mx b\">exp(uniform(log(min), log(max)))<\/code><\/p>\n<h3 id=\"958d\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\"><strong class=\"bm my\">Categorical &amp; Discrete<\/strong><\/h3>\n<p class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\">For categorical hyperparameters, the possible values are a list of strings.<\/p>\n<p id=\"20b8\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">For discrete hyperparameters, the possible values are a list of numbers.<\/p>\n<pre class=\"ko kp kq kr gx pa bs pb\"><span id=\"9c79\" class=\"ga om nc iy mx b dm pc pd l pe\" data-selectable-paragraph=\"\">{\"PARAMETER-NAME\":\n   {\"type\": \"categorical\",\n   \"values\": [\"this\", \"is\", \"a\", \"list\"]\n   },\n ....\n}<\/span><\/pre>\n<p id=\"cebc\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Now we can move on to the&nbsp;<code class=\"fp mu mv mw mx b\">algorithm<\/code>&nbsp;parameter and see what goes into the&nbsp;<code class=\"fp mu mv mw mx b\">spec<\/code> dictionary.<\/p>\n<\/div>\n\n\n\n<div class=\"ir is it iu iv\">\n<p id=\"264b\" class=\"mg mh iy bm mi mj mk ml mm mn mo lx cn\" data-selectable-paragraph=\"\"><strong>More from the Comet Report Library: <a class=\"au lc\" href=\"https:\/\/www.comet.com\/team-comet-ml\/parameter-optimizations\/reports\/advanced-ml-parameter-optimization?utm_campaign=community-hyperparameter-optimization&amp;utm_source=blog-cta&amp;utm_medium=heartbeat\" target=\"_blank\" rel=\"noopener ugc nofollow\">A guide to using an iterative strategy for hyperparameter optimization.<\/a><\/strong><\/p>\n<\/div>\n\n\n\n<div class=\"ir is it iu iv\">\n<h2 id=\"d35e\" class=\"nb nc 
iy bm nd ne pm ng nh ni pn nk nl ke po kf nn kh pp ki np kk pq kl nr ns ga\">Optimization methods<\/h2>\n<p id=\"3752\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\">Comet\u2019s&nbsp;<code class=\"fp mu mv mw mx b\">Optimizer<\/code>&nbsp;focuses on three popular algorithms you could use for&nbsp;<a class=\"au lc\" href=\"https:\/\/towardsdatascience.com\/hyperparameter-optimization-for-beginners-32e3ab07b09c\" target=\"_blank\" rel=\"noopener\">hyperparameter optimization<\/a>. Let\u2019s dive deeper into each approach:<\/p>\n<h3 id=\"d6b2\" class=\"om nc iy bm nd on oo op nh oq or os nl ll ot ou nn lp ov ow np lt ox oy nr oz ga\">#1 Bayes<\/h3>\n<p id=\"d454\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\">Comet&nbsp;<a class=\"au lc\" href=\"https:\/\/www.comet.com\/docs\/python-sdk\/introduction-optimizer\/#optimizer-configuration\" target=\"_blank\" rel=\"noopener ugc nofollow\">documentation<\/a>&nbsp;states \u201c<em class=\"mb\">the Bayes algorithm may be the best choice for most of your Optimizer uses.<\/em>\u201d<\/p>\n<blockquote class=\"ly lz ma\"><p id=\"6db8\" class=\"ld le mb bm b lf lg jz lh li lj kc lk mc lm ln lo md lq lr ls me lu lv lw lx ir ga\" data-selectable-paragraph=\"\">\u201cBayesian optimization has been shown to obtain better results in fewer evaluations compared to grid search and random search, due to the ability to reason about the quality of experiments before they are run.\u201d \u2014&nbsp;<a class=\"au lc\" href=\"https:\/\/en.wikipedia.org\/wiki\/Bayesian_optimization\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"bm my\">Wikipedia<\/strong><\/a><\/p><\/blockquote>\n<p id=\"ab36\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" 
data-selectable-paragraph=\"\">Bayes optimization works by iteratively evaluating a promising hyperparameter configuration based on the current model, then updating it. The main aim of the technique is to gather observations that reveal as much information as possible about the location of the optimum.<\/p>\n<p id=\"e032\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">To define the Bayes algorithm in Comet, we simply set the algorithm key to&nbsp;<code class=\"fp mu mv mw mx b\">\"bayes\u201d<\/code>. As mentioned earlier, each algorithm can be given a&nbsp;<code class=\"fp mu mv mw mx b\">spec<\/code>. For the Bayes algorithm, the&nbsp;<code class=\"fp mu mv mw mx b\">spec<\/code>&nbsp;parameters include:<\/p>\n<p id=\"480b\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">\u2192&nbsp;<strong class=\"bm my\">maxCombo<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit of parameter combinations to try (default 0, meaning to use 10 times the number of hyperparameters)<br>\n\u2192&nbsp;<strong class=\"bm my\">objective<\/strong>:&nbsp;<em class=\"mb\">string<\/em>, \u201cminimize\u201d or \u201cmaximize\u201d, for the objective metric (default \u201cminimize\u201d)<br>\n\u2192&nbsp;<strong class=\"bm my\">metric<\/strong>:&nbsp;<em class=\"mb\">string<\/em>, the metric name that you are logging and want to minimize\/maximize (default \u201closs\u201d)<br>\n\u2192&nbsp;<strong class=\"bm my\">minSampleSize<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the number of samples to help find appropriate grid ranges (default 100)<br>\n\u2192&nbsp;<strong class=\"bm my\">retryLimit<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit to try creating a unique parameter set before giving up (default 20)<br>\n\u2192&nbsp;<strong class=\"bm 
my\">retryAssignLimit<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit to re-assign non-completed experiments (default 0)<\/p>\n<pre class=\"ko kp kq kr gx pa bs pb\"><span id=\"ac81\" class=\"ga om nc iy mx b dm pc pd l pe\" data-selectable-paragraph=\"\">{\"algorithm\": \"bayes\",\n \"spec\": {\n    \"maxCombo\": 0,\n    \"objective\": \"minimize\",\n    \"metric\": \"loss\",\n    \"minSampleSize\": 100,\n    \"retryLimit\": 20,\n    \"retryAssignLimit\": 0,\n },\n \"trials\": 1,\n \"parameters\": {...},\n \"name\": \"My Optimizer Name\",\n}<\/span><\/pre>\n<h3 id=\"597b\" class=\"om nc iy bm nd on oo op nh oq or os nl ll ot ou nn lp ov ow np lt ox oy nr oz ga\">#2 Grid<\/h3>\n<p id=\"a5df\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\">Grid search is another popular hyperparameter optimization method. It is useful for performing a wide, initial search of a set of parameter values.<\/p>\n<p id=\"b7bc\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The algorithm works by exhaustively searching through a manual subset of specific values in the hyperparameter space of an algorithm. Comet\u2019s grid algorithm is slightly more flexible than many, as each time you run it, you will sample from the set of possible grids defined by the parameter space distribution. 
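retryAssignLimit">
<\/p>
<p>To make the idea of exhaustively searching a grid concrete, here is a small stand-alone illustration in plain Python (this is not how Comet implements it internally, and the parameter names and values are made up):<\/p>

```python
# Exhaustive grid enumeration: every combination of the listed values is tried.
from itertools import product

grid = {
    "n_estimators": [100, 200, 300],
    "criterion": ["gini", "entropy"],
}

# 3 values x 2 values = 6 candidate parameter sets
combinations = [dict(zip(grid, values)) for values in product(*grid.values())]

for combo in combinations:
    print(combo)  # e.g. {'n_estimators': 100, 'criterion': 'gini'}
```

<p>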
Unlike Bayes optimization, grid search does not use past experiments to inform future experiments.<\/p>\n<p id=\"74f9\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The following options can be configured in the&nbsp;<code class=\"fp mu mv mw mx b\">spec<\/code>&nbsp;when you opt to use&nbsp;<code class=\"fp mu mv mw mx b\">grid<\/code>&nbsp;search:<\/p>\n<p id=\"b400\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">\u2192&nbsp;<strong class=\"bm my\">randomize<\/strong>:&nbsp;<em class=\"mb\">boolean<\/em>, if True, then the grid is traversed randomly; otherwise it\u2019s traversed in order (default False)<br>\n\u2192&nbsp;<strong class=\"bm my\">maxCombo<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit of parameter combinations to try (default 0, meaning to use 10 times the number of hyperparameters)<br>\n\u2192&nbsp;<strong class=\"bm my\">metric<\/strong>:&nbsp;<em class=\"mb\">string<\/em>, the metric name that you are logging and want to minimize\/maximize (default \u201closs\u201d)<br>\n\u2192&nbsp;<strong class=\"bm my\">gridSize<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, when creating a grid, the number of bins per parameter (default 10)<br>\n\u2192&nbsp;<strong class=\"bm my\">minSampleSize<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the number of samples to help find appropriate grid ranges (default 100)<br>\n\u2192&nbsp;<strong class=\"bm my\">retryLimit<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit to try creating a unique parameter set before giving up (default 20)<br>\n\u2192&nbsp;<strong class=\"bm my\">retryAssignLimit<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit to re-assign non-completed experiments (default 0)<\/p>\n<pre class=\"ko kp kq kr gx pa bs pb\"><span id=\"48ad\" class=\"ga om nc iy mx b dm 
pc pd l pe\" data-selectable-paragraph=\"\">{\"algorithm\": \"grid\",\n \"spec\": {\n    \"randomize\": True,\n    \"maxCombo\": 0,\n    \"metric\": \"loss\",\n    \"gridSize\": 10,\n    \"minSampleSize\": 100,\n    \"retryLimit\": 20,\n    \"retryAssignLimit\": 0,\n },\n \"trials\": 1,\n \"parameters\": {...},\n \"name\": \"My Optimizer Name\",\n}<\/span><\/pre>\n<h3 id=\"8b0b\" class=\"om nc iy bm nd on oo op nh oq or os nl ll ot ou nn lp ov ow np lt ox oy nr oz ga\">Random<\/h3>\n<p id=\"24b6\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\">Random search offers slightly more flexibility than grid search. Instead of exhaustively iterating through all possible combinations like in the grid search algorithm, random search selects combinations at random from the possible parameter values until the run is explicitly stopped or the max combinations are met.<\/p>\n<p id=\"38c4\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Similar to grid search, the random algorithm does not use past experiments to inform future experiments, but&nbsp;<strong class=\"bm my\">when only a small number of hyperparameters have an effect on the final model performance, the random search can outperform grid search.<\/strong><\/p>\n<p id=\"2bed\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The \u201crandom\u201d search algorithm uses the following options for its&nbsp;<code class=\"fp mu mv mw mx b\">spec<\/code>:<\/p>\n<p id=\"a1f9\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">\u2192&nbsp;<strong class=\"bm my\">maxCombo<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit of parameter 
combinations to try (default 0, meaning to use 10 times the number of hyperparameters)<br>\n\u2192&nbsp;<strong class=\"bm my\">metric<\/strong>:&nbsp;<em class=\"mb\">string<\/em>, the metric name that you are logging and want to minimize\/maximize (default \u201closs\u201d)<br>\n\u2192&nbsp;<strong class=\"bm my\">gridSize<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, when creating a grid, the number of bins per parameter (default 10)<br>\n\u2192&nbsp;<strong class=\"bm my\">minSampleSize<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the number of samples to help find appropriate grid ranges (default 100)<br>\n\u2192&nbsp;<strong class=\"bm my\">retryLimit<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit to try creating a unique parameter set before giving up (default 20)<br>\n\u2192&nbsp;<strong class=\"bm my\">retryAssignLimit<\/strong>:&nbsp;<em class=\"mb\">integer<\/em>, the limit to re-assign non-completed experiments (default 0)<\/p>\n<pre class=\"ko kp kq kr gx pa bs pb\"><span id=\"c442\" class=\"ga om nc iy mx b dm pc pd l pe\" data-selectable-paragraph=\"\">{\"algorithm\": \"random\",\n \"spec\": {\n    \"maxCombo\": 100,\n    \"metric\": \"loss\",\n    \"gridSize\": 10,\n    \"minSampleSize\": 100,\n    \"retryLimit\": 20,\n    \"retryAssignLimit\": 0,\n },\n \"trials\": 1,\n \"parameters\": {...},\n \"name\": \"My Optimizer Name\",\n}<\/span><\/pre>\n<p id=\"8f40\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Now, let\u2019s see an end-to-end example.<\/p>\n<h2 id=\"dc95\" class=\"nb nc iy bm nd ne nf ng nh ni nj nk nl ke nm kf nn kh no ki np kk nq kl nr ns ga\">End-to-end example<\/h2>\n<p id=\"e98e\" class=\"pw-post-body-paragraph ld le iy bm b lf nt jz lh li nu kc lk ll nv ln lo lp nw lr ls lt nx lv lw lx ir ga\" data-selectable-paragraph=\"\">We will be using a generated binary classification problem from the&nbsp;<code 
class=\"fp mu mv mw mx b\">make_classification<\/code>&nbsp;function in scikit-learn datasets. The data will consist of 5000 samples and 20 features, of which 3 are informative.<\/p>\n<p id=\"8ccd\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">We then split this into train and test sets so we have a way of evaluating the performance of our model on unseen instances.<\/p>\n<pre>import comet_ml\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n# create a dataset\nX, y = make_classification(n_samples=5000, n_informative=3, random_state=25)\n\n# split into train and test\nX_train, X_test, y_train, y_test = train_test_split(X,y,shuffle=True,test_size=0.25,random_state=25)\n\n# visualize data\nplt.subplots(figsize=(8, 5))\nplt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=plt.cm.Spectral)\nplt.show()<\/pre>\n<figure class=\"ko kp kq kr gx ks gl gm paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce kx ky c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/815\/1*ca10O4MLY9xaJFpH6zPdrA.png\" alt=\"\" width=\"543\" height=\"336\"><\/figure><div class=\"gl gm ps\"><picture><source srcset=\"https:\/\/miro.medium.com\/max\/640\/1*ca10O4MLY9xaJFpH6zPdrA.png 640w, https:\/\/miro.medium.com\/max\/720\/1*ca10O4MLY9xaJFpH6zPdrA.png 720w, https:\/\/miro.medium.com\/max\/750\/1*ca10O4MLY9xaJFpH6zPdrA.png 750w, https:\/\/miro.medium.com\/max\/786\/1*ca10O4MLY9xaJFpH6zPdrA.png 786w, https:\/\/miro.medium.com\/max\/828\/1*ca10O4MLY9xaJFpH6zPdrA.png 828w, https:\/\/miro.medium.com\/max\/1100\/1*ca10O4MLY9xaJFpH6zPdrA.png 1100w, https:\/\/miro.medium.com\/max\/1086\/1*ca10O4MLY9xaJFpH6zPdrA.png 1086w\" sizes=\"(min-resolution: 4dppx) and 
(max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 543px\" data-testid=\"og\"><\/picture><\/div>\n<\/figure>\n<p style=\"text-align: center;\" data-selectable-paragraph=\"\">A plot of the data we want to classify<\/p>\n<p id=\"eae0\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">For this example, we will be using the bayes algorithm. To do this, our algorithm key is set to&nbsp;<code class=\"fp mu mv mw mx b\">\"bayes\"<\/code> in the configuration dictionary as follows:<\/p>\n<pre># defining the configuration dictionary\nconfig_dict = {\"algorithm\": \"bayes\",\n               \"spec\": spec,\n               \"parameters\": model_params,\n               \"name\": \"Bayes Optimization\",\n               \"trials\": 1}<\/pre>\n<p id=\"047a\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">The&nbsp;<code class=\"fp mu mv mw mx b\">spec<\/code> defines the specifications of the bayes algorithm. 
We are going to test 20 different combinations to see which one minimizes the model\u2019s loss.<\/p>\n<pre># setting the spec for the Bayes algorithm\nspec = {\"maxCombo\": 20,\n        \"objective\": \"minimize\",\n        \"metric\": \"loss\",\n        \"minSampleSize\": 500,\n        \"retryLimit\": 20,\n        \"retryAssignLimit\": 0}<\/pre>\n<p id=\"864e\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">To train our model, we will be using a Random Forest classifier. There are several hyperparameters that we could tune, but for this example, we will only be tuning the number of estimators used to build the forest, the criterion used to measure the quality of each split, and the minimum number of samples required to be at a leaf node.<\/p>\n<pre># setting the parameters we are tuning\nmodel_params = {\"n_estimators\": {\n                      \"type\": \"integer\",\n                      \"scaling_type\": \"uniform\",\n                      \"min\": 100,\n                      \"max\": 300},\n                \"criterion\": {\n                      \"type\": \"categorical\",\n                      \"values\": [\"gini\", \"entropy\"]},\n                \"min_samples_leaf\": {\n                      \"type\": \"discrete\",\n                      \"values\": [1, 3, 5, 7, 9]}\n}<\/pre>\n<p id=\"37fb\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Next, we initialize the&nbsp;<code class=\"fp mu mv mw mx b\">Optimizer<\/code>. 
To access your Comet dashboard, you\u2019ll need to provide an&nbsp;<code class=\"fp mu mv mw mx b\">api_key<\/code>,&nbsp;<a class=\"au lc\" href=\"https:\/\/www.comet.com\/docs\/rest-api\/getting-started\/#obtaining-your-api-key\" target=\"_blank\" rel=\"noopener ugc nofollow\">which can be accessed<\/a>&nbsp;through your Comet profile and account settings. You then assign the&nbsp;<code class=\"fp mu mv mw mx b\">config_dict<\/code>&nbsp;variable to the config parameter. I\u2019ve also provided a&nbsp;<code class=\"fp mu mv mw mx b\">project_name<\/code>&nbsp;and&nbsp;<code class=\"fp mu mv mw mx b\">workspace<\/code> so the experiments are saved to a project I created in my dashboard.<\/p>\n<pre># initializing the comet ml optimizer\nopt = comet_ml.Optimizer(api_key=\"YOUR-API-KEY\",\n                         config=config_dict,\n                         project_name=\"testing-hyperparameter-approaches\",\n                         workspace=\"kurtispykes\")<\/pre>\n<blockquote class=\"ly lz ma\"><p id=\"dc8f\" class=\"ld le mb bm b lf lg jz lh li lj kc lk mc lm ln lo md lq lr ls me lu lv lw lx ir ga\" data-selectable-paragraph=\"\"><strong class=\"bm my\">Note<\/strong>: Never share API keys. Comet allows you to set your API key as a config variable.&nbsp;<a class=\"au lc\" href=\"https:\/\/www.comet.com\/docs\/python-sdk\/advanced\/#comet-configuration-variables\" target=\"_blank\" rel=\"noopener ugc nofollow\">You can learn more about using this method here<\/a>.<\/p><\/blockquote>\n<p id=\"b666\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">To begin, we loop through the experiments with the&nbsp;<code class=\"fp mu mv mw mx b\"><a class=\"au lc\" href=\"https:\/\/www.comet.com\/docs\/r-sdk\/get_experiments\/\" target=\"_blank\" rel=\"noopener ugc nofollow\">get_experiments()<\/a><\/code>&nbsp;method. 
For each experiment, we define a Random Forest instance and use the&nbsp;<code class=\"fp mu mv mw mx b\">get_parameter()<\/code>&nbsp;method to get the parameter values for the experiment being run.<\/p>\n<p id=\"a485\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">We then train the model and make predictions on the test set. To demonstrate more of Comet\u2019s functionality, I\u2019ve saved the&nbsp;<code class=\"fp mu mv mw mx b\">random_state<\/code>&nbsp;value, the&nbsp;<code class=\"fp mu mv mw mx b\">accuracy<\/code>&nbsp;of the model on the test data, and a confusion matrix to get a better understanding of how the model performed.<\/p>\n<p id=\"1e85\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">Once the run is completed, we end the experiment and begin the next one until we\u2019ve reached the&nbsp;<code class=\"fp mu mv mw mx b\">maxCombo<\/code>&nbsp;limit.<\/p>\n<\/div>\n\n\n\n<div class=\"ir is it iu iv\">\n<pre>for experiment in opt.get_experiments():\n    # initializing random forest\n    # setting the parameters to be optimized with get_parameter\n    random_forest = RandomForestClassifier(\n        n_estimators=experiment.get_parameter(\"n_estimators\"),\n        criterion=experiment.get_parameter(\"criterion\"),\n        min_samples_leaf=experiment.get_parameter(\"min_samples_leaf\"),\n        random_state=25)\n\n    # training the model and making predictions\n    random_forest.fit(X_train, y_train)\n    y_hat = random_forest.predict(X_test)\n\n    # logging the random state and accuracy of each model\n    experiment.log_parameter(\"random_state\", 25)\n    experiment.log_metric(\"accuracy\", accuracy_score(y_test, y_hat))\n    experiment.log_confusion_matrix(y_test, y_hat)\n\n    experiment.end()<\/pre>\n<p id=\"5ca7\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln 
lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">And that\u2019s all.<\/p>\n<p id=\"5918\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">We can view the completed experiments from our dashboard:<\/p>\n<figure class=\"ko kp kq kr gx ks gl gm paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"ce kx ky c aligncenter\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/max\/900\/1*FYhprjs0YfRpiUE2HPFbsg.gif\" alt=\"\" width=\"600\" height=\"338\"><\/figure><div class=\"gl gm pt\"><picture><source srcset=\"https:\/\/miro.medium.com\/max\/640\/1*FYhprjs0YfRpiUE2HPFbsg.gif 640w, https:\/\/miro.medium.com\/max\/720\/1*FYhprjs0YfRpiUE2HPFbsg.gif 720w, https:\/\/miro.medium.com\/max\/750\/1*FYhprjs0YfRpiUE2HPFbsg.gif 750w, https:\/\/miro.medium.com\/max\/786\/1*FYhprjs0YfRpiUE2HPFbsg.gif 786w, https:\/\/miro.medium.com\/max\/828\/1*FYhprjs0YfRpiUE2HPFbsg.gif 828w, https:\/\/miro.medium.com\/max\/1100\/1*FYhprjs0YfRpiUE2HPFbsg.gif 1100w, https:\/\/miro.medium.com\/max\/1200\/1*FYhprjs0YfRpiUE2HPFbsg.gif 1200w\" sizes=\"(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 600px\" data-testid=\"og\"><\/picture><\/div>\n<\/figure>\n<p style=\"text-align: center;\" data-selectable-paragraph=\"\">The experiment dashboard<\/p>\n<p id=\"84f0\" class=\"pw-post-body-paragraph ld le iy bm b lf lg jz lh li lj kc lk ll lm ln lo lp lq lr ls lt lu lv lw lx ir ga\" data-selectable-paragraph=\"\">By selecting an 
experiment, we can view different charts, code, hyperparameters, metrics, etc. This makes it easy to reproduce an experiment at any time in the future.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>We face optimization problems all the time in our daily life: you don\u2019t merely pick up a random pair of jeans and head to the checkout when you\u2019re doing clothes shopping \u2014 hopefully not, I should say. There\u2019s a process to it: You may want a specific brand of jeans. Maybe dark-wash jeans are your [&hellip;]<\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[6,9],"tags":[],"coauthors":[138],"class_list":["post-4148","post","type-post","status-publish","format-standard","hentry","category-machine-learning","category-product"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Hyperparameter Optimization With Comet<\/title>\n<meta name=\"description\" content=\"In this article, learn how to do hyperparameter optimization with the Comet platform.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Hyperparameter Optimization With Comet\" \/>\n<meta property=\"og:description\" content=\"In this article, learn how to do hyperparameter optimization with the Comet platform.\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2022-10-20T21:42:47+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:16:57+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/max\/815\/1*ca10O4MLY9xaJFpH6zPdrA.png\" \/>\n<meta name=\"author\" content=\"Kurtis Pykes\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kurtis Pykes\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Hyperparameter Optimization With Comet","description":"In this article, learn how to do hyperparameter optimization with the Comet platform.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/","og_locale":"en_US","og_type":"article","og_title":"Hyperparameter Optimization With Comet","og_description":"In this article, learn how to do hyperparameter optimization with the Comet platform.","og_url":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2022-10-20T21:42:47+00:00","article_modified_time":"2025-04-24T17:16:57+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/max\/815\/1*ca10O4MLY9xaJFpH6zPdrA.png","type":"","width":"","height":""}],"author":"Kurtis Pykes","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Kurtis Pykes","Est. 
reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/"},"author":{"name":"Team Comet Digital","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/6266601170c60a7a82b3e0043fbe8ddf"},"headline":"Hyperparameter Optimization With Comet","datePublished":"2022-10-20T21:42:47+00:00","dateModified":"2025-04-24T17:16:57+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/"},"wordCount":2131,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/max\/815\/1*ca10O4MLY9xaJFpH6zPdrA.png","articleSection":["Machine Learning","Product"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/","url":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/","name":"Hyperparameter Optimization With Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/max\/815\/1*ca10O4MLY9xaJFpH6zPdrA.png","datePublished":"2022-10-20T21:42:47+00:00","dateModified":"2025-04-24T17:16:57+00:00","description":"In this article, learn how to do hyperparameter optimization with the Comet 
platform.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/#primaryimage","url":"https:\/\/miro.medium.com\/max\/815\/1*ca10O4MLY9xaJFpH6zPdrA.png","contentUrl":"https:\/\/miro.medium.com\/max\/815\/1*ca10O4MLY9xaJFpH6zPdrA.png"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/hyperparameter-optimization-with-comet\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Hyperparameter Optimization With Comet"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, 
Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/6266601170c60a7a82b3e0043fbe8ddf","name":"Team Comet Digital","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/4f0c0a8cc7c0e87c636ff6a420a6647c","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/Screen-Shot-2023-08-12-at-8.58.50-AM-96x96.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/Screen-Shot-2023-08-12-at-8.58.50-AM-96x96.png","caption":"Team Comet Digital"},"sameAs":["https:\/\/www.comet.ml\/"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/teamcometdigital\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4148","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=4148"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4148\/revisions"}],"predecessor-version":[{"id":15667,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/4148\/revisions\/15667"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=4148"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=4148"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=4148"},{"taxonomy":"author","embeddable":true
,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=4148"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}