Running the Comet Optimizer in parallel

When tuning hyperparameters, it is common to speed up the search by running it in parallel. With the Comet Optimizer, this is easily achieved by launching the search script with comet optimize.

To run the hyperparameter search in parallel, pass the -j parameter to the comet optimize command-line tool. For example, to train two models in parallel:

comet optimize -j 2 training_script.py optimizer.config

Parallel execution example

To run the example above in parallel, we make two small changes:

  • Move the Optimizer configuration to a separate file called optimizer.config
  • Update the training script to read the optimizer config via sys.argv

The code becomes:

comet optimize -j 2 training_script.py optimizer.config
training_script.py
from comet_ml import Optimizer
import sys

# Next, create an optimizer, passing in the config:
# (You can leave out API_KEY if you already set it)
opt = Optimizer(sys.argv[1])

# define fit function here!

# Finally, get experiments, and train your models:
for experiment in opt.get_experiments(
        project_name="optimizer-search-02"):
    # Test the model
    loss = fit(experiment.get_parameter("x"))
    experiment.log_metric("loss", loss)
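The fit function is left for you to define. As a hypothetical stand-in, here is a minimal sketch: a quadratic objective whose minimum (at x = 3) lies inside the searched integer range 1–5, so the Bayes search has something meaningful to minimize. The name fit and the single parameter x match the placeholder in the script above; everything else is illustrative.

```python
def fit(x):
    # Toy objective for illustration only: a quadratic with its
    # minimum at x = 3, inside the searched integer range 1..5.
    # The optimizer should drive the reported loss toward 0.
    return (x - 3) ** 2
```

In a real script, fit would train a model with the suggested hyperparameter values and return the validation loss.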
optimizer.config
{
    # We pick the Bayes algorithm:
    "algorithm": "bayes",

    # Declare your hyperparameters in the Vizier-inspired format:
    "parameters": {
        "x": {"type": "integer", "min": 1, "max": 5},
    },

    # Declare what we will be optimizing, and how:
    "spec": {
        "metric": "loss",
        "objective": "minimize",
    },
}
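The same configuration can also be expressed as a Python dict, which Optimizer accepts directly in place of a file path. This is an alternative sketch for when you run the script on its own rather than through the comet optimize CLI:

```python
# The optimizer.config contents as an equivalent Python dict;
# Optimizer(config) accepts a dict directly, so no separate
# config file is needed when running the script standalone.
config = {
    "algorithm": "bayes",
    "parameters": {
        "x": {"type": "integer", "min": 1, "max": 5},
    },
    "spec": {
        "metric": "loss",
        "objective": "minimize",
    },
}
```

Keeping the configuration in a separate file, as shown above, is what lets comet optimize share one search across parallel workers.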
Apr. 29, 2024