
Best practices for Experiment Management

Welcome to the Comet Best Practices guide for Experiment Management!

This document gathers valuable tips and recommended practices for Comet users. Dive in to find inspiration and strategies to enhance your experience and fully unlock the potential of the platform.

Note

🚀 Share your best practices with us, and help us elevate this guide to the next level! Reach out to us on our Slack channel or tag us on LinkedIn to join the conversation. We value your input!

Project Management

  • Set up a meaningful naming convention for your experiments, projects, and workspaces. We recommend using a prefix wrapped in underscores, e.g. _<your-naming-convention-here>_.

  • Keep your projects tidy by hiding or archiving redundant experiments. Be careful when deleting experiments: the operation is not reversible, and you may want the full lineage in the future!

Comet SDK

  • Ensure that the built-in integrations are loaded correctly by importing the Comet SDK at the top of your script, before any ML framework (see the sketch after this list).

  • Choose between different SDK classes as follows:

    • Use the Experiment class for logging new live experiments to Comet.
    • Use the APIExperiment class for retrieving existing experiments and logging new data to them.
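
As a minimal sketch of both points, assuming the comet_ml package is installed and a COMET_API_KEY is configured (the workspace, project, and experiment names below are placeholders):

```python
# Import comet_ml before any ML framework so the built-in
# auto-logging integrations attach correctly.
import comet_ml
import torch  # hypothetical framework import; could be tensorflow, sklearn, etc.

# New live run: use the Experiment class.
experiment = comet_ml.Experiment(project_name="my-project")
experiment.log_metric("accuracy", 0.91)
experiment.end()

# Existing run: retrieve it as an APIExperiment and log new data to it.
api = comet_ml.API()
api_experiment = api.get_experiment("my-workspace", "my-project", "my-experiment")
api_experiment.log_other("reviewed-by", "team-a")
```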

During experimentation

  • Create experiments following your organization's naming convention by using the experiment.set_name() method (see the first sketch after this list).

  • Must-have experiment information to track includes:

    • The model asset with experiment.log_model().

      Useful to i) keep track of the mapping between your model outputs and the training runs that created them, and ii) easily deploy the model to the next stage if desired.

    • A dataset artifact created via the Artifact class.

      Useful to i) ensure the reproducibility of your models, and ii) keep track of broader project lineage (i.e., which models were trained on which dataset versions).

    • Prediction samples such as image data, point clouds, and tabular data.

      Useful to support the analysis of experiment runs from the Comet UI.

  • Run each tuning trial in a separate experiment, whether you are using Comet Optimizer or running a custom tuning workflow. This ensures that you can easily analyze and compare the different hyperparameter selections (see the Optimizer sketch below).
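
Putting the naming and logging practices above together, a minimal sketch (the project, run, and file names are placeholders):

```python
import comet_ml

experiment = comet_ml.Experiment(project_name="my-project")  # assumes COMET_API_KEY is configured

# Name the run according to your organization's convention.
experiment.set_name("_team-a_baseline-run")

# ... train your model here ...

# Log the trained model asset so it stays mapped to this run.
experiment.log_model("baseline-model", "./checkpoints/model.pkl")

# Log prediction samples to support analysis in the Comet UI.
experiment.log_image("./samples/prediction_0.png", name="prediction_0")

experiment.end()
```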
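
Dataset versioning with the Artifact class might look like the following sketch (the artifact and file names are hypothetical):

```python
import comet_ml
from comet_ml import Artifact

experiment = comet_ml.Experiment(project_name="my-project")

# Create a versioned dataset Artifact and attach it to this run.
artifact = Artifact(name="training-data", artifact_type="dataset")
artifact.add("./data/train.csv")
experiment.log_artifact(artifact)

experiment.end()
```

Later runs can then fetch the same dataset version with experiment.get_artifact("training-data"), preserving the lineage between models and data.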
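
Finally, a sketch of one experiment per tuning trial with Comet Optimizer, assuming a hypothetical search space and train() function:

```python
from comet_ml import Optimizer

# Hypothetical single-parameter search space.
config = {
    "algorithm": "bayes",
    "parameters": {
        "learning_rate": {"type": "float", "min": 0.0001, "max": 0.1},
    },
    "spec": {"metric": "loss", "objective": "minimize"},
}

opt = Optimizer(config)

# Each trial is its own Experiment, so runs stay individually comparable.
for experiment in opt.get_experiments(project_name="my-project"):
    learning_rate = experiment.get_parameter("learning_rate")
    loss = train(learning_rate)  # hypothetical training function
    experiment.log_metric("loss", loss)
    experiment.end()
```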

Comet UI

  • Create custom visualizations for any of your experiment metadata and assets by using Python panels, which let you define tailored analysis flows.

  • Share experiment results with users outside of Comet by creating a Report.
