
Visualize your Object Detection Models with Jacques Verré

Whether you’re comparing model performance during a daily standup or onboarding a new teammate, you’ll need to log your training runs with an experiment management tool like Comet. In this session, Jacques Verré walks you through reviewing a YOLOv5 model in Comet.
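The session itself is the best reference, but to give a sense of the kind of logging involved, here is a minimal sketch using the comet_ml Python SDK. The project name and metric values below are placeholders, not the YOLOv5 integration demonstrated in the session:

from comet_ml import Experiment

# Assumes COMET_API_KEY is set in the environment; the project name is a placeholder.
experiment = Experiment(project_name="yolov5-review")

# Log the hyperparameters for this training run.
experiment.log_parameters({"epochs": 3, "batch_size": 16, "img_size": 640})

for epoch in range(3):
    # In a real run these values would come from the training and validation loops.
    experiment.log_metric("train_loss", 1.0 / (epoch + 1), step=epoch)
    experiment.log_metric("mAP_0.5", 0.2 * (epoch + 1), step=epoch)

experiment.end()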

2021 ML Practitioner Survey

AI is encountering another hurdle to delivering value: friction within and between teams. A survey of 508 machine learning practitioners, including data scientists and engineers, found challenges related to people, process, and tools. This friction can slow ML development and delay or halt model deployment to production.

Convergence 2022 – ML Highlights from 2021 and Lessons for 2022

March 2, 2022
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, was the keynote speaker at Comet's Convergence 2022 event, where he summarized 15 ML highlights of 2021 and suggested lessons for 2022 and beyond.

MLOps System Design for Development and Production

October 28, 2021
Comet CEO Gideon Mendels discusses system design principles for managing development-production feedback loops and shares industry case studies where these principles are applied to production ML systems.

More than a Statistic: Diversity in Data Science and Machine Learning

April 12, 2022
Diversity has become an HR catchphrase, but what does it really mean to be from an underrepresented group in tech? This webinar explores the ongoing discussions about diversity, equity, and inclusion in ML and data science.

Overcoming Machine Learning Development Challenges

April 27, 2022
In this webinar, Gideon Mendels shares the results of Comet’s 2021 ML Practitioner Survey and talks to Ancestry's Stanley Fujimoto about overcoming ML development challenges.

Reproducibility in ML Development

May 24, 2022
A lack of reproducibility can be a barrier to ensuring positive outcomes and scaling great work. Learn about four aspects of reproducibility in ML and a five-point checklist for ensuring ML reproducibility across your organization.
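The webinar covers the full checklist; one concrete ingredient worth showing is deterministic seeding, a common first step toward repeatable runs. A minimal sketch, assuming a NumPy/PyTorch stack (not taken from the webinar):

import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # Seed every RNG the training pipeline touches so runs are repeatable.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)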

Standardizing the Experiment #3: Understanding the Data

In this report, we perform exploratory data analysis on the HackerNews dataset. Our profiling script builds visualizations, extracts summary profiles, and logs samples of the data for reference. We investigate the relationship between our initial set of features and our target.
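The report's profiling script is not reproduced here, but a minimal sketch of the same idea (summary profiles, one visualization, and logged data samples) could look like the following; the file path and column names are assumptions, not the report's actual schema:

import pandas as pd
from comet_ml import Experiment

experiment = Experiment(project_name="hackernews-eda")  # placeholder project name

df = pd.read_csv("hackernews.csv")  # hypothetical path and schema

# Summary profile of the features and the target.
experiment.log_table("summary_stats.csv", df.describe())

# A sample of raw rows, logged for later reference.
experiment.log_table("sample_rows.csv", df.sample(100, random_state=0))

# One quick visualization: how a feature relates to the target.
ax = df.plot.scatter(x="title_length", y="score")  # assumed column names
experiment.log_figure("title_length_vs_score", ax.get_figure())

experiment.end()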

Standardizing the Experiment #2: Baseline Models (Performance Prediction)

As part of the Standardizing the Experiment series, we present a report on a baseline post-performance prediction model for the HackerNews dataset.
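The report itself contains the actual baseline; purely for illustration, a minimal scikit-learn sketch of a post-performance baseline might look like this, where the file path, feature columns, and target are assumptions rather than the report's setup:

import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("hackernews.csv")  # hypothetical path and schema
X = df[["title_length", "hour_posted"]]  # assumed feature columns
y = df["score"]  # assumed target: post score as a proxy for performance

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple linear baseline that richer models must beat.
model = Ridge().fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))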