February 18, 2022
Notes from the eighth session of a brand new Office Hours series, Seven Simple Steps to Standardizing the Experiment, with guests Dr. Doug Blank, Jacques Verre, Dhruv Nair, and Michael Cullan.
Welcome to another recap of the Comet ML Office Hours, powered by The Artists of Data Science! This week we’re covering Session 8 of our new series. This session took place February 23, 2022, and we were joined by Dr. Doug Blank, Jacques Verre, Dhruv Nair, and Michael Cullan, all of Comet. This was the final session in our Standardizing the Experiment series, and we’ll be back in March with Deep Learning for Structured Data.
As a reminder, we’d love to see any and all of you at these fifty-minute sessions. Keep an eye on our social media handles for how to sign up for the next series. As always, there’s a lot more in the full session (which you can find on our YouTube channel), so be sure to check it out, alongside clips from roundtables, webinars, and previous Office Hours.
As this was the last week in our eight-part series, it was fitting that we focused on the final step of the machine learning lifecycle: deployment. Of course, we know that the machine learning lifecycle is more a circle than a straight line, but to build on a model you need to see how it works in production.
To kick things off, host Harpreet Sahota asked the panelists what the differences were between ML deployment and more traditional software deployment. Hear what Head of Product Jacques Verre had to say in the clip below.
There are always challenges when working with machine learning for a business objective, especially when you’re on a small team or tasked with creating projects solo.
Most of the panelists have worked on small teams before, and they all agreed that small teams can make a big impact on the company as a whole. Jacques gave some tips on how small data science and machine learning teams can have the biggest impact in large-scale companies.
Because the iterative nature of ML doesn’t end after a model is deployed, the group talked about the importance of monitoring models in production. Along with a discussion of Comet’s Model Production Monitoring tool, Dhruv Nair mentioned the irritating habit of ML models failing silently.
This silent failing can mean that you’re not alerted until things go drastically wrong. But there are solutions. Learn more in the clip below.
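To make the idea of silent failure concrete: a deployed model keeps returning predictions even after the incoming data has drifted away from what it was trained on, so nothing crashes and no one is paged. A minimal sketch of one common mitigation is below: compare the live prediction distribution against a training-time baseline and raise an alert when it shifts too far. This is an illustrative example only, not Comet’s Model Production Monitoring implementation; the `drift_alert` function, its threshold, and the sample scores are all invented for the sketch.

```python
import statistics

def drift_alert(baseline, live, threshold=3.0):
    """Flag a possible silent failure when the live prediction
    distribution drifts from the training-time baseline.

    This toy check uses a z-score on the mean of the predictions;
    production monitoring tools track many more signals (feature
    drift, accuracy proxies, data quality, latency, etc.).
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(live) - mu) / sigma
    return z > threshold

# Baseline scores centered near 0.5; live scores have shifted upward,
# even though the model is still happily returning predictions.
baseline = [0.48, 0.52, 0.50, 0.47, 0.53, 0.49, 0.51, 0.50]
live = [0.71, 0.69, 0.73, 0.70, 0.72, 0.68, 0.74, 0.70]
print(drift_alert(baseline, live))  # the large shift trips the alert
```

The point of even a crude check like this is that the alert fires on the statistics of the traffic, not on exceptions, which is exactly the gap that lets ML models fail silently.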
As always, there’s more to be discussed and discovered. Check out Comet’s contributor-led publication Heartbeat as well as our YouTube, Twitter, and LinkedIn for more great information.
We run these virtual Office Hours every Wednesday, now at 3pm EST (New York, NY). The sessions are completely free to attend and participate in, and we’d love to see any and all of you there! We’ve got a great series planned and welcome questions for Harpreet or any of our guests via email to email@example.com.
Keep an eye out for the registration link to our next series coming soon!