
Category: LLMOps

  • Abby Morgan

    November 21, 2024
    Comet Community Hub, LLMOps, Tutorials

    Perplexity for LLM Evaluation

    Perplexity is, historically speaking, one of the “standard” evaluation metrics for language models. And while recent years have seen a…

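    The excerpt above calls perplexity a “standard” evaluation metric for language models. As a quick illustration (not from the article itself), perplexity is the exponentiated average negative log-likelihood of the tokens a model assigns — a minimal sketch in plain Python:

    ```python
    import math

    def perplexity(token_logprobs):
        """Perplexity = exp of the mean negative log-likelihood over tokens."""
        nll = -sum(token_logprobs) / len(token_logprobs)
        return math.exp(nll)

    # Toy example: four tokens, each assigned probability 0.25 by the model.
    # Mean NLL = -log(0.25) = log(4), so perplexity = 4 -- equivalent to a
    # uniform choice among 4 tokens at every step.
    logprobs = [math.log(0.25)] * 4
    print(perplexity(logprobs))
    ```

    Lower perplexity means the model assigned higher probability to the observed text; a perplexity of N behaves like guessing uniformly among N tokens.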
  • Siddharth Mehta

    October 8, 2024
    LLMOps, Product, Tutorials

    OpenAI Evals: Log Datasets & Evaluate LLM Performance with Opik

    OpenAI’s Python API is quickly becoming one of the most-downloaded Python packages. With an easy-to-use SDK and access…

  • Gideon Mendels | Jacques Verre

    September 16, 2024
    Comet Community Hub, LLMOps, Product

    Meet Opik: Your New Tool to Evaluate, Test, and Monitor LLM Applications

    Today, we’re thrilled to introduce Opik – an open-source, end-to-end LLM development platform that provides the observability tools you need…

  • Fabrício Ceolin

    August 30, 2024
    Comet Community Hub, LLMOps, Machine Learning, Tutorials

    Building a Low-Cost Local LLM Server to Run 70 Billion Parameter Models

    A guest post from Fabrício Ceolin, DevOps Engineer at Comet. Inspired by the growing demand for large-scale language models, Fabrício…

  • Paul Iusztin | Decoding ML

    July 31, 2024
    Comet Community Hub, LLMOps, Tutorials

    The Ultimate Prompt Monitoring Pipeline

    Welcome to Lesson 10 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn how…

  • Paul Iusztin | Decoding ML

    July 23, 2024
    LLMOps, Machine Learning, Tutorials

    Beyond Proof of Concept: Building RAG Systems That Scale

    Welcome to Lesson 9 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn how to use…

  • Paul Iusztin | Decoding ML

    July 9, 2024
    LLMOps, Machine Learning, Tutorials

    The Engineer’s Framework for LLM & RAG Evaluation

    Welcome to Lesson 8 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn how to use…

  • Paul Iusztin | Decoding ML

    May 20, 2024
    LLMOps, Machine Learning, Tutorials

    Turning Raw Data Into Fine-Tuning Datasets

    Welcome to Lesson 6 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn how to use…

  • Paul Iusztin | Decoding ML

    May 10, 2024
    LLMOps, Machine Learning, Tutorials

    The 4 Advanced RAG Algorithms You Must Know to Implement

    Welcome to Lesson 5 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn how to use LLMs,…

  • Paul Iusztin | Decoding ML

    April 24, 2024
    LLMOps, Machine Learning, Tutorials

    SOTA Python Streaming Pipelines for Fine-tuning LLMs and RAG – in Real-Time!

    Welcome to Lesson 4 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn…

Page 3 of 9

©2025 Comet ML, Inc. – All Rights Reserved
