Category: Comet Community Hub

  • Vincent Koc

    March 27, 2025
    Academic Research, Comet Community Hub

    LLM Evaluation Complexities for Non-Latin Languages

    Large language models (LLMs) have revolutionized natural language processing, yet most development and evaluation efforts have historically centered around Latin-script…

  • Abby Morgan

    March 26, 2025
    Comet Community Hub, LLMOps, Tutorials

    SelfCheckGPT for LLM Evaluation

    Detecting hallucinations in language models is challenging. There are three general approaches. The problem with many LLM-as-a-Judge techniques is that…

  • Abby Morgan

    February 24, 2025
    Comet Community Hub, LLMOps, Tutorials

    LLM Juries for Evaluation

    Evaluating the correctness of generated responses is an inherently challenging task. LLM-as-a-Judge evaluators have gained popularity for their ability to…

  • Stéphan André

    February 5, 2025
    Comet Community Hub, LLMOps

    LLM Monitoring & Maintenance in Production Applications

    Generative AI has become a transformative force, revolutionizing how businesses engage with users through chatbots, content creation, and personalized recommendations…

  • Abby Morgan

    January 28, 2025
    Comet Community Hub, LLMOps, Machine Learning, Product, Tutorials

    G-Eval for LLM Evaluation

    LLM-as-a-judge evaluators have gained widespread adoption due to their flexibility, scalability, and close alignment with human judgment. They excel at…

  • Abby Morgan

    December 19, 2024
    Comet Community Hub, LLMOps, Tutorials

    BERTScore For LLM Evaluation

    BERTScore represents a pivotal shift in LLM evaluation, moving beyond traditional heuristic-based metrics like BLEU and ROUGE to a…

  • Claire Longo

    December 9, 2024
    Comet Community Hub, LLMOps, Tutorials

    Building ClaireBot, an AI Personal Stylist Chatbot

    Follow the evolution of my personal AI project and discover how to integrate image analysis, LLMs, and LLM-as-a-judge evaluation…

  • Abby Morgan

    November 21, 2024
    Comet Community Hub, LLMOps, Tutorials

    Perplexity for LLM Evaluation

    Perplexity is, historically speaking, one of the “standard” evaluation metrics for language models. And while recent years have seen a…

  • Gideon Mendels | Jacques Verre

    September 16, 2024
    Comet Community Hub, LLMOps, Product

    Meet Opik: Your New Tool to Evaluate, Test, and Monitor LLM Applications

    Today, we’re thrilled to introduce Opik – an open-source, end-to-end LLM development platform that provides the observability tools you need…

  • Fabrício Ceolin

    August 30, 2024
    Comet Community Hub, LLMOps, Machine Learning, Tutorials

    Building a Low-Cost Local LLM Server to Run 70 Billion Parameter Models

    A guest post from Fabrício Ceolin, DevOps Engineer at Comet. Inspired by the growing demand for large-scale language models, Fabrício…

