- **LLM Juries for Evaluation**
  Evaluating the correctness of generated responses is an inherently challenging task. LLM-as-a-Judge evaluators have gained popularity for their ability to…
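  To make the jury idea concrete, here is a purely illustrative sketch of the pattern, with hypothetical `judge` callables standing in for real LLM calls (none of this is code from the post):

  ```python
  # Illustrative LLM-jury sketch: each "judge" is a stand-in for an LLM call
  # that returns a binary verdict; the jury aggregates by majority vote.
  from collections import Counter
  from typing import Callable, List

  Judge = Callable[[str, str], str]  # (question, answer) -> "pass" | "fail"

  def jury_verdict(judges: List[Judge], question: str, answer: str) -> str:
      votes = Counter(judge(question, answer) for judge in judges)
      return votes.most_common(1)[0][0]
  ```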
- **G-Eval for LLM Evaluation**
  LLM-as-a-judge evaluators have gained widespread adoption due to their flexibility, scalability, and close alignment with human judgment. They excel at…
- **Intro to LLM Observability: What to Monitor & How to Get Started**
  While LLM usage is soaring, productionizing an LLM-powered application or software product presents new and different challenges compared to traditional…
- **BERTScore For LLM Evaluation**
  BERTScore represents a pivotal shift in LLM evaluation, moving beyond traditional heuristic-based metrics like BLEU and ROUGE to a…
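  As a quick illustration of how the metric is typically computed in practice, a minimal sketch assuming the open-source `bert-score` package (illustrative only, not code from the post):

  ```python
  # Minimal BERTScore sketch using the open-source `bert-score` package
  # (pip install bert-score).
  from bert_score import score

  candidates = ["The model answered the question correctly."]
  references = ["The model gave a correct answer to the question."]

  # Returns precision, recall, and F1 as tensors, one value per pair.
  P, R, F1 = score(candidates, references, lang="en", verbose=False)
  print(f"BERTScore F1: {F1.mean().item():.4f}")
  ```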
- **Building ClaireBot, an AI Personal Stylist Chatbot**
  Follow the evolution of my personal AI project and discover how to integrate image analysis, LLM models, and LLM-as-a-judge evaluation…
- **Perplexity for LLM Evaluation**
  Perplexity is, historically speaking, one of the “standard” evaluation metrics for language models. And while recent years have seen a…
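  For reference, the standard definition: for a tokenized sequence $x_1, \dots, x_N$, perplexity is the exponentiated average negative log-likelihood the model assigns to the sequence:

  ```latex
  \mathrm{PPL}(x) = \exp\left( -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\left(x_i \mid x_{<i}\right) \right)
  ```

  Lower is better: a model that assigns high probability to held-out text is less “perplexed” by it.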
- **Building a Low-Cost Local LLM Server to Run 70 Billion Parameter Models**
  A guest post from Fabrício Ceolin, DevOps Engineer at Comet. Inspired by the growing demand for large-scale language models, Fabrício…
- **8B Parameters, 1 GPU, No Problems: The Ultimate LLM Fine-tuning Pipeline**
  Welcome to Lesson 7 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn how to use…
- **The 4 Advanced RAG Algorithms You Must Know to Implement**
  Welcome to Lesson 5 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn how to use LLMs,…
- **SOTA Python Streaming Pipelines for Fine-tuning LLMs and RAG – in Real-Time!**
  Welcome to Lesson 4 of 12 in our free course series, LLM Twin: Building Your Production-Ready AI Replica. You’ll learn…