Quickstart

This guide gets you started with Opik Optimizer, a tool for systematically improving your LLM prompts through automated optimization.

Prerequisites

  • Python 3.9 or higher
  • An Opik API key (sign up here if you don’t have one)

Installation

Install Opik and the optimizer package:

```shell
# Using pip
pip install opik opik-optimizer

# Using uv (recommended for faster installation)
uv pip install opik opik-optimizer
```

Configure your Opik environment:

```shell
# Install the Opik CLI if not already installed
pip install opik

# Configure your API key
opik configure
```
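
If you prefer a non-interactive setup (e.g. in CI), the API key can also be supplied via an environment variable instead of running `opik configure`. The variable name below is an assumption based on common Opik SDK conventions; verify it against the documentation for your SDK version:

```shell
# Non-interactive alternative to `opik configure`.
# OPIK_API_KEY is the assumed variable name; check your Opik SDK version's docs.
export OPIK_API_KEY="your-api-key"
```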

Basic Usage

Here’s a complete example of optimizing a prompt using the Few-Shot Bayesian Optimizer:

```python
from opik.evaluation.metrics import LevenshteinRatio
from opik_optimizer import FewShotBayesianOptimizer
from opik_optimizer.demo import get_or_create_dataset
from opik_optimizer import (
    MetricConfig,
    TaskConfig,
    from_llm_response_text,
    from_dataset_field,
)

# 1. Define your evaluation dataset
# You can use a demo dataset for testing, or your own dataset
dataset = get_or_create_dataset("tiny-test")

# 2. Configure the evaluation metric
# This example uses the Levenshtein ratio (normalized edit distance)
# to measure output quality
metric_config = MetricConfig(
    metric=LevenshteinRatio(),
    inputs={
        "output": from_llm_response_text(),  # Model's output
        "reference": from_dataset_field(name="label"),  # Ground truth
    },
)

# 3. Define your base prompt
# Following best practices, provide clear instructions and context
system_prompt = """You are an expert assistant. Your task is to answer questions
accurately and concisely. Consider the context carefully before responding."""

# 4. Choose and configure an optimizer
# FewShotBayesianOptimizer automatically finds optimal examples to include
optimizer = FewShotBayesianOptimizer(
    model="gpt-4",  # Choose your preferred model
    project_name="Prompt Optimization",
    min_examples=2,  # Minimum number of examples to try
    max_examples=8,  # Maximum number of examples to try
    n_iterations=10,  # Number of optimization iterations
)

# 5. Configure the task
task_config = TaskConfig(
    instruction_prompt=system_prompt,
    input_dataset_fields=["text"],  # Input field(s) from your dataset
    output_dataset_field="label",  # Output field from your dataset
    use_chat_prompt=True,  # Use chat format for modern LLMs
)

# 6. Run the optimization
result = optimizer.optimize_prompt(
    dataset=dataset,
    metric_config=metric_config,
    task_config=task_config,
)

# 7. View the results
result.display()  # Shows improvement metrics and best prompt
```

The optimization results are displayed in your console and are also available in the Opik Agent Optimization dashboard.

Next Steps

  1. Learn about different optimization algorithms to choose the best one for your use case
  2. Explore prompt engineering best practices
  3. Set up your own evaluation datasets
  4. Review the API reference for detailed configuration options