Opik Agent Optimizer API Reference
The Opik Agent Optimizer SDK provides a set of tools for optimizing LLM prompts. This reference guide will help you understand the available APIs and how to use them effectively.
FewShotBayesianOptimizer
Parameters:
The model used to evaluate the prompt
Optional project name for tracking
Minimum number of examples to include
Maximum number of examples to include
Random seed for reproducibility
Number of threads for parallel evaluation
Controls internal logging/progress bars (0=off, 1=on).
Methods
evaluate_prompt
Parameters:
The prompt to evaluate
Opik Dataset to evaluate the prompt on
Metric function to evaluate with; it must accept the arguments dataset_item and llm_output
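Metric functions across the SDK share this signature. A minimal sketch of such a metric (the expected_output field name is an assumption about the dataset schema; use whatever key your dataset items actually carry):

```python
def exact_match(dataset_item: dict, llm_output: str) -> float:
    """Score 1.0 when the model output matches the reference answer exactly.

    `expected_output` is an assumed dataset field name; adapt it to the
    fields your Opik dataset items actually contain.
    """
    reference = dataset_item.get("expected_output", "")
    return 1.0 if llm_output.strip() == reference.strip() else 0.0
```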
Optional list of dataset item IDs to evaluate
Optional configuration for the experiment
Optional ID of the optimization
Optional number of items to test in the dataset
get_history
optimize_prompt
Parameters:
The prompt to optimize
Opik Dataset to optimize on
Metric function to evaluate with
Number of trials for Bayesian Optimization
Optional configuration for the experiment, useful to log additional metadata
Optional number of items to test in the dataset
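Putting the pieces together, a minimal usage sketch for this optimizer. Parameter names such as min_examples, max_examples, n_trials, and n_samples follow the descriptions above; the dataset name and the question/expected_output fields are assumptions. Verify exact signatures against the installed SDK; running this requires a configured Opik workspace and model credentials.

```python
from opik import Opik
from opik_optimizer import ChatPrompt, FewShotBayesianOptimizer

def exact_match(dataset_item: dict, llm_output: str) -> float:
    # `expected_output` is an assumed dataset field name; adapt to your schema.
    return float(llm_output.strip() == dataset_item.get("expected_output", "").strip())

# Assumed dataset name; the dataset items are expected to carry the
# fields referenced by the prompt and the metric.
dataset = Opik().get_dataset(name="my-dataset")

prompt = ChatPrompt(
    system="You are a helpful assistant.",
    user="{question}",  # `{question}` is an assumed dataset field
)

optimizer = FewShotBayesianOptimizer(
    model="openai/gpt-4o-mini",  # LiteLLM-style model name
    min_examples=2,
    max_examples=8,
    n_threads=4,
    seed=42,
)

result = optimizer.optimize_prompt(
    prompt=prompt,
    dataset=dataset,
    metric=exact_match,
    n_trials=10,   # Bayesian Optimization trials
    n_samples=50,  # subset of dataset items to evaluate on
)
print(result)
```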
update_optimization
Parameters:
MetaPromptOptimizer
Parameters:
The model to use for evaluation
The model to use for reasoning and prompt generation
Number of optimization rounds
Number of prompts to generate per round
Number of threads for parallel evaluation
Optional project name for tracking
Controls internal logging/progress bars (0=off, 1=on).
Whether to include task-specific context (metrics, examples) in the reasoning prompt.
Methods
evaluate_prompt
Parameters:
The prompt to evaluate
Opik Dataset to evaluate the prompt on
Metric functions
Whether to use the full dataset or a subset
Optional configuration for the experiment, useful to log additional metadata
Optional number of items to test in the dataset
Optional ID of the optimization
Controls internal logging/progress bars (0=off, 1=on).
get_history
optimize_prompt
Parameters:
The prompt to optimize
The dataset to evaluate against
The metric to use for evaluation
A dictionary to log with the experiments
The number of dataset items to use for evaluation
If True, the algorithm may continue even if the goal has not been met
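A corresponding usage sketch for this optimizer. Again, parameter names (reasoning_model, max_rounds, num_prompts_per_round) follow the descriptions above, and the dataset name and field names are assumptions; confirm against the installed SDK, and note this needs an Opik workspace and model credentials to run.

```python
from opik import Opik
from opik_optimizer import ChatPrompt, MetaPromptOptimizer

def exact_match(dataset_item: dict, llm_output: str) -> float:
    # `expected_output` is an assumed dataset field name; adapt to your schema.
    return float(llm_output.strip() == dataset_item.get("expected_output", "").strip())

optimizer = MetaPromptOptimizer(
    model="openai/gpt-4o-mini",       # evaluates candidate prompts
    reasoning_model="openai/gpt-4o",  # proposes new candidate prompts
    max_rounds=3,
    num_prompts_per_round=4,
)

result = optimizer.optimize_prompt(
    prompt=ChatPrompt(system="You are a helpful assistant.", user="{question}"),
    dataset=Opik().get_dataset(name="my-dataset"),  # assumed dataset name
    metric=exact_match,
    n_samples=50,
)
print(result)
```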
update_optimization
Parameters:
EvolutionaryOptimizer
Parameters:
The model to use for evaluation
Optional project name for tracking
Number of prompts in the population
Number of generations to run
Mutation rate for genetic operations
Crossover rate for genetic operations
Tournament size for selection
Number of threads for parallel evaluation
Number of elitism prompts
Whether to use adaptive mutation
Whether to enable multi-objective optimization. When enabled, the optimizer optimizes for both the supplied metric and the length of the prompt
Whether to enable LLM crossover
Random seed for reproducibility
Output style guidance for prompts
Whether to infer output style
Controls internal logging/progress bars (0=off, 1=on).
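The genetic knobs above (population size, generations, mutation and crossover rates, tournament size, elitism) behave as in a standard genetic algorithm. A generic, self-contained illustration of that selection/variation loop over bitstrings, not the SDK's internals, to show what each parameter controls:

```python
import random

def run_ga(fitness, genome_len=12, population_size=20, num_generations=30,
           mutation_rate=0.1, crossover_rate=0.8, tournament_size=3,
           elitism=2, seed=42):
    """Generic genetic algorithm loop; parameter names mirror the ones above."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(population_size)]

    def tournament():
        # Tournament selection: fittest of `tournament_size` random individuals.
        return max(rng.sample(pop, tournament_size), key=fitness)

    for _ in range(num_generations):
        pop.sort(key=fitness, reverse=True)
        # Elitism: the best individuals survive unchanged.
        next_pop = [ind[:] for ind in pop[:elitism]]
        while len(next_pop) < population_size:
            a, b = tournament(), tournament()
            if rng.random() < crossover_rate:
                # Single-point crossover between the two parents.
                cut = rng.randrange(1, genome_len)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            # Per-gene mutation: flip each bit with probability mutation_rate.
            child = [g ^ 1 if rng.random() < mutation_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Maximize the number of 1-bits (the "onemax" toy problem).
best = run_ga(fitness=sum)
```

In the SDK the "genome" is a prompt and the fitness is the supplied metric, but the selection, crossover, mutation, and elitism mechanics these parameters tune are the same.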
Methods
evaluate_prompt
Parameters:
The prompt to evaluate
The dataset to use for evaluation
Metric function to evaluate with; it must accept the arguments dataset_item and llm_output
Optional number of samples to use
Optional list of dataset item IDs to use
Optional experiment configuration
Optional optimization ID
Controls internal logging/progress bars (0=off, 1=on).
get_history
get_llm_crossover_system_prompt
optimize_prompt
Parameters:
The prompt to optimize
The dataset to use for evaluation
Metric function to optimize with; it must accept the arguments dataset_item and llm_output
Optional experiment configuration
Optional number of samples to use
Whether to automatically continue optimization
update_optimization
Parameters:
ChatPrompt
Parameters:
Methods
format
Parameters:
to_dict
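A ChatPrompt wraps a list of chat messages, and format fills dataset fields into {placeholder} slots in the message contents. A plain-Python illustration of that substitution (this is not the SDK implementation, just the behavior it exposes):

```python
def format_messages(messages, **fields):
    """Fill {placeholder} slots in each message's content with dataset fields."""
    return [
        {"role": m["role"], "content": m["content"].format(**fields)}
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Answer the question: {question}"},
]

# `question` stands in for whatever field your dataset items carry.
filled = format_messages(messages, question="What is the capital of France?")
```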
OptimizationResult
Parameters: