Opik Agent Optimizer API Reference
The Opik Agent Optimizer SDK provides a set of tools for optimizing LLM prompts. This reference guide will help you understand the available APIs and how to use them effectively.
FewShotBayesianOptimizer
Parameters:
The model used to evaluate the prompt
Minimum number of examples to include
Maximum number of examples to include
Random seed for reproducibility
Number of threads for parallel evaluation
Controls internal logging/progress bars (0=off, 1=on).
Methods
evaluate_prompt
Parameters:
get_history
optimize_prompt
Parameters:
Opik Dataset to optimize on
Metric function to evaluate on
Number of trials for Bayesian Optimization
Optional experiment configuration, useful for logging additional metadata
Optional number of dataset items to evaluate on
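The metric passed to optimize_prompt is a plain function scored against each dataset item. Below is a minimal sketch of such a metric; the dataset field name "answer" is an assumption about your dataset schema, and the commented call shows the intended shape of usage, with parameter names that are assumptions rather than confirmed SDK signatures.

```python
# Hedged sketch: a metric function of the kind optimize_prompt evaluates.
# The "answer" key is an assumed dataset field, not part of the SDK.
def exact_match(dataset_item: dict, llm_output: str) -> float:
    """Return 1.0 when the model output equals the reference answer."""
    return 1.0 if llm_output.strip() == dataset_item["answer"].strip() else 0.0

# Illustrative call (sketch only; parameter names are assumptions):
# from opik_optimizer import FewShotBayesianOptimizer
# optimizer = FewShotBayesianOptimizer(model="gpt-4o-mini", seed=42)
# result = optimizer.optimize_prompt(dataset=dataset, metric=exact_match)
```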
update_optimization
Parameters:
MetaPromptOptimizer
Parameters:
The model to use for evaluation
The model to use for reasoning and prompt generation
Number of optimization rounds
Number of prompts to generate per round
Controls internal logging/progress bars (0=off, 1=on).
Whether to include task-specific context (metrics, examples) in the reasoning prompt.
Number of threads for parallel evaluation
Methods
evaluate_prompt
Parameters:
get_history
optimize_prompt
Parameters:
The dataset to evaluate against
The metric to use for evaluation
A dictionary to log with the experiments
The number of dataset items to use for evaluation
If True, the algorithm may continue even if the goal is not met
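The experiment-configuration dictionary is free-form metadata logged alongside each run. A minimal sketch follows; every key in it is an illustrative assumption, not a schema the SDK requires, and the commented call hedges the constructor and method parameter names in the same way.

```python
# Hedged sketch: a free-form experiment-configuration dictionary logged with
# each optimization run. All keys here are illustrative assumptions.
experiment_config = {
    "project": "support-bot",
    "owner": "ml-team",
    "notes": "MetaPromptOptimizer, 3 rounds x 4 candidate prompts",
}

# Illustrative call (sketch only; parameter names are assumptions):
# from opik_optimizer import MetaPromptOptimizer
# optimizer = MetaPromptOptimizer(model="gpt-4o-mini",
#                                 reasoning_model="gpt-4o")
# result = optimizer.optimize_prompt(dataset=dataset, metric=my_metric,
#                                    experiment_config=experiment_config)
```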
update_optimization
Parameters:
EvolutionaryOptimizer
Parameters:
The model to use for evaluation
Number of prompts in the population
Number of generations to run
Mutation rate for genetic operations
Crossover rate for genetic operations
Tournament size for selection
Number of elitism prompts
Whether to use adaptive mutation
Whether to enable multi-objective optimization. When enabled, optimizes for both the supplied metric and the length of the prompt
Whether to enable LLM crossover
Random seed for reproducibility
Output style guidance for prompts
Whether to infer output style
Controls internal logging/progress bars (0=off, 1=on).
Number of threads for parallel evaluation
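Several of the parameters above (tournament size, elitism count, mutation rate) are standard genetic-algorithm knobs. The toy sketch below shows one generation step using them so their roles are concrete; the string "prompts" and the mutation placeholder are illustrative only and do not reflect the SDK's actual operators.

```python
import random

# Toy sketch of one evolutionary generation, illustrating the tournament_size,
# elitism, and mutation_rate parameters listed above. Not the SDK's operators.
def next_generation(population, fitness, tournament_size=4,
                    elitism=2, mutation_rate=0.2, seed=42):
    rng = random.Random(seed)
    ranked = sorted(population, key=fitness, reverse=True)
    new_pop = ranked[:elitism]  # elitism: carry the best prompts over unchanged
    while len(new_pop) < len(population):
        # Tournament selection: the fittest of a random subset becomes a parent.
        contenders = rng.sample(population, tournament_size)
        parent = max(contenders, key=fitness)
        # Placeholder mutation: a real optimizer would rewrite the prompt text.
        child = parent + " (rephrased)" if rng.random() < mutation_rate else parent
        new_pop.append(child)
    return new_pop
```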
Methods
evaluate_prompt
Parameters:
get_history
get_llm_crossover_system_prompt
optimize_prompt
Parameters:
The prompt to optimize
The dataset to use for evaluation
Metric function to optimize with; it should accept the arguments dataset_item and llm_output
Optional experiment configuration
Optional number of samples to use
Whether to automatically continue optimization
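As noted above, the metric must accept dataset_item and llm_output. A minimal sketch with that signature follows; the "expected_output" key is an assumed dataset field, and the commented call uses parameter names that are assumptions about the constructor, not confirmed signatures.

```python
# Hedged sketch: a metric with the dataset_item / llm_output signature named
# above. The "expected_output" key is an assumed dataset field.
def contains_expected(dataset_item: dict, llm_output: str) -> float:
    expected = dataset_item["expected_output"].lower()
    return 1.0 if expected in llm_output.lower() else 0.0

# Illustrative call (sketch only; parameter names are assumptions):
# from opik_optimizer import EvolutionaryOptimizer
# optimizer = EvolutionaryOptimizer(model="gpt-4o-mini")
# result = optimizer.optimize_prompt(prompt=chat_prompt, dataset=dataset,
#                                    metric=contains_expected)
```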
update_optimization
Parameters:
ChatPrompt
Parameters:
Methods
copy
get_messages
Parameters:
set_messages
Parameters:
to_dict
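A ChatPrompt wraps an OpenAI-style message list, which the methods above copy, read, replace, and serialize. The sketch below shows that message shape; the exact ChatPrompt constructor arguments in the commented usage are assumptions, not confirmed signatures.

```python
# Hedged sketch: the OpenAI-style message list a ChatPrompt carries. The
# "{question}" placeholder is an illustrative template variable.
messages = [
    {"role": "system", "content": "You are a concise support assistant."},
    {"role": "user", "content": "{question}"},
]

# Illustrative usage (sketch only; constructor arguments are assumptions):
# from opik_optimizer import ChatPrompt
# prompt = ChatPrompt(messages=messages)
# prompt.get_messages()          # read the current message list
# prompt.set_messages(messages)  # replace the message list
# prompt.copy()                  # independent copy of the prompt
# prompt.to_dict()               # serializable representation
```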
OptimizationResult
Parameters: