Optimization algorithms overview
The Opik Optimizer SDK wraps a mix of in-house algorithms (MetaPrompt, Hierarchical Reflective) and external research projects (e.g., GEPA). Each optimizer follows the same API (optimize_prompt, OptimizationResult) so you can swap them without rewriting your pipeline. Use this page to quickly decide which optimizer to run before diving into the detailed guides.
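Because every optimizer exposes the same entry point, a pipeline can treat them interchangeably. The idea can be sketched with toy stand-in classes (only the `optimize_prompt` and `OptimizationResult` names come from the SDK; everything else below is illustrative, not real SDK code):

```python
from dataclasses import dataclass, field

# Illustrative stand-in for the SDK's result object (the real one carries more fields).
@dataclass
class OptimizationResult:
    best_prompt: str
    score: float
    history: list = field(default_factory=list)

# Two toy "optimizers" sharing the optimize_prompt entry point, so the
# calling code does not care which algorithm runs underneath.
class ToyMetaPrompt:
    def optimize_prompt(self, prompt, dataset, metric):
        candidate = prompt + " Be concise."
        return OptimizationResult(candidate, metric(candidate, dataset))

class ToyEvolutionary:
    def optimize_prompt(self, prompt, dataset, metric):
        candidate = prompt.upper()
        return OptimizationResult(candidate, metric(candidate, dataset))

def run_pipeline(optimizer, prompt, dataset, metric):
    # Same call site regardless of the concrete optimizer.
    return optimizer.optimize_prompt(prompt, dataset, metric)

dataset = ["example item"]
metric = lambda prompt, dataset: float(len(prompt))  # toy metric

for opt in (ToyMetaPrompt(), ToyEvolutionary()):
    result = run_pipeline(opt, "Summarize the input.", dataset, metric)
    print(type(opt).__name__, result.score)
```

Swapping algorithms then means changing one constructor call, not rewriting the pipeline.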
How optimizers run
- Input – you pass a ChatPrompt definition, a dataset, and a metric. Many optimizers also accept additional parameters to set which model to use, the number of optimization rounds, and even tool-use (MCP and function calling) definitions.
- Candidate generation – each algorithm proposes new prompts (MetaPrompt via reasoning LLMs, Evolutionary via mutation/crossover, GEPA via its genetic-Pareto search).
- Evaluation – Opik runs each candidate against your dataset/metric and logs the trials to the dashboard. Steps 2 and 3 loop until a best prompt is found or the search budget is exhausted.
- Result delivery – every optimizer returns an OptimizationResult with the best prompt, history, scores, and metadata; the result is returned to your code and is also available in the UI.
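The generate-evaluate loop above can be sketched in simplified form. This is a toy mock of the control flow, not the SDK's actual internals; the metric and candidate mutations are placeholders:

```python
import random

def optimize(seed_prompt, dataset, metric, rounds=5, rng=None):
    """Toy candidate-generation / evaluation loop mirroring steps 2-3."""
    rng = rng or random.Random(0)
    best_prompt, best_score = seed_prompt, metric(seed_prompt, dataset)
    history = [(seed_prompt, best_score)]
    for _ in range(rounds):
        # Step 2: propose a candidate (real optimizers use reasoning LLMs,
        # mutation/crossover, or genetic-Pareto search here).
        suffix = rng.choice([" Be concise.", " Think step by step.", " Cite sources."])
        candidate = seed_prompt + suffix
        # Step 3: evaluate the candidate against the dataset/metric and log it.
        score = metric(candidate, dataset)
        history.append((candidate, score))
        if score > best_score:
            best_prompt, best_score = candidate, score
    # Step 4: hand back the best prompt plus the full trial history.
    return best_prompt, best_score, history

# Toy metric: reward longer prompts (stand-in for a real dataset metric).
best, score, history = optimize("Summarize.", ["item"], lambda p, d: float(len(p)))
print(best, score, len(history))
```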
Selection matrix
How to choose
- Identify the constraint (e.g., wording vs. tool usage vs. parameters).
- Check dataset readiness – reflective optimizers need detailed metric reasons.
- Estimate budget – evolutionary/GEPA runs consume more tokens than MetaPrompt.
- Plan follow-up – you can chain optimizers (MetaPrompt → Parameter) when needed.
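Chaining follows naturally from the shared API: each stage's best prompt seeds the next. A hypothetical sketch under that assumption (the stage classes below are illustrative stand-ins, not SDK classes):

```python
from types import SimpleNamespace

def chain(optimizers, prompt, dataset, metric):
    """Run optimizers in sequence, feeding each stage's best prompt
    into the next (e.g., wording first, then parameters)."""
    results = []
    for optimizer in optimizers:
        result = optimizer.optimize_prompt(prompt, dataset, metric)
        results.append(result)
        prompt = result.best_prompt  # seed the next stage
    return prompt, results

# Toy stages standing in for a MetaPrompt -> Parameter chain.
class WordingStage:
    def optimize_prompt(self, prompt, dataset, metric):
        best = prompt + " Answer briefly."
        return SimpleNamespace(best_prompt=best, score=metric(best, dataset))

class ParamStage:
    def optimize_prompt(self, prompt, dataset, metric):
        best = prompt + " (temperature=0.2)"
        return SimpleNamespace(best_prompt=best, score=metric(best, dataset))

final_prompt, results = chain(
    [WordingStage(), ParamStage()],
    "Summarize the input.",
    ["item"],
    lambda p, d: float(len(p)),
)
print(final_prompt)
```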
Next steps
- Follow the individual optimizer guides for configuration details.
- Learn how to chain optimizers for complex workflows.