Optimization algorithms overview

The Opik Optimizer SDK wraps a mix of in-house algorithms (MetaPrompt, Hierarchical Reflective) and external research projects (e.g., GEPA). Each optimizer follows the same API (optimize_prompt, OptimizationResult) so you can swap them without rewriting your pipeline. Use this page to quickly decide which optimizer to run before diving into the detailed guides.
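
Because the API is shared, switching algorithms is typically a one-line change. Below is a minimal sketch of that shared entry point; it assumes an existing Opik dataset (here named "my-dataset") with an "answer" field, and exact constructor arguments can vary per optimizer, so treat it as illustrative rather than definitive.

```python
from opik import Opik
from opik.evaluation.metrics import LevenshteinRatio
from opik_optimizer import ChatPrompt, MetaPromptOptimizer

def metric(dataset_item, llm_output):
    # Score a candidate answer against the dataset's reference answer.
    # Assumes each dataset item has an "answer" field.
    return LevenshteinRatio().score(reference=dataset_item["answer"], output=llm_output)

dataset = Opik().get_dataset("my-dataset")  # placeholder dataset name
prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "Answer the question concisely."},
        {"role": "user", "content": "{question}"},
    ]
)

# Swap in EvolutionaryOptimizer(...) or another optimizer here; each one
# exposes optimize_prompt() and returns an OptimizationResult.
optimizer = MetaPromptOptimizer(model="openai/gpt-4o")
result = optimizer.optimize_prompt(prompt=prompt, dataset=dataset, metric=metric)
```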

How optimizers run

  1. Input – you pass a ChatPrompt definition, a dataset, and a metric. Many optimizers also accept additional parameters, such as which model to use, how many optimization rounds to run, and tool-use definitions (MCP and function calling).
  2. Candidate generation – each algorithm proposes new prompts (MetaPrompt via reasoning LLMs, Evolutionary via mutation/crossover, GEPA via its genetic-Pareto search).
  3. Evaluation – Opik runs each candidate against your dataset and metric and logs trials to the dashboard. Steps 2 and 3 loop until a best prompt is found or the search budget is exhausted.
  4. Result delivery – every optimizer returns an OptimizationResult with the best prompt, history, scores, and metadata; the same result object is returned to your code and surfaced in the UI (see the sketch after this list).
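
Continuing the earlier sketch, the OptimizationResult from step 4 can be inspected directly. The attribute names below mirror the fields described above (best prompt, history, scores) but should be verified against your installed SDK version.

```python
result = optimizer.optimize_prompt(prompt=prompt, dataset=dataset, metric=metric)

print(result.score)   # best metric score found during the search
print(result.prompt)  # best prompt, as a list of chat messages
for trial in result.history:
    # One entry per evaluated candidate; the same trials appear in the dashboard.
    print(trial)
```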

Selection matrix

| Optimizer | Origin | Best for | Key inputs | Notes |
| --- | --- | --- | --- | --- |
| MetaPrompt | Opik | General prompt refinement | Prompt + dataset + metric | Reasoning LLM critiques and rewrites prompts; supports MCP workflows and tool schemas. |
| Hierarchical Reflective | Opik | Root-cause analysis on complex prompts | Metrics with detailed reasons | Batches failures, synthesizes themes, proposes targeted fixes. |
| Few-Shot Bayesian | Opik | Optimizing few-shot example sets | Dataset with demonstrations | Uses Optuna to pick the count and order of examples for chat prompts. |
| Evolutionary | Opik + DEAP | Exploring diverse prompt structures | Mutation/crossover params | Multi-objective optimization (score vs. length) with LLM-driven operators. |
| GEPA | External (GEPA) | Single-turn, reflection-heavy tasks | gepa dependency + reflection minibatches | We provide a wrapper so GEPA consumes Opik datasets/metrics while preserving its genetic-Pareto search. |
| Parameter | Opik | Temperature / top_p tuning | Prompt + parameter search space | Leaves the prompt untouched; tunes sampling parameters via Bayesian search. |
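
As an example of the last row, here is a hedged sketch of running the Parameter optimizer. The ParameterOptimizer class name follows the matrix and the call follows the shared optimize_prompt API described above, but the parameter_space argument and its (min, max) tuple format are assumptions; consult the Parameter optimizer guide for the actual search-space schema.

```python
from opik_optimizer import ParameterOptimizer  # class name taken from the matrix

param_optimizer = ParameterOptimizer(model="openai/gpt-4o")
result = param_optimizer.optimize_prompt(
    prompt=prompt,          # reused from the earlier sketch; left untouched
    dataset=dataset,
    metric=metric,
    parameter_space={       # hypothetical: ranges explored by Bayesian search
        "temperature": (0.0, 1.0),
        "top_p": (0.5, 1.0),
    },
)
```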

How to choose

  1. Identify the constraint (e.g., wording vs. tool usage vs. parameters).
  2. Check dataset readiness – reflective optimizers need detailed metric reasons.
  3. Estimate budget – evolutionary/GEPA runs consume more tokens than MetaPrompt.
  4. Plan follow-up – you can chain optimizers (MetaPrompt → Parameter) when needed, as sketched after this list.
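
The chaining pattern from step 4 might look like the following, reusing the dataset and metric from the earlier sketches. Feeding the winning prompt forward via result.prompt (a list of chat messages) is an assumption to verify against your SDK version.

```python
from opik_optimizer import ChatPrompt, MetaPromptOptimizer, ParameterOptimizer

# Stage 1: refine the wording with MetaPrompt.
wording_result = MetaPromptOptimizer(model="openai/gpt-4o").optimize_prompt(
    prompt=prompt, dataset=dataset, metric=metric
)

# Stage 2: tune sampling parameters on the winning prompt.
best_prompt = ChatPrompt(messages=wording_result.prompt)  # feed the winner forward
param_result = ParameterOptimizer(model="openai/gpt-4o").optimize_prompt(
    prompt=best_prompt, dataset=dataset, metric=metric
)
```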

Next steps

  • Follow the individual optimizer guides for configuration details.
  • Learn how to chain optimizers for complex workflows.