Chaining optimizers
Some projects benefit from running two or more optimizers back-to-back: for example, run the MetaPrompt optimizer to improve the prompt wording, then the Parameter optimizer to fine-tune sampling settings. This guide explains why you might chain runs, the trade-offs involved, and the APIs you use to pass prompts and metadata between stages.
Strategy patterns
Example pipeline
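The sketch below chains a wording pass into a parameter-tuning pass. It assumes the `opik_optimizer` package with `MetaPromptOptimizer` and `ParameterOptimizer` classes, the `tiny_test` demo dataset, and an `optimize_prompt(prompt, dataset, metric, n_samples, experiment_config)` signature; treat these names and arguments as assumptions to verify against your installed version.

```python
# A minimal sketch of a two-stage chain (MetaPrompt -> Parameter). The import
# paths, class names, and the optimize_prompt()/experiment_config arguments are
# assumptions about the opik_optimizer API; check them against your installed
# version before running.
from opik.evaluation.metrics import LevenshteinRatio
from opik_optimizer import ChatPrompt, MetaPromptOptimizer, ParameterOptimizer
from opik_optimizer.datasets import tiny_test  # small demo dataset (assumed available)

# Freeze the dataset and metric once so both stages are scored the same way.
dataset = tiny_test()

def metric(dataset_item, llm_output):
    return LevenshteinRatio().score(reference=dataset_item["label"], output=llm_output)

prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "Answer the question concisely."},
        {"role": "user", "content": "{text}"},
    ]
)

# Stage 1: improve the wording of the prompt.
stage1 = MetaPromptOptimizer(model="openai/gpt-4o-mini")
stage1_result = stage1.optimize_prompt(
    prompt=prompt,
    dataset=dataset,
    metric=metric,
    n_samples=10,  # keep the budget small until the chain looks promising
    experiment_config={"pipeline": "meta_then_param", "stage": 1},
)

# Stage 2: keep the improved wording and tune sampling settings.
# The stage 1 result's prompt is passed straight into the next optimizer.
stage2 = ParameterOptimizer(model="openai/gpt-4o-mini")
stage2_result = stage2.optimize_prompt(
    prompt=stage1_result.prompt,
    dataset=dataset,
    metric=metric,
    n_samples=10,
    experiment_config={"pipeline": "meta_then_param", "stage": 2},
)

print("stage 1 score:", stage1_result.score)
print("stage 2 score:", stage2_result.score)
```

Note that both stages share the same frozen `dataset` and `metric`, so the two scores printed at the end are directly comparable.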
Checklist
- Freeze datasets and metrics between stages to keep comparisons fair.
- Log pipeline metadata (e.g., `experiment_config={"pipeline": "hierarchical_then_param"}`) so dashboards show lineage.
- Budget tokens – chained runs multiply costs; start with a smaller `n_samples` and increase it once results look promising.
- Reuse `OptimizationResult` – every optimizer returns an `OptimizationResult`, so you can pass `result.prompt` (and `result.details`, `result.history`) directly into the next stage without rebuilding state, as sketched below.
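To make the last two checklist items concrete, a hypothetical helper like the one below can package everything the next stage needs from a result. The `prompt`, `score`, `details`, and `history` attributes follow the `OptimizationResult` described above; the function name and dict layout are purely illustrative.

```python
# Hypothetical helper (not part of the library): bundle the state that the
# next stage needs from an OptimizationResult, plus lineage metadata for
# dashboards. Attribute names follow the checklist above.
def stage_handoff(result, pipeline_name: str, stage: int) -> dict:
    return {
        "prompt": result.prompt,          # fed directly into the next optimizer
        "experiment_config": {            # lineage shown in dashboards
            "pipeline": pipeline_name,
            "stage": stage,
            "previous_score": result.score,
        },
        "details": result.details,        # kept for debugging the earlier stage
        "history": result.history,        # kept for the final summary report
    }
```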
Automation tips
- Use Makefiles or CI workflows to run stage 1 → stage 2 with clear checkpoints.
- Store intermediate prompts in version control alongside metadata (optimizer, score, dataset).
- Notify stakeholders with summary reports generated from `final_result.history`, as in the sketch below.
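A rough sketch of the checkpoint-and-report step, assuming `result.prompt`, `result.score`, and `final_result.history` as described above. The helper names, file layout, and the assumption that history entries are dicts with a `score` key are illustrative only.

```python
import json
from pathlib import Path

# Hypothetical helpers for the automation tips above; adjust the keys to
# whatever your optimizer actually records.
def save_checkpoint(result, path: str, optimizer_name: str, dataset_name: str) -> None:
    """Write the intermediate prompt plus its metadata so it can be committed."""
    checkpoint = {
        "optimizer": optimizer_name,
        "dataset": dataset_name,
        "score": result.score,
        "prompt": result.prompt,
    }
    Path(path).write_text(json.dumps(checkpoint, indent=2, default=str))

def summary_report(final_result) -> str:
    """Build a one-line stakeholder summary from the final result's history."""
    scores = [
        step.get("score")
        for step in final_result.history
        if isinstance(step, dict) and step.get("score") is not None
    ]
    best = max(scores) if scores else "n/a"
    return f"{len(final_result.history)} recorded steps, best score: {best}"
```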