MIPRO Optimizer: Agent & Complex Optimization
Optimize complex agent behaviors and tool-using prompts.
The MiproOptimizer is a specialized prompt optimization tool that implements the MIPRO (Multi-prompt Instruction PRoposal Optimizer) algorithm. It is designed to handle complex optimization tasks, including multi-stage programs and tool-using agents, through iterative refinement of instructions and few-shot demonstrations.
This Optimizer is currently deprecated and will be removed in a future release. We recommend trying out the EvolutionaryOptimizer instead.
When to Use This Optimizer:
MiproOptimizer is the preferred choice for complex tasks, especially those involving tool use or requiring multi-step reasoning. If you are building an agent that needs to interact with external tools or follow a sophisticated chain of thought, and you want to optimize the underlying prompts and agent structure, MiproOptimizer (which leverages DSPy) is highly suitable.
Key Trade-offs:
- The optimization process can be more involved than single-prompt optimizers due to its reliance on the DSPy framework and compilation process.
- Understanding basic DSPy concepts (like Modules, Signatures, Teleprompters) can be beneficial for advanced customization and debugging, though Opik abstracts much of this.
- Debugging can sometimes be more complex as you are optimizing a program/agent structure, not just a single string.
Got questions about MiproOptimizer or its use with DSPy? The Optimizer & SDK FAQ covers topics such as when to use MiproOptimizer, its relationship with DSPy, the role of num_candidates, and using it without deep DSPy knowledge.
How It Works
The MiproOptimizer leverages the DSPy library, specifically an internal version of its MIPRO (Multi-prompt Instruction PRoposal Optimizer) teleprompter, to optimize potentially complex prompt structures, including those for tool-using agents.
Here’s a simplified overview of the process:
1. DSPy Program Representation: Your task, as defined by TaskConfig (including the instruction_prompt, input/output fields, and any tools), is translated into a DSPy program structure. If tools are provided, this often involves creating a dspy.ReAct or similar agent module (a sketch of this mapping follows the list).
2. Candidate Generation (Compilation): The core of MIPRO involves compiling this DSPy program. This “compilation” is an optimization process itself:
   - It explores different ways to formulate the instructions within the DSPy program’s modules (e.g., the main instruction, and tool-usage instructions if applicable).
   - It also optimizes the selection of few-shot demonstrations to include within the prompts for these modules, if the DSPy program structure uses them.
   - This is guided by an internal DSPy teleprompter algorithm (like MIPROv2 in the codebase) which uses techniques like bootstrapping demonstrations and proposing instruction variants.
3. Evaluation: Each candidate program (representing a specific configuration of instructions and/or demonstrations) is evaluated on your training dataset (dataset) using the specified metric. The MiproOptimizer uses DSPy’s evaluation mechanisms, which can handle complex interactions, especially for tool-using agents.
4. Iterative Refinement: The teleprompter iteratively refines the program based on these evaluations, aiming to find a program configuration that maximizes the metric score. The num_candidates parameter in optimize_prompt influences how many of these configurations are explored.
5. Result: The MiproOptimizer returns an OptimizationResult containing the best-performing DSPy program structure found. This might be a single optimized prompt string or, for more complex agents, a collection of optimized prompts that make up the agent’s internal logic (e.g., tool_prompts in the OptimizationResult).
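To make step 1 concrete, here is a rough sketch of the kind of DSPy program MiproOptimizer might build from your TaskConfig. The search_wikipedia tool is hypothetical, and Opik’s internal wiring may differ from this simplified version:

```python
import dspy

def search_wikipedia(query: str) -> str:
    """Hypothetical tool: return a short snippet of background text."""
    return "..."

# Without tools, the task maps to a simple predictor over the
# input/output fields declared in TaskConfig.
simple_program = dspy.Predict("question -> answer")

# With tools, an agent module such as dspy.ReAct is used instead, so the
# optimizer can tune the main instruction and tool-usage behavior together.
agent_program = dspy.ReAct("question -> answer", tools=[search_wikipedia])
```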
Essentially, MiproOptimizer uses the MIPRO algorithm to optimize not just a single string prompt, but potentially a whole system of prompts and few-shot examples that define how an LLM (or an LLM-based agent) should behave to accomplish a task.
The evaluation of each candidate program (Step 3) is crucial. It uses your metric and dataset to score how well a particular set of agent instructions or prompts performs. Since MiproOptimizer often deals with agentic behavior, understanding Opik’s broader evaluation tools is beneficial:
- Evaluation Overview
- Evaluate Agents (particularly relevant)
- Evaluate Prompts
- Metrics Overview
Configuration Options
The MiproOptimizer leverages the DSPy library for its optimization capabilities, specifically using an internal implementation similar to DSPy’s MIPRO teleprompter (referred to as MIPROv2 in the codebase).
The constructor for MiproOptimizer is simple (model, project_name, **model_kwargs). The complexity of the optimization is managed within the DSPy framework when optimize_prompt is called.
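A minimal configuration sketch based on that constructor; the model name, project name, and temperature value are illustrative, and the import assumes MiproOptimizer is exposed at the top level of the opik_optimizer package:

```python
from opik_optimizer import MiproOptimizer

optimizer = MiproOptimizer(
    model="openai/gpt-4o-mini",       # any LiteLLM-supported model name
    project_name="mipro-agent-demo",  # illustrative project name
    temperature=0.1,                  # forwarded through **model_kwargs
)
```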
Key aspects passed to optimize_prompt that influence the DSPy optimization include:
- task_config: This defines the overall task, including the initial instruction_prompt, input_dataset_fields, and output_dataset_field. If task_config.tools are provided, MiproOptimizer will attempt to build and optimize a DSPy agent that uses these tools.
- metric: Defines how candidate DSPy programs (prompts/agents) are scored. The metric needs to be a function that accepts the parameters dataset_item and llm_output.
- num_candidates: This parameter of optimize_prompt controls how many different program configurations are explored during optimization.
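For illustration, here is a metric in the required shape: a plain callable taking dataset_item and llm_output. It is built on Python’s standard library so it makes no assumptions about Opik’s metric classes; the "answer" key is an assumption about your dataset schema:

```python
from difflib import SequenceMatcher

def similarity_metric(dataset_item, llm_output):
    # Score the LLM output against the reference text stored in the
    # dataset item (assumed here to live under an "answer" key).
    return SequenceMatcher(None, dataset_item["answer"], llm_output).ratio()
```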
Example Usage
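Below is a hedged end-to-end sketch that combines the pieces described on this page. The dataset name, field names, and metric are illustrative, and exact parameter spellings should be checked against the SDK reference:

```python
import opik
from opik_optimizer import MiproOptimizer, TaskConfig

def exact_match_metric(dataset_item, llm_output):
    # Any (dataset_item, llm_output) -> score callable works; see the
    # metric sketch above. "answer" is an assumed dataset field.
    return float(dataset_item["answer"].strip().lower() in llm_output.lower())

# An existing Opik dataset; the name is illustrative.
dataset = opik.Opik().get_dataset("my-qa-dataset")

task_config = TaskConfig(
    instruction_prompt="Answer the question accurately and concisely.",
    input_dataset_fields=["question"],
    output_dataset_field="answer",
    # tools=[...],  # optionally, tools so MiproOptimizer builds an agent
)

optimizer = MiproOptimizer(
    model="openai/gpt-4o-mini",
    project_name="mipro-example",
)

result = optimizer.optimize_prompt(
    dataset=dataset,
    metric=exact_match_metric,
    task_config=task_config,
    num_candidates=10,  # how many program configurations to explore
)

# Inspect the best configuration found; for tool-using agents the
# OptimizationResult may also carry per-tool prompts (tool_prompts).
print(result)
```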
Model Support
The MiproOptimizer supports all models available through LiteLLM. This includes models from OpenAI, Azure OpenAI, Anthropic, Google (Vertex AI / AI Studio), Mistral AI, Cohere, locally hosted models (e.g., via Ollama), and many others.
For detailed instructions on how to specify different models and configure providers, please refer to the main LiteLLM Support for Optimizers documentation page.
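As a quick illustration, model identifiers follow LiteLLM’s provider/model convention; the specific names below are examples, not a verified list:

```python
from opik_optimizer import MiproOptimizer

openai_optimizer = MiproOptimizer(model="openai/gpt-4o-mini")
anthropic_optimizer = MiproOptimizer(model="anthropic/claude-3-haiku-20240307")
local_optimizer = MiproOptimizer(model="ollama/llama3")  # locally hosted via Ollama
```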
Best Practices
1. Task Complexity Assessment
   - Use MiproOptimizer for tasks requiring multi-step reasoning or tool use
   - Consider simpler optimizers for single-prompt tasks
   - Evaluate whether your task benefits from DSPy’s programmatic approach
2. Tool Configuration
   - Provide clear, detailed tool descriptions (see the docstring sketch after this list)
   - Include example usage in tool descriptions
   - Ensure tool parameters are well-defined
3. Dataset Preparation
   - Include examples of tool usage in your dataset
   - Ensure examples cover various tool combinations
   - Include edge cases and error scenarios
4. Optimization Strategy
   - Start with a reasonable num_candidates (5-10)
   - Monitor optimization progress
   - Adjust based on task complexity
5. Evaluation Metrics
   - Choose metrics that reflect tool usage success
   - Consider composite metrics for complex tasks
   - Include task-specific evaluation criteria
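As referenced in the Tool Configuration item above, here is one way to write a tool with a clear, detailed description. The exact format expected by task_config.tools may differ, so treat this as a sketch of the documentation style rather than a guaranteed interface:

```python
# The point is the docstring: purpose, parameters, and an example call,
# all spelled out so the optimizer (and the LLM) can reason about usage.
def currency_convert(amount: float, from_code: str, to_code: str) -> str:
    """Convert an amount of money between two currencies.

    Args:
        amount: The value to convert, e.g. 100.0.
        from_code: ISO 4217 code of the source currency, e.g. "USD".
        to_code: ISO 4217 code of the target currency, e.g. "EUR".

    Example:
        currency_convert(100.0, "USD", "EUR") -> "92.30 EUR"
    """
    raise NotImplementedError  # call your FX service here
```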
Research and References
- MIPRO Paper - Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs
- DSPy Documentation
- ReAct: Synergizing Reasoning and Acting in Language Models
Next Steps
- Explore other Optimization Algorithms
- Learn about Tool Integration for agent optimization
- Try the Example Projects & Cookbooks for runnable Colab notebooks using this optimizer
- Read the DSPy Documentation to understand the underlying optimization framework