Mipro Optimization

The MiproOptimizer is a specialized prompt optimization tool that implements the MIPRO (Multiprompt Instruction PRoposal Optimizer) algorithm. It is designed to handle complex optimization tasks through multi-agent collaboration and interactive refinement.

How It Works

  1. Multi-agent System

    • Specialized agents for different aspects of optimization
    • Collaborative prompt generation and refinement
    • Distributed evaluation and feedback
  2. Interactive Optimization

    • Real-time feedback integration
    • Dynamic prompt adjustment
    • Continuous learning from interactions
  3. Performance Evaluation

    • Multi-metric assessment
    • Parallel testing capabilities
    • Comprehensive logging
  4. Adaptive Learning

    • Experience-based improvement
    • Context-aware optimization
    • Dynamic strategy adjustment
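
The four steps above can be condensed into a toy propose-evaluate-refine loop. This is an illustrative sketch only, not the library's implementation: the "agents" here are simple string-rewriting functions, and the metric is a stand-in for a real evaluation.

```python
def mipro_style_loop(seed_prompt, evaluate, num_agents=3, rounds=5):
    """Toy sketch: several 'agents' (here, fixed rewrite strategies)
    each propose a prompt variant per round; the best-scoring variant
    is kept and becomes the starting point for the next round."""
    # Hypothetical rewrite strategies standing in for specialized agents.
    agents = [
        lambda p: p + " Be concise.",
        lambda p: p + " Think step by step.",
        lambda p: p + " Cite the source field.",
    ][:num_agents]

    best_prompt, best_score = seed_prompt, evaluate(seed_prompt)
    for _ in range(rounds):
        # Each agent proposes a refinement of the current best prompt.
        candidates = [agent(best_prompt) for agent in agents]
        scored = [(evaluate(c), c) for c in candidates]
        top_score, top_prompt = max(scored)
        if top_score > best_score:  # keep improvements only
            best_score, best_prompt = top_score, top_prompt
    return best_prompt, best_score

# Toy metric: reward prompts that ask for step-by-step reasoning.
score = lambda p: ("step by step" in p) + len(p) / 1000
prompt, s = mipro_style_loop("Answer the question.", score)
```

The real optimizer replaces the fixed rewrites with LLM-driven proposal agents and the toy metric with your configured evaluation metric.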

Configuration Options

Basic Configuration

from opik_optimizer import MiproOptimizer

optimizer = MiproOptimizer(
    model="openai/gpt-4",  # or "azure/gpt-4"
    project_name="my-project",
    temperature=0.1,
    max_tokens=5000,
    num_threads=8,
    seed=42
)

Advanced Configuration

optimizer = MiproOptimizer(
    model="openai/gpt-4",
    project_name="my-project",
    temperature=0.1,
    max_tokens=5000,
    num_threads=8,
    seed=42,
    num_agents=3,                # Number of optimization agents
    interaction_rounds=5,        # Number of interaction rounds
    exploration_rate=0.3,        # Exploration vs. exploitation balance
    feedback_weight=0.7,         # Weight of feedback in optimization
    convergence_threshold=0.01   # Optimization convergence threshold
)
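
The roles of exploration_rate and convergence_threshold can be illustrated with a generic epsilon-greedy search. The parameter names mirror the configuration above, but the loop itself is a conceptual sketch, not the optimizer's internals.

```python
import random

def epsilon_greedy_search(candidates, evaluate, exploration_rate=0.3,
                          convergence_threshold=0.01, max_rounds=100, seed=42):
    """With probability exploration_rate, jump to a random candidate
    (exploration); otherwise step to a neighbour of the current best
    (exploitation). Stop early once an accepted improvement is smaller
    than convergence_threshold."""
    rng = random.Random(seed)
    best_i = 0
    best_score = evaluate(candidates[best_i])
    for _ in range(max_rounds):
        if rng.random() < exploration_rate:
            i = rng.randrange(len(candidates))  # explore anywhere
        else:
            i = best_i + rng.choice([-1, 1])    # exploit: local step
            i = max(0, min(i, len(candidates) - 1))
        score = evaluate(candidates[i])
        improvement = score - best_score
        if improvement > 0:
            best_i, best_score = i, score
            if improvement < convergence_threshold:
                break  # marginal gain only: treat as converged
    return candidates[best_i], best_score

# Toy objective: the best "candidate" is the value closest to 13.
best, best_score = epsilon_greedy_search(list(range(20)),
                                         lambda x: -abs(x - 13))
```

A higher exploration_rate samples more widely (useful for diverse prompt spaces); a lower one converges faster but risks local optima.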

Example Usage

from opik_optimizer import MiproOptimizer
from opik.evaluation.metrics import LevenshteinRatio
from opik_optimizer import (
    MetricConfig,
    TaskConfig,
    from_llm_response_text,
    from_dataset_field,
)
from opik_optimizer.demo import get_or_create_dataset

# Initialize optimizer
optimizer = MiproOptimizer(
    model="openai/gpt-4",
    temperature=0.1,
    max_tokens=5000
)

# Prepare dataset
dataset = get_or_create_dataset("hotpot-300")

# Define metric and task configuration (see docs for more options)
metric_config = MetricConfig(
    metric=LevenshteinRatio(),
    inputs={
        "output": from_llm_response_text(),  # Model's output
        "reference": from_dataset_field(name="answer"),  # Ground truth
    }
)
task_config = TaskConfig(
    type="text_generation",
    instruction_prompt="Provide an answer to the question.",
    input_dataset_fields=["question"],
    output_dataset_field="answer"
)

# Run optimization
results = optimizer.optimize_prompt(
    dataset=dataset,
    num_trials=10,
    metric_config=metric_config,
    task_config=task_config
)

# Access results
print(f"Best prompt: {results.best_prompt}")
print(f"Improvement: {results.improvement_percentage}%")
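
The LevenshteinRatio metric scores each model output against the reference answer. A minimal pure-Python version of the idea, assuming the common normalisation 1 - distance / max length (the library's metric may normalise slightly differently):

```python
def levenshtein_distance(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance
    (insertions, deletions, substitutions, each cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def levenshtein_ratio(a: str, b: str) -> float:
    """Similarity in [0, 1]; 1.0 means an exact match."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein_distance(a, b) / max(len(a), len(b))
```

Because the ratio is continuous rather than exact-match, the optimizer gets a useful gradient-like signal even when outputs are only partially correct.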

Model Support

The MiproOptimizer supports all models available through LiteLLM. For a complete list of supported models and providers, see the LiteLLM Integration documentation.

Common Providers

  • OpenAI (gpt-4, gpt-3.5-turbo, etc.)
  • Azure OpenAI
  • Anthropic (Claude)
  • Google (Gemini)
  • Mistral
  • Cohere

Configuration Example

optimizer = MiproOptimizer(
    model="anthropic/claude-3-opus",  # or any LiteLLM-supported model
    project_name="my-project",
    temperature=0.1,
    max_tokens=5000
)

Best Practices

  1. Agent Configuration

    • Start with 2-3 agents for simple tasks
    • Increase agents for complex problems
    • Monitor agent interactions
  2. Interaction Strategy

    • Balance exploration and exploitation
    • Use appropriate feedback weights
    • Monitor convergence metrics
  3. Performance Tuning

    • Adjust num_threads based on resources
    • Optimize interaction rounds
    • Fine-tune exploration rate
  4. Resource Management

    • Monitor memory usage
    • Balance agent count and performance
    • Optimize parallel processing
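
As a rough mental model for the num_threads setting, candidate evaluation maps naturally onto a thread pool; this generic sketch (not the optimizer's implementation) shows why threads help when each evaluation is an I/O-bound LLM call:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_in_parallel(candidates, evaluate, num_threads=8):
    """Score every candidate concurrently. ThreadPoolExecutor.map
    preserves input order, so results line up with candidates."""
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        scores = list(pool.map(evaluate, candidates))
    return list(zip(candidates, scores))

# Toy usage with a trivial scoring function.
results = evaluate_in_parallel(["a", "bb", "ccc"], len, num_threads=2)
```

Since the work is dominated by network latency rather than CPU, more threads usually means higher throughput, up to provider rate limits.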

Research and References

Next Steps